Title:                  Europe-Swas-Artificial-Intelligence
Moderator:              HERON::BUCHANAN
Created:                Fri Jun 03 1988
Last Modified:          Thu Aug 04 1994
Last Successful Update: Fri Jun 06 1997
Number of topics:       442
Total number of notes:  1429
192.0. "FWD: A good trip report on the 12th Intern'l Conf o" by HERON::ROACH (TANSTAAFL !) Wed Apr 18 1990 12:59
Printed by: Pat Roach Document Number: 011375
--------------------------------------------------------------------------------
I N T E R O F F I C E M E M O R A N D U M
Date: 13-Apr-1990 09:32pm CET
From: PAPAGEORGE
PAPAGEORGE@AITG@HERON@MRGATE@HUGHI
Dept:
Tel No:
TO: ROACH@A1NSTC
Subject: FWD: A good trip report on the 12th Intern'l Conf on Software Eng. Also some very good points about where the US is vs Europe in development approach
From: SELECT::STRINGER "13-Apr-1990 0756" 13-APR-1990 08:00:25.01
To: @AITC.DIS
CC: STRINGER
Subj: A good trip report on the 12th Intern'l Conf on Software Eng. Also some
very good points about where the US is vs Europe in development approach
From: TPSYS::HUTCHINGS "TONY HUTCHINGS, ISB S'WARE ENG. TECH. CENTER, 227-4381
11-Apr-1990 1244" 11-APR-1990 14:00:58.20
To: billk,mts%"core::bill strecker",@info_t
CC:
Subj: Trip reports from the 12th. Int. Conference on S'ware Engineering
+---------------------------+ TM
| | | | | | | |
| d | i | g | i | t | a | l |        I N T E R O F F I C E   M E M O R A N D U M
| | | | | | | |
+---------------------------+
TO: SETC Monthly distribution DATE: 4/6/90
CASE Cttee FROM: Tony Hutchings
CASE Forum DEPT: ISB-SSE-SETC
Complex S'ware Course attendees EXT: 227-4381
SSE MC LOC/MAIL STOP: TAY1-2/H4
FILE: icse.trip
Trip Report - Nice, France, March 27-30, 1990
----------------------------------------------
12th International Conference on Software Engineering
-----------------------------------------------------
During the week of March 26-30th, I attended 2 functions in Nice,
France - a program committee meeting for the 2nd. international conference
on Software Development Environments & Factories (London, Sept. 1990) and the
12th. ICSE. The highlights of the week were:
* This was an excellent opportunity to see where the
rest of the world (emphasis on Europe) stands in
software engineering. If I could summarize it, I
would say there is more basic research going on in
Europe than elsewhere, and more emphasis there on
collaborative ventures and techniques (Frameworks/
Environments, software factories); as ever, point
tool/technique invention is strong in the US, but
often disconnected.
* From ICSE itself, it is clear that what SETC is
advocating for a software metrics & measurement
system (full life-cycle, environmental & defect-
based metrics, fully integrated with the development
environment), along with defect detection techniques
like Formal Inspections and the use of an analysis
tool like SPR's Checkpoint, would, if adopted,
represent a world-class measurement environment.
* Our 2 DEC papers for the proposed (on again, off
again) conference on Software Development
Environments & Factories were both accepted.
* Nice is a wonderful city!
Low-lights of the week:
* The Europeans are, in my estimation, 1-2 years ahead
of us in the US in implementing/deploying integrated
tool environments/frameworks (IPSE's/PCTE's).
* They are also much more aware, and value more highly,
process in general and advanced formal methods.
* The weather was awful (even cold), and international
flying is intolerable when terrorist bomb scares
send security forces into panic.
1. The 2nd. conference on Software Development Envs. & Factories
-------------------------------------------------------------
[Footnote: This conference, in its proposed form, is under-
going some changes, due to insufficient high quality
papers. Both the DEC papers - from Chris Nolan on CATIS,
and from myself and Lou Cohen on a Model Software Development
Environment - have been accepted. It now looks like there will
be a smaller conference in Berlin, same date].
The constitution of this program committee was fascinating - 4
from the UK, including the Chairman Brian Warboys, Professor
of Software Engineering at Manchester University; 4 from
France, 3 from Germany, and myself from the US. I learnt a lot
about the state of European collaboration in S'ware Development
Environments and CASE in general (see also the Keynote speech
from ICSE) and Software Factories in particular. The "hot
project" in Europe these days is clearly the Berlin-based
ESF - Eureka Software Factory - a large, 50 person project,
50% funded from the EC and 50% from European industry, with
all major EC nations participating. [A `Software Factory' is
a fully integrated CASE environment and tool set, based on
solid (international) standards (eg: PCTE+ or IPSE 2.n - tbd),
fully metricized and, I believe, founded on the continuous
improvement model of Deming. This is exactly what Lou and I
were proposing in our paper to the conference, though ours is
more pragmatic in nature.] The Europeans are also heavily into
formal methods (no surprise to me here) and the VDM (Vienna
Definition Method) of Cliff Jones at Manchester University
is the most popular. While they all admit they don't know how
well this scales up to very large systems, their strong
foundation in the understanding of complexity theory can only
do them good in the long run.
A message for DEC in general, regarding CASE tools and CASE
environments - since we are so far behind: If we don't have
it now, and it's not strategic to our business, "buy it, don't
make it"; instead, concentrate on what no-one else has, and on
building systems engineering skills to put together integrated
solutions for customers. Late, albeit better engineered "Me
Toos" rarely if ever make money.
2. The 12th. International Conference on Software Engineering (ICSE)
------------------------------------------------------------------
Keynote speech - David Talbot, European Commission, Brussels
------------------------------------------------------------
Most of Talbot's speech was content-free and political in
nature, but there were a few things to learn from and be
concerned about: "ESPRIT II" (1988-1992) is well under way
and together with ESPRIT I will have expended 5 billion ECU
(European Currency Units). At roughly $1.20 per ECU, this is
an investment of $6B over 8-10 years! It represents over
160 projects, involving over 6000 people from across Europe.
The principal work content of ESPRIT II centers around:
Environments (PCTE+/IPSE, Software Factories [ESF],
knowledge-based systems, standards)
Formal Methods (VDM, Reusability)
Quality (incl: Maintainability, Modelling, Metrics)
Productivity
Technology Transfer
The European Information Processing market is put 2nd only to
the US, and growing faster.
In general, the EC, with collaboration and consortia now by-
words in business, presents, I believe, the biggest threat to
US dominance in the Information Processing market in the 50
years of the industry's short history.
3. ICSE Program in general
-----------------------
A look at the major session topics gives one a very good idea
of what is "hot" in the world of software engineering this year:
Process modelling
Formal Methods & Verification
Metrics
Real-time & Reactive systems
Reliability
Re-Engineering
Object Management Systems
Prototyping
Technology Transfer
Systems Engineering
Configuration Management
In addition, there were 2 days of Tutorials prior to the
Conference, on topics such as Requirements Definition,
Modelling, Measuring and Managing the Software Development
Process, Computer Security, Software Reuse, Software
Performance Engineering, Software Reliability Measures, Human
Computer Interaction.
I found it odd and disturbing that there were only 2 DEC delegates
(myself and Carlos Borgialli, both from SSE). Carlos has also written
a short and insightful trip report, appended below.
3.1 Recent Advances in Software Measurement (Vic Basili, Maryland Univ)
--------------------------------------------------------------------
Vic gave a very good tutorial on this very important topic.
He contrasted the last 2 decades (ad hoc, code defect-only-
based, point techniques) with the present state of affairs
(full life-cycle, integrated, customizable). The Metrics &
Measurement system must be tailored to the type of life cycle
model and application domain. He categorized the styles of
measurement today into 3 basic types:
GQM (Goal Question Metrics) paradigm
QFD (Quality Function Deployment) paradigm
SQM (Software Quality Metrics) paradigm
Although he didn't do QFD full justice by any means (he sees it
almost exclusively as applying only to the Product dimension
and not to Process), he was otherwise accurate in positioning
it as the principal way of getting the customers' views on
functional priorities tied to development's ability to deliver.
SQM he saw principally as an assessment vehicle; GQM (no prize
for guessing which method he prefers!) relies on discovering/
understanding goals via an interviewing (Questionnaire) process,
then characterizing the process, and finally setting metrics.
This process is, in fact, exactly what Capers Jones' SPR has
built its Measurement process on (and which SETC is advocating
strongly also): Interview projects to establish the quality/
productivity of their environment and processes (the "Soft"
factors) and their productivity/quality ("Hard" data)
baselines; determine the set of metrics of value to each
organization from this data (for the whole life cycle) and then
integrate the measurement process into the complete software
development process. He also advocates strongly the use of
Deming's PDCA (Plan-Do-Check-Act) model of continuous
improvement.
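To make the GQM flow concrete, here is a minimal sketch (in
Python, purely illustrative - the goal, questions and metrics
below are hypothetical examples, not Basili's or SPR's actual
instruments):

    # Minimal GQM (Goal-Question-Metric) sketch: a goal is refined
    # into questions, and each question into the metrics that answer
    # it.  All content here is made up for illustration.

    gqm = {
        "goal": "Improve delivered quality of release X",
        "questions": {
            "Where are defects introduced?": [
                "defects found per life-cycle phase",
                "defects per KSLOC at code Inspection",
            ],
            "Are Inspections effective?": [
                "major defects found per Inspection",
                "defect escape rate to system test",
            ],
        },
    }

    def list_metrics(model):
        """Flatten the goal tree into the metric set to be collected."""
        for question, metrics in model["questions"].items():
            for metric in metrics:
                yield question, metric

    for question, metric in list_metrics(gqm):
        print(f"{question:35s} -> {metric}")

The point of the structure is that no metric is collected unless
it answers a question that serves a stated goal.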
SETC has for some time been advocating the use of Capers Jones'
process and tool (Checkpoint), the use of Contextual Inquiry,
Pugh Matrices and QFD, Formal Inspections and a base set of
defect-oriented metrics. I now firmly believe that any group
which adopts this process will have a world-class measurement
system (and more).
3.2 Environments
-------------
The first paper (from U. Mass, part of the US "Arcadia"
consortium, the US' "answer" to Europe's ESF Software
Factory?) was all about Specification Level interoperability.
They define a constraint-based intermediate representation
(Unified Type Model - UTM), through which they can perform
language mappings (thus allowing interoperation of systems
developed in different programming languages) and by choosing
a standard (RPC) distributed procedure calling mechanism,
can support interoperation across multiple architectures/
machines. The paper showed good awareness of what inter-
operation in a heterogeneous environment is all about, and
having a specification level representation to reason about
could be useful, though I felt in the end that with NAS, we
are potentially already well ahead of this work.
The second paper was from CGE in France ("Design Decisions
for the Incremental Adage Framework"). Adage is a product
and was demonstrated at one of the booths at ICSE. It is
essentially a graphical, object-based, directed graph approach
to modelling (and implementing) software development environ-
ments. Basically, it allows you to represent your methodology
(and therefore tool to tool interconnection/flow) as a set of
nodal graphs (which, incidentally, it stores as persistent
objects and thus has a partial C++ object management system!).
Having thus represented your tool process, you can recall it,
modify it, invoke tools from it (albeit interpretively).
Another side product is GDL - Graphical Description Language -
which is also one of the first object-oriented query languages
I have seen; Adage does implement a rudimentary OODBMS (though
they disclaim it is a real one); nevertheless, it is one of
the first (Trellis may be another) OO implementations with a
persistent data store.
There is clearly much similarity between this work and parts
of PCTE+ and there was an embarrassing question from the
Session chairman (an ESF member!) about this. The outcome may
well be a further acceleration of the European IPSE.
3.3 Metrics & Reliability
----------------------
The first paper was from Toshiba on their ESQUT Quality
Measurement system, which covers both design metrics and a code
measurement system. There is nothing new in this (they
base their code complexity metrics on Tom McCabe's Cyclomatic
Complexity model - which, incidentally, we also recommend - see
ISB's "Developing Software In A Complex Environment" course).
However, as ever, the Japanese have taken things a step further
and built a total quality control (TQC) system out of it.
They have come up with statistical (mean, standard deviation
and quality control limit) values which give them curves
within which goodness lies. The only real weakness in their
approach is that it is only LOC (Lines Of Code) based. However,
while at ICSE, I came across (and bought) a book on Japanese
Software Engineering Perspectives in which it is clear they
are discovering Function Points and once they espouse this
technique (which I feel they will), they could have a very
good Quality Measurement system for software.
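For reference, McCabe's measure is V(G) = E - N + 2P over the
control-flow graph (E edges, N nodes, P connected components) -
equivalently, one plus the number of decision points. A minimal
sketch; the graph below is a hypothetical example, not ESQUT's
actual representation:

    # McCabe's Cyclomatic Complexity, V(G) = E - N + 2P, computed
    # from an explicit control-flow graph.

    def cyclomatic_complexity(edges, nodes, components=1):
        """V(G) = E - N + 2P for a control-flow graph."""
        return len(edges) - len(nodes) + 2 * components

    # Control-flow graph of:  if a: x() else: y(); while b: z()
    nodes = ["entry", "if", "x", "y", "while", "z", "exit"]
    edges = [("entry", "if"), ("if", "x"), ("if", "y"),
             ("x", "while"), ("y", "while"),
             ("while", "z"), ("z", "while"), ("while", "exit")]

    print(cyclomatic_complexity(edges, nodes))   # 8 - 7 + 2 = 3

Two decision points (the if and the while) give V(G) = 3, which
agrees with the graph formula.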
The second paper in this session was from AT&T - "Application
of Software Reliability Modelling to Product Quality and Test
Processes". It showed the results of deriving an operational
reliability model for a 150K line network (electronic switch)
monitoring system. From this carefully developed model (it
fully characterizes the anticipated behavior of the system
under load conditions), they developed 2 Poisson distributions
(1 logarithmic, 1 exponential) predicting the effects of
critical system failures over time; these helped yield a set
of scripts for stability testing (in our terms, Load Testing).
The theory looks excellent - in practice, they admitted a
blunder: the operational model, which should have been the
one used by their first customer, was in fact not used; instead
they used the system for training purposes and it had thus a
very different profile of use and loading and uncovered bugs
which they had not tested for. Had they known this a priori,
they could easily have developed the right set of scripts
and tested for these behaviors and probably avoided all the
system crashes they encountered.
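The paper's actual fitted parameters are not reproduced here,
but the two shapes they describe match the standard exponential
(Musa basic) and logarithmic-Poisson (Musa-Okumoto) mean-value
functions. A minimal sketch, with made-up parameter values,
assuming those standard forms:

    import math

    # Expected cumulative failures mu(t) after t units of test
    # execution, under the two standard model shapes.  Parameter
    # values below are invented for illustration only.

    def mu_exponential(t, lam0, v0):
        """Musa basic model: lam0 = initial failure rate,
        v0 = total expected failures."""
        return v0 * (1.0 - math.exp(-lam0 * t / v0))

    def mu_logarithmic(t, lam0, theta):
        """Musa-Okumoto logarithmic Poisson model."""
        return (1.0 / theta) * math.log(1.0 + lam0 * theta * t)

    for t in (10, 100, 1000):        # CPU-hours of execution
        print(t,
              round(mu_exponential(t, lam0=5.0, v0=150.0), 1),
              round(mu_logarithmic(t, lam0=5.0, theta=0.05), 1))

The AT&T lesson stands regardless of model shape: the model is
only as good as the operational profile it is calibrated against.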
4. "Structure of Distributed Programs" (Barbara Liskov, MIT)
----------------------------------------------------------
This was an invited, plenary talk and turned out to be a
beautifully clear exposition of what everyone in TP systems
already knows - if you observe the fundamentals of atomicity
of objects, and the notion of transactions, developing a
distributed application is much like developing a centralized
(serialized) one. Use the client-server model, determine the
degree of data consistency you require (she here gave us a tour
through 2 phase commit [2PC] and concurrency management) and
the degree of constraints necessary (eg: must all data be
delivered instantly or is deferred delivery acceptable?); use
some form of message-passing method invocation (eg: RPC with
messages) to implement the client-server invocation interface,
...and there you have it!
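For those outside TP, the two-phase commit she toured fits in a
few lines. A minimal sketch, with in-process objects standing in
for real RPC-connected servers (all names hypothetical):

    # Two-phase commit (2PC): the coordinator asks every participant
    # to prepare (phase 1, voting) and commits only on a unanimous
    # yes; otherwise all participants abort.

    class Participant:
        def __init__(self, name, can_commit=True):
            self.name, self.can_commit = name, can_commit
        def prepare(self):           # phase 1: vote yes/no
            return self.can_commit
        def commit(self):            # phase 2a: make changes durable
            print(self.name, "committed")
        def abort(self):             # phase 2b: roll back
            print(self.name, "aborted")

    def two_phase_commit(participants):
        if all(p.prepare() for p in participants):
            for p in participants:
                p.commit()
            return True
        for p in participants:
            p.abort()
        return False

    two_phase_commit([Participant("orders"), Participant("billing")])

Her point is precisely that once atomicity is guaranteed at this
level, the distributed program can be reasoned about much like a
serialized one.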
This was really a very well delivered talk, simple (Barbara
Liskov, for those that don't know her work, encourages data
and functional abstraction as a key building block of her
methodology) and easy to follow.
It was a pity she had not written all of this up in a paper -
she admitted this was the first time she had presented these
concepts all together in one talk - it would serve as a good
text for people entering the distributed applications domain.
On the other hand, for those of us in the GMA, Barbara is only
just "down the road" in Cambridge..... (I do now have a copy
of her presentation material).
5. Recent Advances in Object Management Systems
   (Francois Bancilhon, GIP Altair, France)
------------------------------------------------
This was a good (and amusing) tutorial on the last 20 years of
DBMS development and an approximation of where OO systems are
today. He came up with 6 features that any respectable DBMS
must possess:
Persistence (of objects, after program completes)
Disk Management
Data Sharing (among many users)
Data Reliability (referential integrity, etc)
Data Security (access, privileges, roles, etc)
Ad Hoc Query capability (a la SQL)
He then characterized various existing (product and prototype)
systems against these and a long list of other attributes. The
bottom line (I left just before he had finished or taken
questions) appeared to be:
Complex Object Oriented systems "feel right" to the DB
gurus as the new generation (they add Encapsulation
[of objects and methods], Types/Classes, Inheritance,
Overloading and late binding, to existing DBMS models).
However, it will take a long time yet to fully
establish OODBMS' and nothing exists (in his opinion)
of production quality yet.
[Incidentally, DEC's RDBMS didn't even make his list of
existing relational implementations. I guess the world looks
different from France?!]
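As a concrete illustration of the first item on his list -
persistence of objects after the program completes - here is a
minimal sketch using Python's shelve module as a stand-in object
store. The Node class loosely echoes Adage's persistent graph
nodes; everything here is hypothetical, and note that it offers
none of his other five features, which is exactly his point
about how far OODBMSs still have to go:

    import shelve

    class Node:
        """A toy persistent object, like a node in a methodology graph."""
        def __init__(self, label, successors=()):
            self.label = label
            self.successors = list(successors)

    # First program run: create objects and store them.
    with shelve.open("objstore") as db:
        db["compile"] = Node("compile", successors=["test"])
        db["test"] = Node("test")

    # A later run: the objects outlive the program that made them.
    with shelve.open("objstore") as db:
        node = db["compile"]
        print(node.label, "->", node.successors)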
6. Getting started on Metrics - JPL Productivity & Quality (M. Bush)
------------------------------------------------------------------
Firstly, JPL (Jet Propulsion Labs) should be praised for
attempting this project - a 15 year retrospective calibration/
data collection project. They attempted to collect data on
source lines of code developed (new code only), K$'s spent,
workmonths expended (Source LOC's/wmonth), and defects/KSLOC.
The bad news for them is two-fold:
(i) The productivity and quality appear awful - an
average of 10 SLOC's per workmonth for flight
systems - somewhere between 3 and 10X lower than
comparable systems, say at IBM's Houston labs;
the quality was averaging 9 defects per KSLOC
(IBM Houston is claiming .01 defects/KSLOC!).
(ii) They are monitoring the wrong metric! They should
be monitoring "delivered functions/features", not
raw lines of code. My example:
Project A delivers 10,000 lines of
code, with 100 defects found and fixed
and a further 10 defects delivered to
the customer.
Project B delivers the same
functionality in 1000 lines of code,
with 30 defects found and fixed and 3
defects delivered to the customer.
By a LOC paradigm, Project A is more productive and
produces higher quality software, proportionally.
By a "Feature" paradigm, Project B is 10X more
productive and has 3X higher quality.
You choose the better metric.
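The arithmetic, worked through (the "features" count is a
hypothetical stand-in for any functional size measure, e.g.
Function Points; both projects deliver the same functionality):

    # LOC paradigm vs. feature paradigm on the two projects above.

    projects = {"A": dict(loc=10_000, delivered=10, features=100),
                "B": dict(loc=1_000,  delivered=3,  features=100)}

    for name, p in projects.items():
        print(name,
              "| delivered defects/KLOC:", p["delivered"] / (p["loc"] / 1000),
              "| LOC per feature:", p["loc"] / p["features"],
              "| delivered defects/feature:", p["delivered"] / p["features"])

    # By LOC, A looks better: 1.0 vs 3.0 delivered defects/KLOC.
    # By features, B does the same job with 1/10th the code and
    # roughly 1/3rd the escaped defects.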
7. Experience Reports
--------------------
7.1 Software Validation (SACEM - GEC, France)
-------------------------------------------------------
A very specialized project which both monitors and
controls the Paris Metro (Subway) system. Their target
was zero defects (in fact, 10^-9 system level error
rate, and 10^-12 component level error rate). Only
20K lines of code (in Modula 2) but their validation
process was interesting:
Had 2 different specs.
Used 2 independent validation teams
Used Formal Inspections
Used Formal Assertions & Proofs (in the safety
critical parts, on inputs/outputs and
loop invariants)
Used a simulator with 350 different scenarios
attempting to break the system
Result: since installing it operationally in Feb. 1988,
they have found only 1 software bug (non
safety-critical), which turned out also to be in the
spec., not in the design.
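Their assertion-and-proof style can be imitated, informally,
with executable assertions. A minimal sketch - in Python rather
than their Modula 2, and with a made-up braking calculation; only
the precondition/postcondition/loop-invariant style is the point:

    # Informal flavour of SACEM-style assertions on a safety-critical
    # routine.  The numbers and the braking model are invented.

    def stopping_distance(speed_kmh, decel_ms2):
        assert 0 <= speed_kmh <= 120, "precondition: speed in range"
        assert decel_ms2 > 0, "precondition: positive deceleration"
        v = speed_kmh / 3.6                  # km/h -> m/s
        d = v * v / (2 * decel_ms2)          # kinematics: v^2 / 2a
        assert d >= 0, "postcondition: non-negative distance"
        return d

    def worst_stopping_distance(distances, block_length_m):
        worst = 0.0
        for d in distances:
            worst = max(worst, d)
            assert worst >= d    # loop invariant: worst bounds all seen
        assert worst <= block_length_m, "postcondition: fits in block"
        return worst

    print(worst_stopping_distance(
        [stopping_distance(s, 1.2) for s in (40, 60, 80)], 300))

In SACEM these obligations were discharged by proof rather than by
runtime checks, but the discipline of stating them is the same.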
7.2 Graphite Meta Tool at Software Design & Analysis
--------------------------------------------------
A midnight project to develop a simple hierarchical
data management system (HDM) based on the Graphite
Meta Tool (developed at U Mass as part of Arcadia
again). The spec. was written in Graphite's
specification language (GDL) - 52 lines. He reused
226 lines of previous code (in ADA, from a menuing
system), hand-crafted 456 lines of ADA and Graphite
generated the rest - 1774 lines of ADA. The resultant
system worked within 1 week.
A good practical example of prototyping, using a tool
generator. If tools like this were tied into the
overall CASE environment - linked to the design and
composition (building) tools - code generation would become
more of a reality.
7.3 "Use of Formal Inspections at JPL" (Marilyn Bush)
--------------------------------------------------
The second JPL paper by Marilyn Bush. This gives an
excellent existence proof of the value of Inspections.
(By the way, ISB courses on Formal Inspections
(Cohen, Huntwork) look to be at least as good as
JPL's, though JPL have added a regular 2 hour overview
for managers; Lou Cohen has in fact piloted such an
addition here in TAY1 only a couple of weeks ago.)
Some statistics from JPL's Inspection records:
They find on average 4 major defects/Inspection
and 12 minor defects/Inspection
They get through 40 pages in a 2-hr Inspection
They save $25K per Inspection
(They have done some 300 Inspections and cumulatively
believe they have saved at least $7M)
IBM Houston (which someone claimed SEI assessed at a
level 5 - best/optimized) is credited with shipping
.01 defects/KLOC per product (which is Space Shuttle
software and the like) and they use Formal Inspections
everywhere.
8. "High Performance Computing & S'ware Eng" (Jeff Squires, DARPA)
----------------------------------------------------------------
A somewhat crazy (he used 4 projectors in parallel!) tour
through all DARPA-initiated supercomputing/parallel computing
efforts:
Connection Machine - current sustained performance @
4-8 gigaflops
Touchstone - (Intel i860-based system). 1992
target of 100-200 gigaflops
iWarp - CMU's systolic-array system for
DSP. Summer 1990 - 1 gigaflop
Nectar - 100 mbit 'gigaswitch' - expected
1991. Fast enough to handle
network page faulting.
Operating Systems - Pushing CMU's MACH ("Unix" Kernel)
and TMACH (Trusted MACH - even
smaller kernel).
He did show some microprocessor projections from Intel: by the
year 2000, a 1 inch square chip with 50-100 million transistors
@ 250 MHz (4 CPU's), yielding 750 MIPS peak and 500 mflops
in either CMOS or BiCMOS (it was amusing to hear of a projected
2 million transistor self-test region of the chip!).
Not many messages for software, except:
Software Engineering must grow up to become Systems
Engineering
Parallelism is here - exploit it!
9. Experience with Formal Methods
-------------------------------
(Formal Methods are basically mathematically [algebraic] based
languages used to reason about the correctness of
specifications and designs).
Panelists were from HP Labs (UK), UNISYS, ETL (Japan), SEI
(Larry Druffel, SEI Director), and Denmark (Dines Bjørner).
Examples of languages/notations:
Z (used by IBM Hursley [UK] on CICS - 40K
LOC's described in Z)
VDM (Vienna Definition Method) - invented by
Cliff Jones of Manchester Univ.
LOTOS
Prolog (Japan)
Conclusion: Little penetration yet - niche areas only (mainly
in defense, safety, security); no large scale
project deployment (even the IBM example
represented < 5% of the total CICS system); few
production quality tools yet; not enough basic
training available in it yet. Europe (Universities,
ESPRIT) is putting more effort/emphasis on it than
anywhere else.
For specific areas of a system (eg: secure parts,
fault tolerant parts), it may be worth looking into
this approach, BUT make sure you integrate the
process/method into the rest of your software
development process.
10. Panel on Systems Engineering
-----------------------------
A fairly disappointing session that ranged from arguing there's
no such thing as Systems Engineering, to saying that software
engineers are not qualified (inadequate math, physics and EE
background) to be systems engineers, to Martin Thomas'
(Praxis Inc) excellent comment that those that ignore the
important lessons of the last 25 years of software engineering
should not be in the industry. Squires from DARPA produced a
useful chart showing how we deal with systems engineering
problems:
Unprecedented |  "Once understood"       "Hardest"
              |          \                  /
Requirements  |           \                /
              |            v              v
Precedented   |          "Easy" <---- Enabling Technology
              +-------------------------------------------
                    Static               Dynamic
                             Environment
The idea being to push the hardest problems (dynamic environ-
ment, brand-new problem space) in the direction of providing
"enabling technologies" for users (systems applications
developers) to solve their domain problems in an "easy" manner,
which in turn allows them to solve harder problems and so on.
Given the non-deterministic nature of many system problems
(Guio from France quoted a subway control problem (using,
incidentally, Petri nets, a popular European state transition
modelling technique) that went wrong), Squires reinforced the
notion of modelling and simulation, letting users try out the
model to see where it fails to meet requirements. Tools do
exist (eg: i-Logix' STATEMATE) to permit this approach.
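For readers who haven't met them: a Petri net models a system as
places holding tokens and transitions that fire when every input
place is marked. A minimal sketch of the firing rule; the toy
track-segment interlock below is hypothetical, not Guio's system:

    # Minimal Petri-net firing rule.  A transition is enabled when
    # all its input places hold a token; firing consumes one token
    # from each input and produces one in each output.

    marking = {"train_waiting": 1, "segment_free": 1,
               "train_in_segment": 0}

    transitions = {
        "enter_segment": (["train_waiting", "segment_free"],
                          ["train_in_segment"]),
        "leave_segment": (["train_in_segment"], ["segment_free"]),
    }

    def enabled(name):
        inputs, _ = transitions[name]
        return all(marking[p] > 0 for p in inputs)

    def fire(name):
        assert enabled(name), name + " is not enabled"
        inputs, outputs = transitions[name]
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] += 1

    fire("enter_segment")
    print(marking)   # segment occupied, no longer free

Exhaustively exploring the reachable markings of such a net is
what lets one check that a bad state (two trains in one segment)
can never occur.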
11. Panel on Technology Transfer
--------------------------------
A few valuable ideas from this: SEI's notion of Technology
Advocate and Technology Receptor groups within a life cycle
of technology transfer that looks like this:
Awareness & Tracking (Read papers, books,
attend conferences,
networking)
Screening (What's appropriate for
your domain/business)
Evaluation (Beta test tools/
methods)
Adoption (Acquire process/tools)
Installation (1st. uses, pilot)
Institutionalization (Broad, routine use)
Technology Advocates are educators, marketeers, people who are
excited by technology possibilities
Technology Receptors are change agents who screen, evaluate
and help in the adoption/installation of new
technology
[SETC is both Advocate & Receptor in ISB]
Professor Decima of the Univ. of Milan talked about the Cefriel Project,
where joint research is done by students, faculty and industry
representatives, on the university campus, in an open, heterogeneous
environment [they have to, due to sponsors' equipment donations!], but
managed using industrial project management norms. This looked like a
valuable approach to technology transfer.
Finally, Kouichi Kishida from Japan (private software house CEO -
Software Research Associates) talked of a "Technology Playground"
at the center of the organization, where "new and wild ideas" are
tried out by a technology group, which both "pushes" and "pulls"
new technology. It reminded me of SETC's proposed (but as yet
unrealized) internship program.
+---------------------------+ TM
| | | | | | | |
| d | i | g | i | t | a | l | I N T E R O F F I C E M E M O R A N D U M
| | | | | | | |
+---------------------------+
TO: John Manzo DATE: 03-APR-1990
FROM: Carlos Borgialli
CC: Tony Hutchings DEPT: DECtp Software Engineering East
Jim Pepe DTN : 227-4302
Dennis Roberson LOCN: TAY1-2/39Q
SUBJ: 12th International Conference on Software Engineering
The 12th International Conference on Software Engineering was held in Nice,
France, March 26-30, 1990. The theme of the conference was "Building a
Foundation for the Future"; it consisted of 2 days of tutorials, followed
by numerous presentations on different key issues and working sessions.
Tony Hutchings will be writing a detailed report and I will just share my
views and observations. If you need detailed information, feel free to ask
for it.
The most disappointing thing for me was that there were only two
participants from DEC, Tony Hutchings and myself!!! No one represented
DEC Europe; in fact, Valbonne is only 20 or so minutes from Nice.
There was large participation from the USA, France, England, Italy and,
of course, Japan.
The tutorial on Software Reuse: State of the Art and Future Directions was
full. The issue of reusable software is attracting two types of
organizations: those with a lot of code and the need to do something before
maintenance consumes all their effort, and those that would like to start
software projects with long range plans and avoid some of the future
maintenance problems.
There were a lot of representatives from small software houses that are
interested in building "reusable pieces"; they feel that the business will
move toward software warehouses and if you have the right code available
you are going to make a lot of money. The presentation was based on the
experience with the Eureka Software Factory. This project involves several
key European industries and corporations (Matra, Imperial College,
University of Dortmund, British Telecom, Nixdorf, etc.) and will be going
till 1996, a ten year project!!!
They shared some experiences, but it is still too early to make any major
statements about their findings. They have strong motivations for the
re-use of software, such as productivity and quality. They are finding a
lot of inhibitors to the re-use of software; to mention a few:
- Technical
- lack of a well defined re-use process
- lack of a component model
- lack of quality measurements
- lack of standards for integration
...
- Non-Technical
- lack of convincing organizational reasons
- lack of economical reasons (???) (I DON'T AGREE WITH THIS STATEMENT)
- lack of legal agreements
- sociological and psychological issues
...
The good news is that they are beginning the definition of some very
interesting rules for the development of re-usable software.
The plenary session on Structure of Distributed Programs was excellent.
Barbara Liskov (MIT) made a superb presentation about distributed systems.
I think she should write DEC's presentations about distributed systems and
transaction processing!!! The good news about her presentation is that we
are doing quite a few things correctly.
Many other technical presentations addressed issues such as safety,
metrics, definitions, design and architecture, prototyping, etc...
Several experience reports provided some ideas of what some corporations
are doing. The most interesting, for me, was the presentation on work
done at Contel and GTEDS about technology transfer and reusable software.
They are a long way from meeting their goals, but by 1992 they want to build
software with 50% reusable code. Even if they do not reach that goal,
they are already succeeding in that they are moving in the right direction.
OBSERVATIONS:
In general, I think, there is a realization that the 90's are going to be a
decade of major challenges for software engineering, which is presently
trapped between maintenance costs and the push to develop new applications.
The building of reusable software is not going to be cheap; in fact, it is
going to increase the cost of development. Either we treat this as
an investment, or we are going to continue looking at short term goals and
the maintenance cost will eat us alive (just my thoughts!).
The issue of reusable standards should be added to the Phase Review Process
and be a corporation driven activity.
Formal Methods and development tools are now part of almost every
development organization; in fact, all of the demos were CASE tools. The
more rigorous the process, the bigger the successes, according to the
presenters.
Organizations are being re-shaped in order to address the challenges. At
GTEDS and Contel, for instance, a new department called the Re-usable
Software Development Engineering Group offers lots of incentives (money)
according to some metrics.
Also, a participant commented on the latest release of CICS, in which IBM
added approximately 40,000 lines of code. They followed Z(?). The field
test encountered only 4 errors, and of these four errors 3 were in interfaces
that were not defined during this effort. If this is correct, I think we
have a lot of work to do in order to match their accomplishments.
This conference led me to believe that, regarding quality, we are doing some
things correctly in our organization, but we are still far from reaching the
right level. Like many others, we are experimenting to find the right
mixture of methods and tools.
As usual, the lack of information from DEC leads participants to ask us
what is going on. I answered some of the questions and for others I
referred them to the product managers. Somehow, I think, people still do not
understand pieces of our strategy. There were lots of questions about UNIX,
the future of VMS, the VAX 9000 and the VAXft 3000, and in my case about
TP - people are confused about ACMS and DECintact.
In short, it was a good conference for me.
- Carlos