
Conference turris::languages

Title:Languages
Notice:Speaking In Tongues
Moderator:TLE::TOKLAS::FELDMAN
Created:Sat Jan 25 1986
Last Modified:Wed May 21 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:394
Total number of notes:2683

126.0. "do you REALLY want an SDI coded in ADA ? ? ?" by OMEGA::CLARK () Thu Jan 22 1987 12:29

   This is not a note about SDI in particular.  This is a note about language
   reliability in general.  For the sake of argument, I am assuming that
   informed people (does not necessarily equal many of those in government)
   have decided to go ahead with star wars development.  Now we ask ourselves,
   what kind of language do we want for this development project?  That is,
   what kind of language is least likely to permit the deployment of software
   with bugs?  Because SDI is a matter of life/death, not just income/expense
   or convenience, we want, in some sense, the "safest" possible language.
   But, demanding a "safe" language is not very helpful until we are very
   clear about what language "safety" entails.

   I want to distinguish two classes of bugs in deployed software.  The
   distinction is based on whether a code module "has a consistent
   interpretation with respect to a language standard".  We are considering
   here only applications with a single thread of activity, and with no
   asynchronous operations.  The notion of consistent interpretation is
   defined as follows:

	a module X (in the kind of application specified above)
	has a consistent interpretation with respect to language
	standard Y

		IFF

	for all possible input states (a), and for all pairs of
	legal compilers (b1) and (b2) with b1 not equal to b2:
	the externally visible effects of module X given state
	(a) and compiler (b1) equal the externally visible
	effects of module X given state (a) and compiler (b2).

   In the above definition, a "legal compiler" is one which meets the language
   standard Y.  Note that a module X may have a consistent interpretation with
   respect to language (standard) Y and still have a bug.  If the bug causes
   the module to crash (for example, divide by zero), this does not change the
   property of having a consistent interpretation.  (That is, so long as the
   module results in a divide-by-zero fault no matter what compiler is applied
   to the module.)  Consistency of interpretation means only that the
   externally visible effects of a module are the same no matter what compiler
   is used.  If the external effects always are "partial" (because of a
   crash), they still are consistent if the partial effects always are the
   same.
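
   As a concrete illustration (a hypothetical Ada fragment, not from any
   real system): a module can contain an application bug and still have a
   consistent interpretation, because every legal compiler must produce the
   same externally visible effect.

	--  Consistently buggy: for N = 0, every legal Ada compiler raises
	--  the same predefined exception at the same point.  The module has
	--  a consistent interpretation despite its application bug.
	function RECIPROCAL (N : INTEGER) return INTEGER is
	begin
	   return 1000 / N;   -- always faults when N = 0
	end RECIPROCAL;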

   I assume that we are dealing with a software system in which all modules
   have been processed by legal compilers and in which no module has errors
   (as reported by the compiler(s)).  Note:  I am not assuming that a "legal"
   compiler reports all violations of a standard.  I am assuming only that a
   legal compiler meets whatever requirements are expressed by those who
   formulate the standard.  Now, in the kind of software system we are
   concerned with, there are two kinds of bugs:


	application bugs	the effects of module X are consistent
				across all conceivable legal compilers,
				but (some of) these effects are "wrong"

	language bugs		the effects of module X are not consis-
				tent from one legal compiler to the next


   In the latter category, we may have a situation where module X has the
   "right" external effects when handled by some compilers but crashes (an
   effect that is neither right nor wrong) when handled by other compilers.

   But, some of you may ask, how can there be language bugs in a legal
   compiler (if the standard and the validation methods are as strict as
   they are, for example, with Ada)?  One common loophole is introduced in
   languages that permit arbitrary order of evaluation in complex expression
   trees.  To the best of my knowledge, all current languages that have this
   feature have a related set of language bugs.  None of them has the extra
   syntax or semantics needed to make it possible to check for order
   dependencies.

   Order dependencies appear in a number of ways:

	(a)  One part of an expression has a routine or function
	     call with side-effects, and those side-effects can
	     contribute to the rest of the expression in an incon-
	     sistent manner.

	(b)  One part of an expression has a "run-time type check"
	     that is intended by the programmer to "guard" the
	     evaluation in another part of the expression, and
	     treatment of the "guardian-guardee" relation may not
	     be consistent.

   All (recent) language standards that permit arbitrary order of evaluation
   also include statements that programmers shall not write code with order
   dependencies.
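
   To make (a) and (b) concrete, here is a minimal hypothetical Ada sketch
   (written for illustration; it is not from any validation suite).  A legal
   compiler may evaluate the operands of "+" in either order, and the two
   orders give different results:

	--  Case (a): a side effect in one operand feeds the other operand.
	with TEXT_IO;
	procedure ORDER_DEP is
	   COUNT : INTEGER := 0;
	   R     : INTEGER;

	   function BUMP return INTEGER is
	   begin
	      COUNT := COUNT + 1;      -- side effect on COUNT
	      return COUNT;
	   end BUMP;

	begin
	   R := COUNT + BUMP;          -- R is 1 or 2, depending on the compiler
	   TEXT_IO.PUT_LINE (INTEGER'IMAGE (R));
	end ORDER_DEP;

	--  Case (b): a "guard" that may not guard.  With plain "and", both
	--  operands are evaluated (in an unspecified order), so the indexing
	--  may be attempted even when I is out of range; only the short-
	--  circuit form "and then" makes the dependence explicit:
	--
	--     if I in A'RANGE and A(I) /= 0 then ...       -- unsafe
	--     if I in A'RANGE and then A(I) /= 0 then ...  -- safe

   The module compiles without complaint under either choice of order;
   nothing in validation requires a compiler to report the dependence.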

   There is a tendency amongst compiler writers and standards developers to
   treat language bugs as just another kind of application bug:  "Hey, tough
   luck, you violated the standard."  Closing the loophole would require a
   significant overhaul of most languages and perhaps a good deal of overhead
   in compilers for these languages.  In general, I have some sympathy for
   this attitude, as it is not clear that the benefits (of closing the
   loopholes) outweigh the costs.

   But, what about software that is to be deployed in something like SDI?
   It's all very nice to say that every Ada compiler has been validated on
   an extensive suite of programs.  But, I bet that none of those programs
   have order dependencies.  As far as I know, an Ada compiler is not required
   to report order dependencies.  So a compiler can be legal (equals validated)
   and not tell a user about language bugs that have been exercised in module
   X.  This means that, with the Ada standard as it stands now, there is a
   huge loophole through which software bugs may creep.  The only constraint
   on the number of such bugs is humans who (with some level of fallibility)
   are supposed to remember that order dependencies are "naughty".

   What if an order dependency (debugged and thoroughly "tested" umpteen times
   with compiler version v1.2 -- but still latent) becomes non-latent in
   compiler version v1.3?  What if a new deployment of software using the
   "latest and best" compiler occurs a few days before some unusual "event"
   requires the use of the module with the dependency?  What if the resulting
   module crash leads to the death of a billion human beings ???

   If any lawyers survive the "accident", I'd love to see the arguments about
   who is liable.  There is a well-established tradition of suing a tool-maker
   for every last penny, if the tool-maker knows about a possible accident
   sequence (one that is exercised many times a year in less horrible ways)
   and does nothing about it.

   If you're a plain ordinary citizen whose tax dollars are supporting the
   use of Ada as it stands, I would say you should run to your nearest
   congressman/woman and ask for your money back.  If you're actually writing
   an Ada compiler, I'd like to know if you still think the schedule/budget
   does not have room for detecting order dependencies.  This kind of defect
   exists in other languages, true, but the military is not using other
   languages for its weapons projects.


   Paul Clark

126.1. by QUARK::LIONEL (Three rights make a left) Thu Jan 22 1987 15:08
    Nobody I know of even tries to pretend that the Ada validation suite
    ensures a "bug free" compiler.  The validation suite only tries
    to see if the implementation conforms to the written standard as
    far as what features must be supported and how.
    
    Also, I feel that anyone who believes that Ada is any "safer" just
    because of the validation suite is living in a fantasy land.
    
    Nevertheless, I believe that the relative "safety" of Ada lies not
    in the supposed lack of bugs in the implementation (and as an Ada
    developer, I can assure you that all implementations have bugs),
    but that it is easier to develop correct software in Ada than in
    most other languages.  By correct I mean that the program does what
    the designer intended, ignoring bugs in the compiler/RTL/OS.  And
    I agree with this philosophy.
    
    However, I wouldn't dream of suggesting that a 100% correct SDI
    implementation is possible, no matter what language is used.  This
    should not stop us from using Ada for all its advantages.
    					Steve
126.2. "Nothing new under the sun" by NOBUGS::AMARTIN (Alan H. Martin) Thu Jan 22 1987 15:50
Re .0:

Detecting arbitrary programs with aliasing bugs or unstable computations
in a mechanical fashion is no easier than detecting programs that may
contain an infinite loop - it is computationally undecidable.  This
is true for most languages (almost all of the popular ones, I'm sure).

You should no more expect a compiler for any sufficiently general language
to contain a complete and correct "order dependency finder" than you
should expect it to contain an "infinite loop detector".  (Either is
as likely as discovering that this year's Fords are powered by a perpetual
motion machine instead of a gasoline-powered internal combustion engine.)

On the other hand, a program can be shown to be free of order dependencies
by a proof, just as it can be proved to halt.  Whether this is actually
done for a program depends on whether it has a sufficiently important job
to do (for instance, one which affects human lives), and what the cost
of generating and maintaining the proofs is.
				/AHM
126.3. by CHOVAX::YOUNG (Back from the Shadows Again,) Thu Jan 22 1987 22:56
    Re .2:
    
    Hmmm, I have never heard of the impossibility of detecting aliasing
    bugs or unstable computations deterministically, Alan.  Could you
    point me to some references on this?
    
    Also, as I have stated before in this conference, and in the MATH
    conference, the halting problem is only known to be impossible for
    Turing-Machine-equivalent languages.  There are in fact useful
    languages for which it IS possible to write an infinite loop detection
    program.
    
    --  Barry
126.4. by SMOP::GLOSSOP (Kent Glossop) Fri Jan 23 1987 01:01
    Please note that order of evaluation is only one of the many things that
    can change in a compiler from release to release that could potentially
    make a "working" program that violates a language standard fail.  Other
    examples include (but are certainly not limited to):

	- The allocation of variables.  This includes allocating an
	  uninitialized variable to a location that happens to contain
	  a different initial value, generating code that leaves a
	  different initial value in the location, and causing addressing
	  to fail because the address of the allocation no longer meets
	  criteria that were previously met but never specified.  (For
	  example, the variable is used in a hardware instruction that
	  requires quadword alignment, but the alignment is not specified
	  in the source program.  See the sketch after this list.)

	- The behavior of unspecified language features.  For example, a
	  language that is not required to do bounds checking might allow
	  out-of-bounds intermediate results without overflowing in one
	  release but not in another.  (Consider PL/I.  PL/I allows more
	  precision to be used by the implementation than what was specified.
	  If a programmer declares a number to be FIXED BINARY(3) - i.e.
	  3 binary digits of precision - the implementation is free to
	  use a byte to represent the value.)
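
    As a sketch of the first point, consider this hypothetical Ada fragment
    (illustrative only): the value printed is whatever the chosen allocation
    happens to leave in N, so it can change from release to release even
    though the source is untouched.

	--  Reading an uninitialized scalar: no diagnostic is required,
	--  and the visible effect depends entirely on where (and over
	--  what) the compiler happened to allocate N.
	with TEXT_IO;
	procedure UNINIT is
	   N : INTEGER;               -- never assigned
	begin
	   TEXT_IO.PUT_LINE (INTEGER'IMAGE (N));
	end UNINIT;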

    Note that even the order of evaluation is much more pervasive than you
    might expect.  You would have to specify that all parameters to procedures
    be evaluated in a given order, and that all expressions containing
    arithmetic operators that could potentially overflow be evaluated in a
    specific order.

    For example, consider the "simple" expression A + B + C, where all of
    the variables are in the range minint..maxint, and the result of the
    expression must be in that range as well.  Given the values maxint/2+1,
    maxint/2+1 and -2, the expression will cause an overflow if the code
    is generated as (A+B)+C, but not if it is generated as A+(B+C).
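
    Spelled out as a hypothetical Ada fragment (assuming a 32-bit INTEGER,
    so INTEGER'LAST = 2**31 - 1 and A = B = 2**30):

	with TEXT_IO;
	procedure ASSOC is
	   A : constant INTEGER := INTEGER'LAST / 2 + 1;  -- 2**30
	   B : constant INTEGER := INTEGER'LAST / 2 + 1;  -- 2**30
	   C : constant INTEGER := -2;
	   R : INTEGER;
	begin
	   R := A + (B + C);  -- in range: 2**30 + (2**30 - 2) = INTEGER'LAST - 1
	   TEXT_IO.PUT_LINE (INTEGER'IMAGE (R));
	   R := (A + B) + C;  -- A + B = 2**31: raises an overflow exception
	   TEXT_IO.PUT_LINE (INTEGER'IMAGE (R));
	end ASSOC;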

    I'll also point out that as the complexity of the language increases,
    the ability to do error detection at compile time can drop dramatically.
    Languages that allow essentially unrestricted aliasing (C comes to
    mind) allow the programmer to play all sorts of dirty tricks in a
    program that is "standard conforming".  Even relatively innocuous
    things can present a problem.  For example, PL/I allows arrays to
    have run-time extents.  This makes it impossible for the compiler
    even to make educated guesses about whether or not a particular
    element of an array is initialized.  (For languages that only allow
    compile-time extents, that don't allow variables to have their
    address taken, and that have other simplifying restrictions, the
    compiler can at least provide a close approximation to detecting
    some classes of uninitialized variables.)

    There are far more examples that could be given of things that fall
    into this category of problems (not to mention plain old semantic
    errors in the source program).  As was pointed out before, it simply
    is not possible to determine all errors of this type at compile time
    except for certain VERY restricted classes of languages.  It really
    seems to me that this is overkill in any case.  Languages like C
    are popular to a certain degree just because they DON'T have
    certain classes of restrictions.

    It seems to me that effort would be much better expended in trying to
    determine the best way of minimizing semantic errors in program source
    code, since those errors are several orders of magnitude more frequent.
    (You would probably do more for the reliability of compiled code by
    coming up with methods to improve program reliability and applying
    those to the compiler itself than by spending effort coming up with
    means of detecting and correcting version-to-version and implementation-
    to-implementation compiler differences.  Compiler implementors do have the
    option of not changing the generated code from release to release.  In
    fact, DEC has been somewhat unusual in that the compilers do generally
    improve from release to release.  If you used something like IBM OS PL/I,
    you'd never have to worry about the generated code changing.  On the
    other hand, you'd never have to "worry" about your code going faster
    without you doing anything or getting support for nice new tools like
    LSE and SCA... ;-)  The price of progress...)
126.5. by SMOP::GLOSSOP (Kent Glossop) Fri Jan 23 1987 01:07
    One last note.  There IS one class of programming languages that aren't
    normally very sensitive to the order of evaluation, since it is typically
    explicitly controlled - assemblers.  Of course it does have a few other
    minor drawbacks when you're trying to write reliable code...  ;-)
126.6. "Proof by example; utility; unstable CTCEs" by NOBUGS::AMARTIN (Alan H. Martin) Fri Jan 23 1987 08:52
Re .3:

The assertion seemed obvious.  For instance:

	PROGRAM ALIAS
	INTEGER I,A(2)
	COMMON A
	DATA A/0,0/
	ACCEPT *,I
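C	If I is 1, the actual argument A(I) and the common variable Y(1)
C	denote the same storage, so SET assigns to it through two names.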
	CALL SET(A(I))
	IF (A(2).EQ.0) THEN
		TYPE *,'Zero'
	ELSE
		TYPE *,'Nonzero'
	END IF
	END

	SUBROUTINE SET(X)
	INTEGER X,Y(2)
	COMMON Y
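C	When X is aliased to Y(1) (input I = 1), assigning through both
C	the dummy argument and COMMON is the illegal alias in question.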
	X = 1
	Y(1) = 1
	END

Does the above program contain an illegal alias between the formal X and
the variable A(1) (Y(1)) in common?


Note that I was careful to avoid stating that all languages contained
programs which were unprovably incorrect.

How useful are those "useful" languages in which infinite loops can be
detected?  Has anyone ever sold a program for profit written in one of
them, for instance?


Re .5:

I suspect that most useful assemblers are subject to unstable arithmetic
problems in compile-time constant expressions.
				/AHM/THX
126.7. "More Proof by Example" by TLE::RMEYERS (Randy Meyers) Fri Jan 23 1987 16:18
Re .3 and .6:

It is true that there are languages for which it is possible to solve
the halting problem.  Those languages fall into two classes.  Either
those languages are less powerful than a Turing Machine, or they can
be proved not to be runnable on a computer (that is, they cannot be
transformed into the notation of a Turing Machine).

The languages of the second class are fairly useless.  The languages
of the first class, although limited, have their applications, and people
do seem to make money selling programs in them.  For example, look at
any of the books that give prepared spreadsheet models for common
business applications.
126.8. "End Digression" by TLE::RMEYERS (Randy Meyers) Fri Jan 23 1987 16:20
And back to the subject, do you really want SDI software written in
Lotus 1-2-3?
126.9. by PSW::WINALSKI (Paul S. Winalski) Sat Jan 24 1987 13:28
I think that too many people are hiding behind questions like, "Do you REALLY
want an SDI coded in ADA???"

Why don't you come out and say what you really mean:  "Do you really want an
SDI coded, PERIOD???"

--PSW
126.10. "Two in one" by THE780::PEIRCE (Michael Peirce, Santa Clara FAC) Sat Jan 24 1987 16:04
    I agree with .9; there are two questions here wrapped up in one:
    
    (1) Should we build the type of systems called for in the SDI with
        the current state of the art in software engineering?  That is,
        should we bet our defense/freedom/lives on their working as
        expected?

    (2) Given that we are building some of the SDI-type systems, what
    	language should they be coded in, be that Ada, Fortran, C,
    	Prolog, whatever?
    
    
126.11. by QUARK::LIONEL (Three rights make a left) Sat Jan 24 1987 21:29
    Re: .10, point 2
    
    Given that some SDI software will be built anyway, I'd prefer that
    
    	A) It be built in the most appropriate language, and Ada is
    	   perfectly appropriate for much of it
    
    	B) It should be built using DEC products wherever appropriate.
    	   If it's going to be built, they may as well use the best
    	   basis for it.  This is why I don't lose sleep over my
    	   work on VAXELN Ada - I'll just make it the best damned
    	   Ada product for embedded systems there is and urge
    	   everyone to use it.
    
    				Steve
126.12. "There are other relevant realms" by MODEL::YARBROUGH () Mon Jan 26 1987 12:11
Whether we are discussing SDI specifically is irrelevant. For the most
part, the same issues arise if the universe of discourse is Air Traffic
Control systems, or any other space in which human lives are at stake.
Since, I believe, we agree that *perfect* implementations of such software
are not possible (although such a lofty goal MUST be kept before us), the
questions boil down to what are acceptable costs for achieving a reasonable
probability of success.

What we do not have at present, I think, is enough data to determine what
the probabilities of success of very large systems are, and how much it
costs to change those figures significantly. The best data we have is 
qualitative, not quantitative: we know that certain methods are better than 
others, but we can't say how much yet. The best data I have is that yes, 
Ada is better for doing this sort of thing than any other relevant 
language, but how much it costs to make a reasonable SDI/ATC/etc. 
implementation is beyond knowing now.

(Flame on)
The tragedy of recent strategic weapons systems is that their cost may have
been almost entirely wasted, as they have never been, and are likely never
to be, used. 
(Flame off)
126.13. "SDI must be lower risk than ATC, etc., right?" by NOBUGS::AMARTIN (Alan H. Martin) Mon Jan 26 1987 13:31
Re .11:

I find it ironic that Digital has restrictions on the sale of systems for
direct control of nuclear reactors and air traffic control, yet doesn't
seem to have similar rules in a domain where a mistake could cause WW III.
				/AHM
126.14. "Comments on a light note..." by MLOKAI::MACK (Embrace No Contradictions) Mon Jan 26 1987 14:09
Re .12:
    
>    The tragedy of recent strategic weapons systems is that their cost may
>    have been almost entirely wasted, as they have never been, and are
>    likely never to be, used. 

    Gee, as I understood it, the goal was that they not be used, and the
    justification of the cost was that, if we spend enough on them, we can
    build one fancy enough that they never will be used. 
    
Re .13:  
    
    Yes, but then presumably, after a reactor blows somebody can sue
    you.  You don't have the same problem if everything blows up. :-)
    
    						Strange mood today,
    
    						   Ralph
126.15. "Ada(R) is a Program, not a Language" by TLE::BRETT () Mon Jan 26 1987 14:43
    The other misunderstanding being continued by .0 is that the Ada
    language, per se, is the main thrust of the Ada Programming effort.
    It is not.  The effort is a wholesale overhaul of all aspects of
    s/w engineering, including (but not limited to) editing environments
    (all the way from design-editing to test-creation), file systems,
    debugging tools, security issues...
          
    /Bevin
126.16. "The Ada Environment" by TLE::RMEYERS (Randy Meyers) Mon Jan 26 1987 18:44
Re .15:

The mistake is a natural one.  The Reference Manual does define Ada as
a programming language, not as a software engineering environment with
a programming language embedded within it.  In current usage, the word
Ada, I believe, is still considered to be the language, not the 
environment.  One hears the phrases "Ada" (for the language) and "Ada
environments" more often the the phrases "Ada language" and "Ada" (for
the environment).

I think that we can both agree that when the DoD says that they want
something written in Ada, they also presume that an Ada environment
will be used as well.  Ada brings with it a way of life...

Back to the main topic:

If SDI is built, I would want them to use Ada (the language) for the
software, so that there is a better chance that the software works.
On top of that, I would prefer that an Ada environment be used so that
the chances are even better that the software works and so that a few
billion dollars are saved in developing the software.

The above is an endorsement of Ada, not SDI.
126.17. "issues are consistency and maintenance" by OMEGA::CLARK () Tue Jan 27 1987 11:50
   Okay, as far as programming environments go, the Ada environment is the
   hands-down winner (in terms of maximizing the chances of producing a
   bug-free system).  The question I was trying to raise concerned the
   language itself.  In many respects, the language itself also is better
   as a tool for engineering reliable systems (compared with other popular
   languages).  But, there is one dimension of reliability where it falls
   short:

	With some languages, if you've tested module X, and you are
	confident that it works as compiled by version V1.5 of a compiler,
	you can be very sure that it will work as compiled by version V1.6
	of the compiler.  With Ada, and similar languages, where each
	compiler is free to treat order dependencies (and related software
	abnormalities) in a different way, you cannot be quite as certain
	that a module will continue to be "known good" going from one
	release of a compiler to the next.

   Since Ada does make forward progress in terms of reliability in so many
   other dimensions, why should it move backwards on this dimension of
   consistency from one release to the next?  If the cost of fixing the
   order of evaluation is only a few machine cycles here and there, why
   shouldn't Ada do this, in the interest of boosting reliability?

   Are we going to put a clause in each software sale that says "if you have
   used previous releases of this compiler, we recommend that before
   deployment of applications based on the new release, you retest all
   components generated by this compiler"?

   pac

126.18. "I've seen them already" by TLE::REAGAN (John R. Reagan) Tue Jan 27 1987 12:40
    The clause you mention is already in most if not all contracts for
    government software.  (At least the ones I've had dealings with...)
    
			-John    
126.19. "Star Wars computing presentation in Chelmsford" by SMURF::JMARTIN (DDT: the sonic screwdriver of TOPS) Tue Apr 28 1987 09:33
Free admission                               Wednesday, April 29, 7:30pm
Question and Answer Session                  Old Town Hall at the center

           RELIABILITY AND RISK:  COMPUTERS AND NUCLEAR WAR
                 A 34-minute slide/tape presentation

Reliability and Risk...
   ...investigates whether computer errors in key military systems--some of
   them unpreventable errors--could trigger an inadvertent nuclear war.

   ...features technical, political, and military experts discussing the role
   of computers at the heart of civilian and military systems, from the space
   shuttle to nuclear weapons to Star Wars;

   ...describes the ways in which all large, complex computer systems make
   mistakes--often unexpected and unpreventable mistakes:
              o The 46-cent computer chip failure that led to a high-priority
                military alert.
              o The software error that led to the destruction of the first
                Venus probe.
              o The design flaw that caused a missile early-warning computer to
                mistake the rising moon for a fleet of Soviet missiles.

   ...explores the growing reliance on computerized decision making and how a
   computer error could trigger a disaster, especially in a time of crisis.

   ...explains why we should not rely exclusively on computers to make
   critical, life-and-death decisions.

   ...uses straightforward language and graphics and is recommended for all
   audiences.  No technical knowledge is required.

   ...received a Gold Award in the Association for Multi Image New England
   competition in November, 1986--the largest multi-image competition in the
   country.

Speakers in Reliability and Risk include:

o Lt. General James A. Abrahamson, Director, Strategic Defense Initiative
  Organization (SDIO)
o Lt. Col. Robert Bowman, Ph.D., US Air Force (retired), Former Director,
  Advanced Space Programs Development
o Dr. Robert S. Cooper, Former Director, Defense Advanced Research Projects
  Agency (DARPA)
o Dr. Arthur Macy Cox, Advisor to President Carter, SALT II Negotiations, and
  Director, American Committee on U.S.-Soviet Relations
o Admiral Noel Gaylor (retired), former Commander-in-Chief of the Pacific Fleet
o Dr. James Ionson, Director, SDIO Office of Innovative Science and Technology
o Severo Ornstein, Computer Scientist (retired) and Founder, Computer
  Professionals for Social Responsibility
o Professor David Parnas, Computer Scientist, Resigned from SDIO Panel on
  Computing in Support of Battle Management
o Dr. John Pike, Associate Director, Federation of American Scientists
o Dr. William Ury, Director, Harvard Nuclear Negotiation Project
o Actress Lee Grant as narrator
and many others

Reliability and Risk was produced by Interlock Media Associates and CPSR/Boston