
Conference 7.286::digital

Title:The Digital way of working
Moderator:QUARK::LIONELON
Created:Fri Feb 14 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:5321
Total number of notes:139771

245.0. "Engineering Ethics" by MAY20::MINOW (Martin Minow, MSD A/D, THUNDR::MINOW) Wed Jan 07 1987 09:18

This was in Risks Digest, and is worth wider distribution:

RISKS-LIST: RISKS-FORUM Digest, Tuesday, 6 January 1987  Volume 4 : Issue 36
 
           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

...
 
To: [email protected]
Subject: Engineering Ethics
Date: Fri, 02 Jan 87 11:47:56 -0500
From: Chuck Youman <[email protected]>
 
The December 28 op-ed section of the Washington Post included an article
titled "The Slippery Ethics of Engineering" written by Taft H. Broome, Jr.
He is director of the Large Space Structures Institute at Howard University
and chairman of the ethics committee of the American Association of
Engineering Societies.  The article is too long to include in its entirety.
Some excerpts from the article follow:
 
 Until now, engineers would have been judged wicked or demented if they
 were discovered blatantly ignoring the philosopher Cicero's 2,000-year-old
 imperative:  In whatever you build, "the safety of the public shall be 
 the highest law."
 
 Today, however, the Ford Pinto, Three-Mile Island, Bhopal, the Challenger,
 Chernobyl and other technological horror stories tell of a cancer growing
 on our values.  These engineering disasters are the results of willful
 actions.  Yet these actions are generally not seen by engineers as being
 morally wrong. . . Some engineers now espouse a morality that explicitly
 rejects the notion that they have as their prime responsibility the
 maintenance of public safety.
 
 Debate on this issue rages in the open literature, in the courts, at public
 meetings and in private conversations. . . This debate is largely over four
 moral codes--Cicero's placement of the public welfare as of paramount
 importance, and three rival points of view.
 
 Significantly, the most defensible moral position in opposition to Cicero
 is based on revolutionary ideas about what engineering is.  It assumes that
 engineering is always an experiment involving the public as human subjects.
 This new view suggests that engineering always oversteps the limits of
 science.  Decisions are always made with insufficient scientific information.
 
 In this view, risks taken by people who depend on engineers are not merely
 the risks over some error of scientific principle.  More important and
 inevitable is the risk that the engineer, confronted with a totally novel
 technological problem, will incorrectly intuit which precedent that worked
 in the past can be successfully applied at this time.
 
 Most of the codes of ethics adopted by engineering professional societies
 agree with Cicero that "the engineer shall hold paramount the health,
 safety and welfare of the public in the performance of his professional 
 duties."
 
 But undermining it is the conviction of virtually every engineer that totally
 risk-free engineering can never be achieved.  So the health and welfare of
 the public can never be completely assured.  This gets to be a real problem
 when lawyers start representing victims of technological accidents.  They
 tend to say that if an accident of any kind occurred, then Cicero's code
 demanding that public safety come first was, by definition, defiled, despite
 the fact that such perfection is impossible in engineering.
 
 A noteworthy exception to engineers' reverence for Cicero's code is that of
 the Institute of Electrical and Electronics Engineers (IEEE)--the largest
 of the engineering professional societies.  Their code includes Cicero's,
 but it adds three other imperatives opposing him--without giving a way to
 resolve conflicts between these four paths.
 
 The first imperative challenging the public-safety-first approach is called
 the "contractarian" code.  Its advocates point that contracts actually exist
 on paper between engineers and their employers or clients.  They deny that
 any such contract exists--implied or explicit--between them and the public.
 They argue that notions of "social" contracts are abstract, arbitrary and
 absent of authority.
 
 [The second imperative is called] the "personal-judgment" imperative.  Its
 advocates hold that in a free society such as ours, the interests of business
 and government are always compatible with, or do not conflict with, the 
 interests of the public.  There is only the illusion of such conflicts. . .
 owing to the egoistic efforts of:
 
 -Self-interest groups (e.g. environmentalists, recreationalists);
 
 -The few business or government persons who act unlawfully in their own
  interests without the knowledge and consent of business and government; and
 
 -Reactionaries impassioned by the loss of loved ones or property due to
  business-related accidents.
 
 The third rival to public-safety-first morality is the one that follows 
 from the new ideas about the fundamental nature of engineering.  And they
 are lethal to Cicero's moral agenda and its two other competitors.
 
 Science consists of theories for claiming knowledge about the physical world.
 Applied science consists of theories for adapting this knowledge to individual
 practical problems.  Engineering, however, consists of theories for changing
 the physical world before all relevant scientific facts are in.
 
 Some call it sophisticated guesswork.  Engineers would honor it with a
 capitalization and formally call it "Intuition." . . . It is grounded in
 the practical work of millennia, discovering which bridges continue to stand,
 and which buildings.  They find it so compelling that they rally around its
 complex principles, and totally rely on it to give them confidence about what
 they can achieve.
 
 This practice of using Intuition leads to the conclusion put forward by
 Mike Martin and Roland Schinzinger in their 1983 book "Ethics in Engineering":
 that engineering is an experiment involving the public as human subjects.
 
 This is not a metaphor for engineering.  It is a definition for engineering.
 
 Martin and Schinzinger use it to conclude that moral relationships between
 engineers and the public should be of the informed-consent variety enjoyed
 by some physicians and their patients.  In this moral model, engineers would
 acknowledge to their customers that they do not know everything.  They would
 give the public their best estimate of the benefits of their proposed 
 projects, and the dangers.  And if the public agreed, and the engineers 
 performed honorably and without malpractice, even if they failed, the public
 would not hold them at fault.
 
 However, most engineers regard the public as insufficiently informed about
 engineering Intuition--and lacking the will to become so informed--to assume
 responsibility for technology in partnership with engineers (or anyone else).
 They are content to let the public continue to delude itself into thinking
 that engineering is an exact science, or loyal to the principles of the
 conventional sciences (i.e., physics, chemistry).
 
Charles Youman (youman@mitre)
 
------

I believe you can subscribe to risks by writing to DECSRC::RISKS-REQUEST

245.1. "Some criticism" by GOBLIN::MCVAY (Pete McVay, VRO (Telecomm)) Wed Jan 07 1987 12:15
    The author gives the impression by quoting Cicero that engineers have
    had an unbroken record of responsible ethical behavior until now.
    During the early years of the industrial revolution, it took public
    outrage and a lot of politics to correct dangerous machinery conditions
    in factories and trains that tended to derail and/or explode.  If
    anything, conditions have improved.  Regulating societies such as the
    IEEE would have been unthinkable a century ago.

    This is not to downplay the problem of engineering ethics and
    responsibility, but I favor less moral decrying and more "awareness
    training".  I personally favor moving engineering failure lawsuits
    from civil to criminal courts, with individuals held responsible
    rather than (or in addition to) anonymous corporations.
245.2. by NY1MM::SWEENEY (Pat Sweeney) Wed Jan 07 1987 16:33
    How is the IEEE a "regulating organization"?
245.3. by SARAH::TODD Wed Jan 07 1987 17:48
    I would question whether Cicero's dictum was followed any more in
    his time than in our own:  absolutes are NEVER followed in practice,
    because once you subscribe fully you find that you can do nothing.
    
    Everything entails some risk.  Most of the time, the risk is outweighed
    by some benefit - being able to build something useful at feasible
    cost being perhaps the most obvious.
    
    Seldom can the risk REALLY be evaluated, however, especially in
    a complex and compartmented society like our own where the creator
    of something has little or no control over (or even knowledge of)
    its eventual use.
    
    And, of course, designers vary all over the map in competence -
    and society would quickly grind to a halt if only the best were
    allowed to practice their trade, since there would be far too few
    to go around.
    
    Certainly not ALL of the examples at the beginning of .0 fit any
    reasonable definition of GROSS incompetence or negligence on the
    part of the design engineer.  Conceivably none do.  Most become
    problems due to improper implementation and/or use - something the
    engineer has no control over, and yet MUST depend upon to some
    extent in the design if anything is to be built.
    
    In sum, without denying that engineering abuses exist, I seriously
    question whether they constitute a weak link in the chain of
    creation.  By and large, I believe that society gets what it
    deserves (and quite likely what it wants) in its products, and
    that on the average OVER-engineering is the only thing that
    protects it from even more disasters.
    
    Not complacent, but not guilty either,	- Bill
    
245.4. "Should this be in SOAPBOX??" by GHANI::KEMERER (Sr. Sys. Sfw. Spec.(8,16,32,36 bits)) Wed Jan 07 1987 18:08
    Re: .0
    
    Shouldn't this entry be in SOAPBOX instead of here?  Aside from the
    fact that there are engineers, etc. at DEC, I don't see where this
    entry has much to do with THIS conference.
    
    Am I off base on this??
    
    						Warren
    
245.5. "It is in SOAPBOX" by TSE::LEFEBVRE (Hit me, I'm open) Thu Jan 08 1987 12:13
    re. all, it is now in SOAPBOX
    
    	Mark.
245.6. "Relevance" by MAY20::MINOW (Martin Minow, MSD A/D, THUNDR::MINOW) Thu Jan 08 1987 12:58
I didn't post it in Soapbox because I believe it is relevant to our work
at Digital.  We build things that our customers use for life-critical
functions.   At what point is our responsibility to deliver "on time,
under budget" products outweighed by our responsibilty to deliver
error-free products.  Would the tradeoffs we make be different if we
were licensed professional engineers (and project leaders were
"board-certified"), instead of craftsmen with essentially no formal
training in safety and professional ethics. 

I'm not pointing fingers -- or suggesting we're doing anything wrong --
but want to get people thinking.  Somehow, I feel this is a better forum
for that than Soapbox. 

Martin.

245.7. by GOBLIN::MCVAY (Pete McVay, VRO (Telecomm)) Thu Jan 08 1987 15:41
>   How is the IEEE a "regulating organization"?

    Guilty of FuzzThink: I was referring to the various standards
    committees set up under IEEE/ACM/ANSI auspices.  Of course, that's
    different from the function of the societies themselves...
    
    re: .6 (Soapbox)
    
    I agree with Martin: this is a serious subject that calls for serious
    thought, not [soapbox] flames.  And it is, in my opinion, a key
    issue for us engineers.  When are we responsible?  When are we not
    responsible?
    
    BTW, the idea that we are not responsible if the product is misused
    isn't acceptable, and there are plenty of court cases to prove it.
    For example, it's not enough to simply say, "don't misuse this"
    on a product.  A court case over a century ago* established that
    you must also tell the consequences of misuse: to wit, "Don't drink
    this stuff: it's poison and could kill you."  In another misuse
    case, a teacher in New York state told his students not to touch
    some chemicals on a shelf, because they were dangerous.  The teacher
    left the room for a few minutes, during which time a student spilled
    the offending chemical, which turned out to be acid.  The teacher
    was successfully sued for negligence, even though it was established
    that he had been clear and explicit in his instructions.
    
    'Nuff sermonizing.  When am I responsible for what I develop?
    
    ------
    *The details escape me: I was led to believe by a lawyer friend
    that this case is so famous in tort law that almost any lawyer can
    cite it off the top of his/her head.
245.8. "180 degree phase shift re: SOAPBOX" by GHANI::KEMERER (Sr. Sys. Sfw. Spec.(8,16,32,36 bits)) Fri Jan 09 1987 03:00
    Re: .6 (Soapbox), etc.
    
    Ok, I give. When put the way it was in .6 I guess it does belong
    here. 
    
    Either way, it's nice to know there are those life forms (human
    beings) out there that care about such things.
    
    May your future(s) exceed your wildest imaginations.
    
    						Warren
    
245.9. "my opinion" by SAUTER::SAUTER (John Sauter) Fri Jan 09 1987 09:24
    I don't know any obvious place to draw the line between a poorly
    engineered product and misuse of that product.  However, here is
    my opinion.  As stated earlier, the courts have established that
    your product must not only warn against misuse, it must also describe
    the consequences of misuse.  I would go further: I claim that your
    product must, in addition, make misuse difficult.
    
    For example, if you build a lawn mower, you must tell the operator
    not to stick his hand in the blade area while it is running, you
    must tell him what will happen if he does, and you must provide
    a protector for the blade area which will stop the blades if it
    is removed.  Going one stage further, you must provide a "dead-man's"
    switch on a self-propelled lawn mower, so if the operator lets go
    of the handle the mower will stop.  Finally, you must arrange the
    switch so that stopping the mower does not stop the engine, because
    restarting the engine is enough of a bother that the operator will
    defeat the switch, and thus lose the protection.  With a safety
    switch that stopped the engine, it would be a close call to consider
    defeating the switch misuse.
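
    In outline, such an interlock might look something like this (a rough
    sketch only; all the names here are invented, not any real mower's
    control logic):

        /* Releasing the bail handle trips the blade brake but leaves the
         * engine running, so the operator has no reason to defeat the
         * switch.  Illustration only.
         */
        #include <stdio.h>

        struct mower {
            int handle_held;        /* 1 while the operator grips the bail */
            int engine_running;
            int blade_engaged;
        };

        void poll_interlock(struct mower *m)
        {
            if (!m->handle_held && m->blade_engaged)
                m->blade_engaged = 0;   /* stop the blade, but leave      */
                                        /* engine_running alone on purpose */
        }

        int main(void)
        {
            struct mower m = { 1, 1, 1 };
            m.handle_held = 0;          /* operator lets go for a moment */
            poll_interlock(&m);
            printf("engine %s, blade %s\n",
                   m.engine_running ? "running" : "stopped",
                   m.blade_engaged  ? "engaged" : "stopped");
            return 0;
        }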
    
    A contributory cause of the recent Amtrak/Conrail crash involved
    a warning whistle that had been disabled.  I wonder if the whistle
    was so badly designed that disabling it would not be considered
    misuse?
    
    This is just my opinion, of course.  How do others feel?
        John Sauter
245.10"consumer-protected" nuisance!CADSYS::RICHARDSONFri Jan 09 1987 13:3812
    My neighbor's new self-propelled lawn mower shuts its engine off
    if you let go of the handle for an instant (say, to toss a rock
    out of its path).  This is a REAL NUISANCE to him (and to me, when
    my old self-propelled mower, which doesn't have this "feature", is
    broken) since you cannot start the mower on his steep yard and have
    to wheel it down into the street to restart it, and then maneuver
    it back to where you left off.  Granted it is a cheap mower; the
    expensive ones work as you say. 
    
    Just an aside to say that "consumer protection" features need not
    be done in any sensible or useful way.  I prefer my mower anyhow;
    it doesn't propel itself if I let go of it for a second anyways.
245.11. by COVERT::COVERT (John Covert) Fri Jan 09 1987 15:06
>    Going one stage further, you must provide a "dead-man's" switch on a
>    self-propelled lawn mower, so if the operator lets go of the handle
>    the mower will stop.  Finally, you must arrange the switch so that
>    stopping the mower does not stop the engine, because restarting the
>    engine is enough of a bother that the operator will defeat the switch,
>    and thus lose the protection.  With a safety switch that stopped the
>    engine, it would be a close call to consider defeating the switch misuse.
    
It's clear you haven't recently bought a mower.

All mowers, not just self-propelled ones, must now have a dead man's switch
which must be held while starting the engine (the cord now extends back to
the handle) and which stops the engine QUICKLY if you release it.

I have seriously considered defeating it with a couple of tie-wraps.

/john
245.12>Long Live Consumer ProtectionismREGENT::MERRILLIf you&#039;ve got it, font it.Fri Jan 09 1987 16:4414
    I recently used a friend's snow blower: it has TWO "dead-man" interlocks
    with one for the clutch (which therefore works the OPPOSITE of mine!)
    and one for the snow auger & blower.  If you release the "clutch"
    handle, motion stops; IF the auger is engaged AND you take BOTH hands
    off the handles, THEN the engine dies! 
    
    That is one SAFE system, as long as there is only one person running
    the machine.  Next thing you know they'll require a continuity check
    so that you have to hold it with both hands or if someone is helping
    you, you have to hold their hand .... but of course you have to hold
    it with your bare hand ... but this is a snowblower!           
     
    :-)		RMM
    
245.13. "Back to the topic?" by STORM::MINOW (Martin Minow, MSD A/D, THUNDR::MINOW) Fri Jan 09 1987 19:37
C'mon guys, this notesfile discusses working at DIGITAL, and this note
discusses Engineering ethics.  You want to discuss lawn mowers, please do
so in Soapbox.
245.14. "ok, tying it back to Digital" by BEING::MCCULLEY (RSX Pro) Fri Jan 09 1987 20:17
    ok, let's bring it directly back to our own business:  how much
    are we *really* doing to consider the consequences of our designs
    to the public?  is it enough?  or too much?
    
    Digital's business is as a supplier of base system components. 
    Our systems are often used by others as part of their products.
    How much responsibility do we have to consider their customers as
    part of our design "public"?
    
    In other words, when we consider the user in the design of our
    equipment, do we have any particular responsibility to consider
    third parties who might be affected by that usage?
    
    As a case in point, I was very interested in a recent news item that
    described the first deaths directly attributable to a software flaw (or
    so it was claimed).  I'm not sure if anyone else saw it or can supply
    more details, but the following is my recollection:
    
    there is a computer-controlled radiation therapy system manufactured in
    Canada that has caused at least one fatality plus several serious
    radiation burns due to a bug in the control program.  The machine
    provides two different operating modes, I think they are an electron
    beam and X-rays, at various power settings.  The particular problem
    occured when a "fast typing" operator (that was the description used in
    the press report I saw) entered some particular sequence of commands, I
    think involving changing both mode and power.  The result was that the
    machine actually delivered the maximum power setting, in the wrong
    mode!  Before it was found it killed one patient and literally burned
    a hole in the back of another.
    
    As one who develops and maintains a real-time operating system used for
    process control I believe that I can claim some familiarity with both
    similar system environments and one of Digital's development
    organizations providing them.  My intuition is that the bug occurred
    because of synchronization problems within the control program,
    probably associated with type-ahead command input handling.  I know
    that RSX (my job area) could easily be the appropriate vehicle for
    this type of application, and that both of those areas sometimes
    present difficulty for RSX users.  My consideration of this is usually
    predicated upon the expectation that *my* users are applications
    programmers who can be expected to be technically knowledgeable and
    thus able to figure it out given some reasonable assistance (good
    design and documentation).  
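
    To make that failure mode concrete, here is a rough sketch (in C, with
    invented names and numbers -- I have no knowledge of the actual control
    program) of how type-ahead can let the operator's requested state and
    the hardware state drift apart:

        /* Illustration only -- invented names and numbers, not the actual
         * therapy-unit software.  An operator task records keystrokes
         * (including type-ahead) in "requested"; a slower hardware task
         * moves the target and sets the beam current.  If "fire" is
         * permitted as soon as the requested record looks right, the beam
         * can be enabled while the hardware still holds the old settings.
         */
        #include <stdio.h>

        struct settings {
            int xray_mode;          /* 1 = X-ray (target in), 0 = e-beam  */
            int intensity;          /* arbitrary units                    */
        };

        int main(void)
        {
            struct settings requested = { 1, 200 };  /* X-ray prescription */
            struct settings hardware  = { 1, 200 };  /* hardware caught up */

            /* Fast typed-ahead correction: operator switches to e-beam.  */
            requested.xray_mode = 0;
            requested.intensity = 2;

            /* Hardware task has processed only the mode change so far.   */
            hardware.xray_mode = 0;          /* target lifted out ...     */
            /* hardware.intensity is still 200: X-ray level, no target.   */

            /* Firing on the strength of "requested" alone is the bug:    */
            if (requested.xray_mode == 0 && requested.intensity == 2)
                printf("console shows e-beam @ %d; hardware is at %d\n",
                       requested.intensity, hardware.intensity);
            return 0;
        }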
    
    But the appropriate level of care might be different if my perception
    were that a difficult to understand design might lead to subtle
    application errors that *KILL PEOPLE* - it might make me take a
    lot more care about making the design easy to understand, and to
    review the documentation with a different eye.
    
    How well do we as a company do on this?  I think better than most
    but certainly not as good as we could...
    
    We need to be more aware of the possible effects of our business
    actions on others, and "do the right thing".  As an example, I recently
    handled an SPR that arrived with the notation "FYI - unsupported
    version" but described an interesting problem.  In our group review
    someone asked the customer identity, and upon learning it was Dade
    County Florida wondered if it could be their 911 emergency-call
    handling system.  It in fact was; the bug was not impacting their
    production system but did impact their overall operation - and turned
    out to be a subtle effect of task scheduling priorities (not all
    that dissimilar to the "killer bug" :-( ) that seems to reflect a
    gap between the conceptions of the o.s. developers and applications
    designers.  It wasn't critical, this time...
    
    ...but it just makes me wonder how many others lie in wait????
     
245.15. "Ethics is part of doing the right thing" by ODIXIE::VICKERS (A note's a horrible thing to waste) Fri Jan 09 1987 23:52
    Reading .0 took me back many years to Freshman Engineering 101 where
    we were terrorized by the films on Galloping Gertie and the Comet
    metal fatigue crashes.  Every engineer that I've ever talked to
    about this has been deeply imprinted by these lessons.  The object
    lesson was both the theme of public safety but also NEVER ASSUMING
    ANYTHING.
    
    In 'hard' engineering you use safety factors to protect the public
    from itself.  Making a load bearing member twice as strong as the
    highest expected load, for instance.  Several replies have discussed how
    difficult this gets as products become more and more complex.
    
    Software makes this even more complex not only because of the
    complexity but also the enormous variety of ways that it may be
    altered after leaving us.  How do you protect users from themselves?
    On a simplistic basis you do things like asking "Do you really want
    to delete ...?"
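
    Even that simple guard is code that somebody has to write and test; a
    trivial sketch (the file name and wording here are made up):

        /* Trivial sketch of the "protect the user from himself" prompt
         * mentioned above; illustration only.
         */
        #include <stdio.h>
        #include <string.h>

        int confirm_delete(const char *filename)
        {
            char answer[16];
            printf("Do you really want to delete %s? (yes/no) ", filename);
            if (fgets(answer, sizeof answer, stdin) == NULL)
                return 0;                    /* end of input: treat as no */
            return strncmp(answer, "yes", 3) == 0;
        }

        int main(void)
        {
            if (confirm_delete("PAYROLL.DAT"))
                printf("deleting...\n");
            else
                printf("nothing deleted.\n");
            return 0;
        }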
    
    Clearly, we are morally obligated to produce as safe a product as
    we can.  It is part of doing the RIGHT thing isn't it?  The trick
    is to know where the subsequent users will attack.  Where are the
    weak points?
    
    I submit that our approach of building simple building block systems
    provides a higher degree of safety and reduced risks than other
    vendors who tend to take a more vertical approach.
    
    I believe that .0 is something that we all need to bear in mind
    during our work.  It's like the 15 commandments of Digital and a
    part of doing the RIGHT thing.
    
    Don
245.16. by PSW::WINALSKI (Paul S. Winalski) Sun Jan 11 1987 18:24
	"Build a product that any idiot can use, and only an idiot will want
	to use it"
				- attributed to C. Gordon Bell


.-1 states:

>    Clearly, we are morally obligated to produce as safe a product as
>    we can.

I disagree.  Safety is not the end-all and be-all of product development.
It is an important quality dimension, but it is not overriding.  Often it
must be traded off against other aspects of product quality, such as
ease-of-use.

Consider the lawnmower example.  In their effort to "produce as safe a product
as we can," the lawnmower engineers have greatly reduced the ease-of-use of
their product, many (myself and John Covert included) would say unacceptably
so.

Ultimately, product designs always assume a certain level of intelligence and
common sense on the part of a user.  Designers of hedge clippers assume that
nobody will try to use them for shaving.  The designers of clothes driers
assume that nobody will use them to dry off their infant.  The "produce as
safe a product as we can" philosophy would dictate that heartbeat or crying
detectors be put in all washing machines just so that, if somebody WERE to
put their infant in and turn the machine on, it would detect this and stop.

Engineering is the business of making informed trade-offs.  Design-for-safety
is just as subject to trade-offs as any other aspect of Engineering.  The
degree to which safety is considered in product design should depend on
both the audience of the product and the potential damage from misuse.
Products intended for highly-trained professionals don't require as much
design-for-safety as those for unskilled or casual-use consumers.  Products
that can maim or kill if misused require more design-for-safety, especially
if they are intended for use by the unskilled or casual consumer.

--PSW

P.S. - The computer in the Therac radiation therapy units that zapped two
       people to death is a PDP-11.  I'm almost certain that the operating
       system involved is RT-11.  It is clear to me (with 20/20 hindsight)
       that one essential safety feature missing from the applications
       design of the Therac was a final consistency check of both the
       operating parameters input by the operator and of the current settings
       of the hardware.  The radiation overdoses occurred because the
       operator had set the exposure level, selected X-rays instead of
       electron beam, then noticed the error and immediately corrected it.
       If the machine is to produce X-rays, an X-ray target is dropped into
       the particle beam path, and consequently the particle beam must be
       of much higher intensity to strike the target and produce X-rays of
       the specified dose.  If the pure electron beam is desired, no target
       is in the beam path and the beam is much, much weaker.  In this case,
       the beam intensity was set for X-rays but the operator's typed-ahead
       correction to electron beam caused the target to be lifted out of the
       beam path.  That was the bug.  The result was that a beam several
       thousand times more powerful than desired was fired.  Hardware sensors
       immediately detected this and put a lead shield up to cut off the beam,
       but this mechanical action of course happens way too late to prevent
       a fatal dose of radiation from being delivered to the patient.
       
       Just before the software turns on the beam, there should have been
       consistency checks to determine that (1) the dose requested by the
       operator is reasonable, and that (2) the hardware settings are all
       consistent with producing that dose.  In the above case, such a check
       would have found that the beam intensity was set at the X-ray level
       but the X-ray target wasn't in place.
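
       In rough outline (invented names, units, and thresholds -- this is
       not the actual Therac code, just a sketch of the idea), such a
       last-chance check need be no more than:

           /* Sketch of a final consistency check made immediately before
            * the beam is enabled: compare the operator's request against
            * the measured hardware state.  Names and limits are invented
            * for illustration.
            */
           #include <stdio.h>

           #define MODE_EBEAM 0
           #define MODE_XRAY  1

           struct request  { int mode; double dose_rads; };
           struct hw_state { int target_in_path; double beam_intensity; };

           int safe_to_fire(const struct request *rq, const struct hw_state *hw)
           {
               if (rq->dose_rads <= 0.0 || rq->dose_rads > 10.0)
                   return 0;            /* (1) requested dose unreasonable  */
               if (rq->mode == MODE_XRAY && !hw->target_in_path)
                   return 0;            /* X-ray dose but no target in path */
               if (rq->mode == MODE_EBEAM && hw->beam_intensity > 1.0)
                   return 0;            /* (2) beam far too strong for mode */
               return 1;
           }

           int main(void)
           {
               struct request  rq = { MODE_EBEAM, 0.2 };
               struct hw_state hw = { 0, 200.0 };      /* the accident state */
               printf("%s\n", safe_to_fire(&rq, &hw) ? "fire"
                                                     : "refuse: inconsistent");
               return 0;
           }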
       
       Another problem with the Therac software's design was lack of meaningful
       diagnostic feedback.  The error message displayed on the Therac
       console after the fatal irradiations was "MALFUNCTION 70."  Several
       weeks went by before the radiologists and Therac engineers could figure
       out exactly what that meant and what the circumstances were that
       produced it.
245.17. "I sit corrected - not precise enough" by ATLAST::VICKERS (A note's a horrible thing to waste) Sun Jan 11 1987 20:09
    Actually the point that I was trying to make is that we should be
    aware of safety and potential misuse just as we are aware of the
    other factors when we go through the various early stages of product
    development.  Possibly the following softens it with a qualifier:
    
    >    Clearly, we are morally obligated to produce as safe a product as
    >    we _reasonably_ can.

    I fully concur with the view that products designed for idiots
    are only suitable for idiots.  I believe that this is one of the major
    reasons for Digital's success - we expect our customers to be
    intelligent and treat them in that manner.  Much like being a good
    parent.

    As always, we need to think about the various users of our products.
    As we move into more and more end user markets this will become
    more important and .0 even more important.
    
    Nobody said it was going to be easy - just a lot of fun.
    
    Don
245.18. by CSSE32::MERMELL (Andy Mermell) Sun Jan 11 1987 20:22
re: .-1 

The postscript seems to contradict the impact of the reply
itself. The X-ray machine was presumably operated by a trained
operator, not a casual consumer. 

What makes the story particularly relevant is that the problem
could be one that might be described as a "human interface" bug. 
A couple of years ago very few people would even imagine that
there could be such a thing as a human interface bug. 

By the way, we should be careful in discussing this
incident, now that we know a PDP-11 was involved.  Lord knows
what could get used how in a lawsuit.

For what it's worth, when I think about engineering safety, I 
don't get angry at over-protected lawnmowers.  I get angry at
things like the decision to save a few million dollars (x cents
times y million cars) by leaving out the plastic inserts which 
prevent the gas tank from bursting into flame when rear-ended.  Or
at the asbestos company which for decades suppressed medical evidence 
about the effects of asbestos.

In other words, right now I'm more worried about under-regulation 
than over-regulation.  When the government gets too concerned
about safety and needs more laissez-faire, I'll be the first
to let you know!
245.19. "Placing Value on Human life" by DEREP::JONG (Steve Jong/NaC Pubs) Mon Jan 12 1987 09:32
    [Re: .16]:
    
    In a follow-up entry to the RISKS Digest entry listed in .0, someone
    raised as a concern just the point made in .16 -- that safety can
    be assigned a dollar value.  Weighing the potential cost of lawsuits
    against the estimated cost to make a product safer seemed to the
    contributor, and to me, an odious calculation.
    
    	"If you don't install this safety interlock, we'll be sued for
    a million bucks!"
    
    	"But Griswold, installing your interlock will cost us a million
    and a HALF."
    
    P.S.:  I don't recall where I read that point.
245.20. "Perfect safety is impossible" by WHYVAX::HETRICK (Brian Hetrick) Mon Jan 12 1987 16:41
	  I believe there are at least four questions of interest here:

       o  Legal liability, should a misengineered product cause damage or
	  death;
       o  Moral responsibility, should a misengineered product cause damage
	  or death;
       o  Legal liability, should a correctly engineered product cause
	  damage or death;
       o  Moral responsibility, should a correctly engineered product cause
	  damage or death.

          Legal liability for misengineered products is a well-known area:
     awards for negligent engineering have been made, and continue to be
     made.  The only item of note is that computer systems have
     historically tended to be judged by the 'strict liability' theory for
     negligence in computer system construction, where the systems
     manufacturer is liable only for the cost of the system, rather than
     for consequential, incidental, punitive, or other damages.  However,
     recent cases appear to be reversing the application of the strict
     liability theory, making computer systems manufacturers liable for
     consequential, incidental, punitive, or other damages.

          Legal liability for correctly engineered products is different.  I
     believe there are two different situations covered here:  products
     which are designed to damage the user, and products which are not
     designed to avoid damaging the user.

	  I know of no court cases dealing with situations where the
     product was designed to damage the user.  There may soon be one:  an
     interesting description of such a product is in this month's BYTE
     magazine, in one of the letters to Jerry Pournelle, where a user
     describes how a clone BIOS noted that its 'logo' (sign on message?)
     had been changed, and deliberately destroyed the data stored on the
     user's disks.  In the described situation, the logo was legitimately
     changed by a user who needed to modify some device description tables
     held in the ROM, and then needed to change another byte -- any other
     byte -- to make the ROM's checksum correct.  Not being an attorney, I
     cannot have an opinion:  but as an interested layman I speculate that
     the manufacturer of the BIOS may have acted unwisely in this case, and
     made itself liable for civil actions.  It is intriguing to speculate
     as to the manufacturer's potential criminal liability should this BIOS
     find application in a computerized life support system or other
     life-critical application.

	  More usual is a situation where a product is not designed to
     avoid damaging the user in all situations.  In this situation, a
     potential feature of a product is not included, due to engineering
     considerations, and the lack of that feature causes damage.  The
     engineering is not in this case necessarily negligent:  the feature is
     excluded deliberately rather than by accident.  Consider a
     hypothetical example of a voice-actuated door lock:  suppose the lock
     is such that should the legitimate user have laryngitis, the lock will
     not operate.  Suppose further such a situation occurs on a cold day
     and the user suffers frostbite as a result of being locked out of his
     or her house.  Whether the lock manufacturer would be liable in such a
     case, for failing to provide a 'backup' opening mechanism, is
     questionable -- especially should the manufacturer include a prominent
     disclaimer with the lock, clearly stating that users suffering from
     laryngitis, sinus conditions, allergies, or other conditions affecting
     voice volume or timbre may not be able to reliably cause the lock to
     operate.

	  This situation is of especial interest to engineers.  Note that,
     in technical terms, the lock has performed properly:  it has not
     operated for an unrecognized voice.  But in human terms, it has
     performed improperly:  it has not operated for its legitimate user.
     Equating 'legitimate user' with 'recognized voice' is a feature (in
     fact, likely the major selling point) of the lock.  The product is
     performing its function:  but its function is inappropriate to the
     situation.

	  Can a manufacturer be legally liable because a product acts
     exactly as intended?

	  Until recently, the answer has been 'no.'  Even intrinsically
     dangerous products can be manufactured and sold without liability,
     should such manufacture and sale be otherwise lawful.  Consider the
     recent cases where awards have been sought from cigarette
     manufacturers based on the intrinsic risk associated with cigarette
     smoking.  Such awards have been repeatedly denied:  regulation of
     enterprises with intrinsic risk has traditionally been the subject of
     legislation, rather than of case law, and the manufacture and sale of
     cigarettes is undeniably lawful.  Only when a manufacturer is
     negligent, and the risk associated with the product exceeds the
     intrinsic risk associated with the class of product, is there liability
     for damage.

	  This traditional exemption of intrinsic risk from product
     liability actions is being vigorously attacked.  Cigarette and small
     arms manufacturers in particular are being sued for liability for
     products that are not defective, but rather intrinsically dangerous.
     Should this legal onslaught succeed, the implications are frightening
     for all engineers:  courts may hold manufacturers and sellers, and
     possibly the designing engineers, liable should the product be judged
     incorrect.  That a product is 'incorrect' is much more subject to
     interpretation than is that a product is 'defective.'

	  As Paul Winalski notes in .16, "product designs always assume a
     certain level of intelligence and common sense on the part of a user."
     This is possible only because the courts have adopted the 'reasonable
     man' theory:  any person is supposed to be able to foresee the
     consequences of his actions as well as a 'reasonable man,' and one
     need not protect against misuse which the 'reasonable man' would not
     do.  This legal theory, together with the legislative rather than
     juridical regulation of intrinsic risk, permits much of modern life:
     for example, it is possible to design an 'intrinsically safe'
     electrical outlet, one which *cannot* deliver an electrical shock to
     an animal or a human being, one which *cannot* produce a spark and
     detonate explosive fumes, one where the prongs of the mating plug
     *cannot* be exposed.  But such an outlet would be almost impossible to
     use, would be perhaps a cubic foot in volume, would not function were
     an animal within its sensing distance, and would cost perhaps two
     orders of magnitude more than the $1.30 or so an electrical outlet
     currently costs.

	  Despite this, the outlet would not in fact be intrinsically safe.
     It would be intrinsically safe against *currently recognized risks.*
     Perhaps the low level of ions given off by electrical outlets is a
     deadly risk -- if so, this 'intrinsically safe' outlet might not
     protect against that.

	  Imagine life where all constructs had to be 'intrinsically safe,'
     proof against *all* currently recognized risks.

          Regardless of questions of legal liability, I believe it immoral
     to design an artifact which is intrinsically unsafe.  However, perfect
     safety cannot be achieved without excessive cost, if at all.  There
     must therefore at some point be a balance between product cost and
     product safety.  Such a balancing point would depend upon a variety of
     factors (product cost, cost of competing products, intended use,
     intended user, actual use, actual user, etc.), most of which are *not
     known* to the designing engineer.  Given that, I would suggest that
     products be designed with conservative safety guidelines -- and adjust
     those guidelines guided by experience.

	  This may be experimenting on an unsuspecting public, but if so it
     is unavoidable.  In defense of the engineer, I note that ever since
     the industrial revolution, and perhaps before, mankind has chosen to
     perform actions without judging, or knowing, or caring to investigate,
     the risks.  Consider the artificial additives in your diet:  these
     chemicals do not occur naturally in the food chain, most of them have
     been synthesized only in the last ten years, most of them have been
     tested only for animal toxicity;  what is their long-term effect?  We
     may -- may, not will -- know in another fifty years.  Disastrous
     effects that appear only after a *very* long time are not impossible.
     Consider coal:  it took 200 YEARS to discover that burning coal
     produces acid rain which destroys croplands.

	  Perfect safety is impossible.  The best we can do is to protect
     against the risks we can recognize as such.  Even there, we can
     protect against only the most common or most dangerous of these risks.

				  Brian Hetrick
245.21. "an experience with imperfect safety..." by RDVAX::KENNEDY (time for cool change) Mon Jan 12 1987 19:04
    Re .20 and others:
    
    To pick up on Brian's comments, I had experience with a 'product
    not designed to avoid safety problems': 
    
    In the mid '70s a scare arose concerning asbestos in hairdryers.
    I was with a reputable company which prided itself not only on its
    ethics but on the 'consumer perceived quality'. Only a few thousand
    dryers had been manufactured using asbestos eight years before;
    however since that time the product became a commodity, the
    price-oriented knockoffs appeared, and the market was flooded with
    a real safety problem. The Consumer Product Safety Commission, although
    well intentioned, was without clear leadership and rather than invoking
    court suits, began Congressional hearings. Our engineers were found
    to have been the ONLY people in the USA ever to have tested emitted
    particles and therefore became the drafters of criteria for an edict
    of recall. Unfortunately since the data was sparse on the relative
    hazard to humans, the recall involved many millions of dryers
    manufactured by reputable folks who switched to better designs.
    
    A point: I fear that an unknown issue could arise due to the
    cost-oriented clone manufacturers 'bending' the reputation of good
    designs in our business, and we could incur harm. This past experience
    would teach us to:
    
    o document everything well in a logical manner
    o be supportive of management and legal staff should something arise,
      because they'll need it  
    o do not assume that design/safety decisions may hinge on legal
      precedent, especially with changing technologies
    
245.22READ DEC STDS 060 & 119BARNUM::RAINSTue Jan 13 1987 07:398
    Before this note flames any further I suggest that you all read DEC STDS
    060 and 119. Digital has people that are well versed in the legal and
    technical aspects of this subject (i.e. product liability). There
    is a well thought out, documented process in place to address these
    issues at Digital.
    
    -Mike Rains
    
245.23. "but don't stop talking about such things" by BEING::MCCULLEY (RSX Pro) Tue Jan 13 1987 12:21
    first, thanks to PSW for the info in the PS to .16 - I found it
    both interesting and very valuable to me.
    
    re .22 - I hope that this note will continue to "flame" (actually,
    I thought the contributions too well reasoned and objective to be
    considered flames) regardless of DEC STDS 060 and 119.  I concur
    that all participants should read them, I had not been aware of
    them until this mention so have not yet read them myself but will
    soon do so.  So if nothing else this discussion might contribute
    to awareness of applicable DEC STDS.  Beyond that I think there
    is a real need for each of us to examine these issues for ourselves,
    and this kind of thought-provoking dialogue is certainly helpful.
    I see the "well thought out, documented process" referred to in .22 as
    a formal organizational structure complementary to the informal
    discussion of issues in conferences such as this one, and would
    be very disturbed if the existence of such formal structures were
    deemed inimical to open conference discussions (I hope that's not
    what was suggested in .22 but it seems too close for comfort to
    me).
    

    re .18:
.18>    By the way, we should be careful in discussing this
.18>    incident, now that we know a PDP-11 was involved.  Lord knows what
.18>    could get used how in a lawsuit. 

    This concerns the hell out of me - it is a pragmatic reality that we
    must be concerned with legal consequences, but it could be just as
    devastating to let this fear interfere with communication and then to
    discover that information was known to one part of the organization but
    not disseminated and that problems (and potential legal liability)
    resulted.  I would strongly urge that thoughtful communication about
    such experiences continue openly; it is the only way for others to
    learn and thus to avoid repeating similar mistakes themselves. (This is
    why I thanked PSW for the postscript to .16).  I'd also point out that
    the hiding of health consequences from asbestos mentioned in an earlier
    response did not reduce consequences, if anything it aggravated them.
    Based on that, I would perceive "the right thing" to be informed,
    objective discussion of potential hazards because, regardless of
    potential negative consequences of such discussions there are more
    serious problems with not exploring hazardous issues.  BTW, from recent
    news reports I perceive the health studies at the Hudson plant to show
    this approach.
    
    Of course, the open discussion I advocate must be subject to corporate
    policies, DEC STDS, and relevant points of law - so it certainly
    is *NOT* a topic for irresponsible flaming!
    
245.24. "No Offense Intended" by BARNUM::RAINS Tue Jan 13 1987 12:52
    	Re. .23- My point is that many of the concerns, questions, etc.
    contained in the replies to this topic will be assuaged by careful
    study of the two DEC stds. that I mentioned. They were developed
    to address exactly the same ethical, legal, and practical issues being
    discussed here. If engineers want to "do the right thing", the first
    step is to understand the "formal" process that exists to help them 
    do so. 
    
    
    
245.25. "I typed WHAT?" by MINAR::BISHOP Tue Jan 13 1987 20:00
    Do the standards mention the desirability of keeping a record
    or "audit trail" of user inputs?  It seems that in cases like
    the death-from-x-rays one mentioned before, having the exact
    character sequence of the operator's input would be a big help
    in A) debugging and B) assigning guilt.  Obviously this record
    would not be user-deletable...
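
    A sketch of the sort of thing I have in mind (names invented, nothing
    fancy):

        /* Append every operator command, with a timestamp, to a log file
         * that the normal user interface gives no way to erase.  Names
         * are invented for illustration.
         */
        #include <stdio.h>
        #include <time.h>

        static void log_command(FILE *audit, const char *cmd)
        {
            char stamp[32];
            time_t now = time(NULL);
            strftime(stamp, sizeof stamp, "%d-%b-%Y %H:%M:%S", localtime(&now));
            fprintf(audit, "%s  %s\n", stamp, cmd);
            fflush(audit);                  /* so a crash doesn't lose it */
        }

        int main(void)
        {
            FILE *audit = fopen("OPERATOR.LOG", "a");   /* append only     */
            if (audit == NULL)
                return 1;
            log_command(audit, "SET MODE=XRAY DOSE=0.2");
            log_command(audit, "SET MODE=EBEAM");       /* typed-ahead fix */
            fclose(audit);
            return 0;
        }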
    
    			-John Bishop
245.26. "No Problem- No Blame" by BARNUM::RAINS Wed Jan 14 1987 08:49
    1. Blame- In general, the standards emphasize designing to prevent
    problems. Spending time on designing ways to fix the blame after
    a problem occurs isn't generally considered constructive. 
                                                              
    2. Diagnostics- Quicker diagnosis of the reasons why the first incident
    occurred might have prevented the second. I don't know if any
    diagnostics could have completely recreated the failure scenario.
    The scenario appeared pretty Byzantine in nature.
    
245.27. by PSW::WINALSKI (Paul S. Winalski) Wed Jan 14 1987 11:06
RE: .18

The postscript does not contradict the main text of .16.

After the typed-ahead correction was made, the operator's console said that
the machine was set to deliver an electron beam, at (say) .2 rads exposure,
when in fact the beam intensity was (say) 200 rads.  There is no way that
any operator, novice or experienced, could tell from the console display
that anything was wrong.  This is just a simple, out-and-out bug where the
displayed information did not match reality, and where, in fact, the
applications software had put the hardware into a should-not-occur state
but failed to detect this.

The point of the main text is that safety design always has to take into
account the audience of the product and always makes assumptions about what
"the reasonable man" will do.  The postscript illustrates a case where, due
to a bug in the applications software, no man, no matter how reasonable,
could have told that the machine was set improperly until after it had killed
the patient.  The postscript also suggests a final safety check that could
have caught this problem after it occurred but before it caused life-threatening
damage.

--PSW
245.28. by CSSE32::MERMELL (Andy Mermell) Sat Jan 17 1987 11:39
re: .23

I completely agree that open discussion is needed.  What I was concerned about
was the hypothetical scenario where a lawyer in a large damage suit
against DEC introduces a copy of a Note which reads "it's obvious that the
PDP-11 killed the patient and the accident could have been prevented by
changes to its microcode" and says "here's what an engineer at DEC thinks."
Of course such a note would be wrong and a complete flame, but who knows what a
jury would make of it.  Therefore, in this case, where at least the potential 
for a lawsuit exists, think before you flame.

re: .27  

Yup, I misread parts of .16.
245.29. "What about true AI?" by SLDA::OPP Wed Jan 28 1987 12:57
      If you want an interesting engineering ethics question, consider
    this possible future scenario.  Assume that AI concepts reach the
    point where constructing equivalent-to-human-intelligence machines
    is possible.  Should such a machine (or robot) be built?  If it
    is built, how do you train it to be protective of human life, etc.?
    Are Asimov's Three Laws of Robotics sufficient?  Is Hogan's training
    method in "The Two Faces of Tommorrow" applicable?  What rights
    would have such a machine have in a legal, constitutional sense?
    
      I believe that ultimately these are the kinds of moralistic
    questions the computer & robotics industries will have to deal
    with.  "Doing the right thing" was never easy and doesn't seem
    to be getting any easier.
    
    Greg      SLDA::OPP
    
245.30. "Strong Stuff, indeed" by BENWAY::HAMBY Thu Jan 29 1987 16:52
    I strongly recommend Joseph Weizenbaum's "Computer Power and Human
    Reason" to anyone interested in the issues raised in .29.