
Conference turris::languages

Title:Languages
Notice:Speaking In Tongues
Moderator:TLE::TOKLAS::FELDMAN
Created:Sat Jan 25 1986
Last Modified:Wed May 21 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:394
Total number of notes:2683

163.0. "Parallel Processing Language Wanted" by ATLAST::WELTON (Blue Romeo *) Thu Jan 28 1988 15:18

    Can someone tell me if they have ever seen a language with specific
    commands or syntax designed to support parallel processing?
    
    douglas
    
T.R  Title  User  Personal Name  Date  Lines
163.1. "Depending on your definition, Ada is designed that way" by TLE::MEIER (Bill Meier - VAX Ada) Thu Jan 28 1988 15:45 (16 lines)
    Ada with multitasking has this ability. Several vendors (but none
    on VMS) have distributed Ada tasks on different (typically tightly
    coupled) processors.
    
    DEC had a RAD project that also demonstrated the feasibility of it
    with VAX Ada. It may become part of the VAX Ada product some day.
    
    Ada provides the rendezvous (synchronization) you need built into
    the language (but at some cost). And the design and execution of
    your tasking Ada program is independent of whether it is running
    on multiple processors. So you can develop today, and run tomorrow
    on a multiprocessor!
    
    However, Ada tasks do not go down to the granularity that a well
    decomposed FORTRAN program might. The benefits/tradeoffs depend
    on the nature of the application.
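    [Ed. note: the rendezvous can be loosely imitated with a pair of
    queues.  A minimal sketch in Python, not Ada -- all names here are
    invented for illustration, and real Ada rendezvous is a language
    primitive, not a library:]

    ```python
    import queue
    import threading

    # Sketch of a rendezvous-style "entry call": the caller blocks
    # until the acceptor has executed the accept body and replied.
    call_q = queue.Queue()   # caller -> acceptor: argument plus reply slot

    def acceptor():
        # accept one entry call, run the accept body, reply
        arg, reply_q = call_q.get()
        reply_q.put(arg * 2)          # stand-in for the accept body

    def entry_call(arg):
        reply_q = queue.Queue()
        call_q.put((arg, reply_q))    # make the entry call
        return reply_q.get()          # block until the rendezvous completes

    t = threading.Thread(target=acceptor)
    t.start()
    result = entry_call(21)
    t.join()
    ```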
163.2. by TLE::JONAN (Into the Heart of the Sunrise) Thu Jan 28 1988 18:07 (15 lines)
    Re: .0
    
    There are, if not "zillions", then at least quite a few.  I would
    definitely count Ada among them.  Also, two highly influential ones
    from Per Brinch Hansen: Concurrent Pascal and DP (Distributed
    Processes);  CSP (Communicating Sequential Processes) from Hoare;
    SmallTalk - you get it for free (the user, that is...);  MODULA-2
    has the primitives to build whatever model your heart desires (usually
    comes with a version of monitors as part of the library, but there
    are CSP and channel based ones available);  Some LISPs, in particular
    VAX Common LISP (I'm not sure if this is specified in CLtL or not...);
    Various flavours of parallel PROLOG (PARLOG);  many more, including
    umpteen wahzooly goof-ball "parallel programming teaching" languages.
    
    /Jon
163.3. "OCCAM" by HACKIN::MACKIN (Jim Mackin, VAX Prolog) Thu Jan 28 1988 19:08 (2 lines)
    I believe that OCCAM (for Transputers) is also designed specifically
    to handle parallel processors. 
163.4. by TLE::NELSON Fri Jan 29 1988 10:03 (18 lines)
    There is an interesting approach to programming huge numbers of
    processors in the "Connection Machine" being built and sold
    by Thinking Machines.  Interfaces have been defined in C and Lisp;
    I'm more familiar with the Lisp version.  The idea is that instead
    of producing more language commands, what is provided is a new data
    type, and means of manipulating it, which represents an "array"
    of processors.  So you can say "add these two things" but get the
    effect of adding lots of pairs of numbers at once.  There is a book by
    Danny Hillis about the machine, and Guy Steele has written and spoken
    about the language modifications made.
    
    There is also experimentation with something called "futures" in
    Multilisp.  There was a paper in the SIGPLAN notices, I think, about
    18 months ago.  A future is something which represents an evaluable
    expression, but which isn't guaranteed to be finished evaluating
    until someone tries to use the result of the future.
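    [Ed. note: the futures idea survives in modern standard libraries.
    A minimal Python sketch using concurrent.futures -- the names here
    are illustrative, not the Multilisp notation:]

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # A future stands for an expression being evaluated elsewhere; the
    # caller only blocks when it first asks for the result.
    def slow_square(n):
        return n * n

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(slow_square, 7)   # evaluation may proceed in parallel
        # ... other work could happen here while fut evaluates ...
        value = fut.result()                # first use forces the value
    ```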
    
    Beryl
163.5. "Some extra detail." by STAR::HEERMANCE (Martin, Bugs 5 - Martin 0) Fri Jan 29 1988 10:19 (22 lines)
    Although some of this was mentioned in the previous notes I'm
    going to add some detail.
    
    Concurrent Pascal has the COBEGIN . . . COEND construct.  All the
    statements between them may be executed in parallel.  If a subroutine
    is called several times inside such a construct, then several instances
    of it execute in parallel.  Critical regions are used to protect data
    structures.
    
    Some C compilers have the fork and join routines. A fork causes the
    current thread to split and become several threads of execution.  A
    join forces a thread to stall and wait for others to complete.  A
    semaphore or critical region can be used to provide interlock.
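    [Ed. note: a thread-based sketch of fork/join in Python (not the
    UNIX fork the C compilers expose; names invented for illustration):]

    ```python
    import threading

    # Starting a thread "forks" a new line of execution; join() stalls
    # the parent until the child completes.
    results = []
    lock = threading.Lock()          # interlock protecting the shared list

    def worker(n):
        with lock:                   # critical region
            results.append(n * n)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()                    # fork
    for t in threads:
        t.join()                     # join: stall until all complete
    ```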
    
    Modula II has COBEGIN and COEND.  A monitor is used to provide
    interlock.  A monitor acts like a room to hold your data with only
    one door.  Only one process can enter the room; all others wait in
    line outside.  Monitors also provide wait queues for waiting on
    the availability of a resource.
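    [Ed. note: a monitor along these lines can be sketched in Python
    with a lock as the "door" and a condition variable as the wait
    queue.  Illustrative names only, not any Modula library:]

    ```python
    import threading

    class ResourceMonitor:
        """Monitor guarding a counted resource: one lock is the single
        door; a condition variable is the wait queue for availability."""
        def __init__(self, count):
            self._lock = threading.Lock()
            self._available = threading.Condition(self._lock)
            self._count = count

        def acquire(self):
            with self._lock:                  # only one process inside
                while self._count == 0:
                    self._available.wait()    # wait in the queue
                self._count -= 1

        def release(self):
            with self._lock:
                self._count += 1
                self._available.notify()      # wake one waiter
    ```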
    
    I have also used a crackpot version of FORTH which provided
    concurrency.
163.6. "Also Algol 68" by COMICS::DEMORGAN (Richard De Morgan, UK CSC/CS) Fri Jan 29 1988 11:09 (4 lines)
    Algol 68 has such a facility via the PAR clause. Unfortunately it's
    rather limited as it must be written with fixed serial clauses.
    This can be got round by (say) passing an array of REF PROCs to
    an activating procedure that invokes PAR.
163.7. "A not very practical language?" by TLE::RMEYERS (Randy Meyers) Fri Jan 29 1988 19:33 (21 lines)
My favorite parallel processing language was one I heard about in college
in my second or third programming class.  My professor had gotten his
PhD the year before on parallel processing, and was full of early 70's
parallel processing arcana.  One pp language that had been devised was
the single assignment language.  The idea was that any variable in the
program could only be assigned to once.  Each assignment expression in
the program could then be assigned to a different processor, and the
processor would stall waiting for all the variables on the right-hand
side of its assignment to become defined.  As soon as they all did,
the processor would then evaluate the expression and perform its
assignment.

The interesting thing about single assignment languages is that the
ordering of the assignment statements in the program doesn't matter.
Since this was long enough ago that I was learning programming by
entering my program on cards to a batch system, this made quite
an impression on me.  The idea of being able to take those card
decks, shuffle them, and have the program still work...

Single assignment languages are a major industry today.  They are
called spreadsheets in "natural" recalculation mode.
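[Ed. note: a single-assignment "spreadsheet" can be sketched in a few
lines of Python.  Everything here is invented for illustration: each
cell is defined exactly once by a formula over other cells, and the
definitions can be listed -- or the card deck shuffled -- in any order:]

```python
# Evaluate a set of single-assignment cells.  defs maps a cell name to
# (formula, input_names); each cell is computed at most once, as soon
# as its inputs are available, regardless of definition order.
def evaluate(defs):
    cache = {}
    def value(name):
        if name not in cache:
            formula, inputs = defs[name]
            cache[name] = formula(*(value(i) for i in inputs))
        return cache[name]
    return {name: value(name) for name in defs}

# "Shuffled deck": c is defined before its inputs a and b.
cells = {
    "c": (lambda a, b: a + b, ("a", "b")),
    "a": (lambda: 2, ()),
    "b": (lambda: 3, ()),
}
values = evaluate(cells)
```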
163.8. by TLE::JONAN (Into the Heart of the Sunrise) Fri Jan 29 1988 19:41 (94 lines)
>    Some C compilers have the fork and join routines. A fork causes the

    I don't think this satisfies the requirements of .0 as it is just
    an *explicit* call out to an *OS* (UNIX) primitive and so any language
    that can call kernel routines can do this.  Also, as "fork" is a
    completely uncontrolled form of providing ||-ism it doesn't seem
    appropriate for this reason either (well, there is the JOIN concept,
    but the two together are much less than the semaphore model [itself
    no great shakes...]).


>    Concurrent Pascal has the COBEGIN . . . COEND construct.  All the

    Concurrent Pascal does *not* have these constructs.  It has the TYPEs,
    PROCESS, MONITOR and QUEUE, and the statements INIT, DELAY and CONTINUE
    for implementing ||-programming.  Process types are defined by the
    programmer which in turn are used to declare variables denoting program
    units that are to execute in parallel; after INITing them they go on their
    merry way.  Monitor types are used to declare monitors which are the means
    by (through) which processes can communicate with one another (the monitor
    type is just an intrinsic implementation of the monitor model in the
    language).  Syntax:

	process_type ::= TYPE id = PROCESS "(" formals ")"
				       local_decls
				       body_block
				   END
	monitor_type ::= TYPE id = MONITOR "(" formals ")"
				       local_decls
				       entry_decls
				   BEGIN
				       initialization_stmts
				   END

    


>    Modula II has COBEGIN and COEND.  A monitor is used to provide

    MODULA-2 does *not* have these statements or construct!  There are
    library modules available that implement this model but it is not
    the only one available and certainly not intrinsic.  As I previously
    indicated the only intrinsics for ||-ism are those that reside in the
    SYSTEM module: PROCESS, NEWPROCESS, TRANSFER, and IOTRANSFER.  Wirth's
    "standard" Processes MODULE implements the monitor model (though it is
    pretty crude in comparison to many others available [and included in
    some implementations]).  Though there is some controversy over this
    set of primitives (in particular IOTRANSFER) they allow you to easily
    build any ||-programming model(s) you want, encapsulate it in a module(s),
    and stick it "on the shelf" for later use.  For example, they allow the
    easy and elegant construction of the structures and constructs used in
    Concurrent Pascal.  Also, notice that the COBEGIN..COEND construct is a
    rather strict form of "structured" ||-programming (compare with the "fork"
    construct).  It is a *static* representation of the dynamic activity in
    the program.  Only those elements explicitly listed in between these
    brackets execute in parallel and they all must finish before the construct
    itself exits.  However, it says *nothing* about any *interaction* between
    these elements during their execution!


>    Algol 68 has such a facility via the PAR clause. Unfortunately it's

    ALGOL-68 also has a SEMA type and the DOWN and UP operations for intrinsic
    implementation of the semaphore model (the worst [ie. most dangerous]
    of the various ||-programming models).  There are basically two flavours
    of the COBEGIN..COEND construct available: one where the elements listed
    (comma separated list between parens?) are not meant to have any interaction
    during execution and one where they are.  The one where interaction is
    intended is indicated by means of the PAR clause: you prepend PAR to the
    specified list.  I'm not sure of the specifics of this as you,
    Joe-programmer, must explicitly control the interaction by means of the
    SEMA stuff.


    General:

    The semaphore and monitor models of ||-ism both require and assume a
    shared memory environment in which to operate.  For distributed processing
    (where there are multiple CPUs with their own *private* memory only) you
    need some form of the message-passing model.  There is a lot more involved
    in the particulars of this model than that of the others, as you might
    well expect.  Examples of some languages which implement (with varying
    degrees of success) an intrinsic message passing model are: CSP, DP,
    SmallTalk, Occam, and Ada (in no particular order...).
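    [Ed. note: the message-passing model can be sketched in Python with
    a queue standing in for a channel -- illustrative only, not CSP or
    Occam syntax.  The two parties share no data, only messages:]

    ```python
    import queue
    import threading

    # Producer and consumer keep their state in private variables and
    # interact solely through the channel (cf. CSP/Occam channels).
    channel = queue.Queue()

    def producer():
        for n in range(3):
            channel.put(n)       # send
        channel.put(None)        # end-of-stream marker

    def consumer(out):
        while True:
            msg = channel.get()  # receive (blocks until a message arrives)
            if msg is None:
                break
            out.append(msg * 10)

    received = []
    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer, args=(received,))
    p.start(); c.start()
    p.join(); c.join()
    ```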


    Re: .7
    
>  an impression on me.  The idea of being able to take those card
>  decks, shuffle them, and have the program still work...

    Wow!  Those were the days: *physical* data!! :-)

    /Jon
163.9. "Closed, collateral and parallel clauses in Algol-68" by DENTON::AMARTIN (Alan H. Martin) Thu Feb 04 1988 20:45 (23 lines)
Re .8:

As I recall reading, the three types of composers in Algol-68 you are thinking
of are:

Closed clauses - (e1; e2; ...; en) or begin e1; e2; ...; en end.  They are
just like Bliss compound statements.

Collateral clauses - (e1, e2, ..., en) or begin e1, e2, ..., en end. The
expressions ei and their subexpressions are evaluated in some order, but the
compiler gets to pick the order.  This is like the rules for evaluation of
operands in expressions and arguments in function calls in C (and other
languages, no doubt).  However, while the order of evaluation is not known
by the programmer, only one thing should be happening at a time.  It is no more
a matter of parallelism than the C question "What does printf("%d %d\n", i++, i)
print" is.  (Note that unlike at least one other language with similar appearing
syntax, the Algol-68 expression (i := j, j := i) is not guaranteed to exchange
the values of i and j, although certain collateral evaluations of it would
coincidentally have that effect).

Parallel clauses - par(e1, e2, ..., en) or par begin e1, e2, ..., en end.
The expressions ei may be evaluated in parallel.  All bets are off.
				/AHM
163.10. "Thanks, but..." by TLE::JONAN (Into the Heart of the Sunrise) Fri Feb 05 1988 12:39 (12 lines)
    Re: .9
    
    Thanks for clearing this up.  The two cases I was thinking of are
    the "collateral" and parallel clauses.  However, I guess I'm now
    unsure of the point of collateral clauses.  I'm thinking of why
    it was thought that such a thing as this would be useful to the
    *programmer* (I understand the rationale for picking evaluation orders
    for argument passing and expression evaluations - but these concern
    compiler implementation decisions/tradeoffs).
    
    /Jon
163.11. "A use for collateral clauses" by DENTON::AMARTIN (Alan H. Martin) Fri Feb 05 1988 18:07 (14 lines)
Re .10:

Well, I don't know what motivated the authors of Algol-68, but I believe that a
collateral clause could be useful if you have some code which you can prove is
insensitive to assignment order, and you are writing it for a pipelined machine
which can benefit from instruction scheduling.  Now, a good compiler for such a
machine will truck around proving that it can reorder operations to speed the
program up.  However, it might not hurt to put a feature in the language that
lets you give the compiler a hint, as long as you don't blow it by lying to the
compiler (compare this to the new noalias type specifier in the draft proposed
ANSI C standard).  In fact, if you are generating Algol-68 as object code, it
might be a downright boon, since the pre-compiler might take on the work of
proving that expressions are not sensitive to execution order.
				/AHM
163.12. "Probably inappropriate..." by TLE::JONAN (Into the Heart of the Sunrise) Sun Feb 07 1988 12:48 (21 lines)
    Re: .11

> Now, a good compiler for such a machine will truck around proving that it
> can reorder operations to speed the program up.  However, it might not hurt
> to put a feature in the language that lets you give the compiler a hint, as
> long as you don't blow it by lying to the

   While I thought something like your explanation would be "the" justification,
   I tend to go along more with the above passage, i.e., things of this sort
   *should* be left to automatic means (translators, interpreters...).  First,
   it is likely to be *safer*, as you indicate.  Second, it is likely that
   Joe-average user will not be able to do as good a job as a good (not even
   great) compiler.  Third, this sort of thing *does not* belong on "the
   shoulders" of the programmer - why should he have to worry about how his
   program will be staged to run on *some* target machine?  After all, he's
   worrying about how to solve *his* problem, not the best way to structure
   his code to run on machine X...

   /Jon
    
163.13. "Probably partitions the Real Programmers and Quiche Eaters" by DENTON::AMARTIN (Alan H. Martin) Sun Feb 07 1988 15:16 (24 lines)
Re .12:

Or perhaps the availability of collateral clauses may just have been an impulse
on someone's part to provide explicit access to semantics which are bundled in
with other features.  Pagan's "A Practical Guide to Algol 68" states that all of
the following constructs are elaborated collaterally:

"
the two sides of an assignation
parts of a declaration separated by commas
operands of a dyadic formula
subscripts or trimmers in a slice
actual parameters in a call
items in a data list in a transput statement
the from, to, and by parts of a loop clause
elements of a row or structure display
"

It appears that closed clauses may appear anywhere a collateral clause can, so
collateral clauses aren't placed on anyone's shoulders.  Actually, I'd expect
that, like the goto, problems would predominate in the code of people who have
no worries at all about using the construct because they don't understand how
to use it properly.
				/AHM
163.14. by TLE::JONAN (Into the Heart of the Sunrise) Sun Feb 07 1988 16:15 (8 lines)
    Re: .13

> collateral clauses aren't placed on anyone's shoulders.  Actually, I'd expect
    
    Oooops, didn't mean to imply that they were, only that their supposed
    purpose for being shouldn't be so placed :-)
    
    /Jon
163.15. "from the Informal Introduction" by MOIRA::FAIMAN (Ontology Recapitulates Philology) Mon Feb 08 1988 09:39 (50 lines)
    Here's the discussion from the _informal_introduction_to_ALGOL_68_
    (Lindsey and van der Meulen), section 3.7.1 (Collateral clauses):
    
        Collateral clauses include such things as structure-displays
        and row-displays, but the ones we are particularly interested
        in at the moment are void-collateral-clauses.  These consist
        of a list of two or more *void* units separated by commas,
        and enclosed between *begin* and *end*, or between "(" and
        ")":
        
        (E1)	( x:=1, y:=2, z:=3 )
        
        These three statements are elaborated "collaterally".  There
        is not likely to be much gain in using collateral-clauses
        this way unless your hardware contains three central processors
        (so that they can do a statement each), or unless you have
        reason to believe that your compiler is sufficiently clever
        to discover that they can be done more efficiently in an
        order other than that in which they were written down.
        Alternatively, it might be the case that one of the statements
        was likely to get held up awaiting some event in real time
        (transput perhaps), in which case the others would be carrying
        on.  This situation is more likely to arise when parallel
        clauses are used (see next section).  In the meantime we
        must consider exactly what "collateral" means.
        
        ...
        
        Suppose two phrases A and B (it could be more) are to be
        elaborated collaterally.  Then the elaboration of A may be
        merged in time with that of B in a manner left quite undefined
        by the Report.  So long as the elaboration of A has no side
        effect upon that of B, and vice versa, then the manner of
        this merging has no effect on the result---otherwise, anything
        might happen.  Normally, the two elaborations would proceed
        until both were completed, but if one were terminated by
        a *go to*, then the other would be stopped abruptly at whatever
        stage (if any) it had reached.
        
        In practical compilers, it is probable that A would be
        elaborated first and then B, or vice versa, but one is not
        entitled to make any assumptions based on this.  
        
    The next section, 3.7.2, discusses parallel clauses.  A parallel
    clause is a void-collateral-clause preceded by *par*, and behaves
    much like a collateral clause, except that *sema* variables
    (semaphores, manipulated with Dijkstra's P and V primitives)
    are available to handle synchronization between their branches.
    
    	-Neil
163.16. "Some more Algol 68 parallelism" by COMICS::DEMORGAN (Richard De Morgan, UK CSC/CS) Thu Feb 11 1988 03:48 (36 lines)
    The Revised Report, section 3.3 states:
    
    [Collateral-clauses allow an arbitrary merging of streams of actions.
    Parallel-clauses provide, moreover, levels of coordination for the
    synchronization (10.2.4) of that merging. ...
    
    Example of a parallel-clause which synchronizes eating and speaking:
    
        PROC VOID
	    eat,
    	    speak;
        SEMA
    	    mouth = LEVEL 1;
    
    	PAR
    	    BEGIN
    	    DO
    		DOWN mouth;
    		eat;
    		UP mouth
    	    OD,
    	    DO
    		DOWN mouth;
    		speak;
    		UP mouth
    	    OD
    	END.]

    Note the apparently curious declaration of mouth. This is because
    SEMA is defined as
    
    	MODE SEMA = STRUCT(REF INT f);
    
    and
    
    	OP LEVEL = (INT a) SEMA: (SEMA s; f OF s := HEAP INT := a; s);
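    [Ed. note: for comparison, the Report's eat/speak discipline can be
    re-sketched with a Python semaphore -- DOWN mouth becomes
    mouth.acquire() and UP mouth becomes mouth.release().  Names and
    loop bounds here are invented for illustration:]

    ```python
    import threading

    mouth = threading.Semaphore(1)     # SEMA mouth = LEVEL 1
    log = []

    def use_mouth(action, times):
        for _ in range(times):
            mouth.acquire()            # DOWN mouth: wait for the mouth
            log.append(action)         # eat / speak (mutually excluded)
            mouth.release()            # UP mouth: free the mouth

    eater = threading.Thread(target=use_mouth, args=("eat", 2))
    speaker = threading.Thread(target=use_mouth, args=("speak", 2))
    eater.start(); speaker.start()
    eater.join(); speaker.join()
    ```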
163.17. "SISAL still around?" by UTRUST::DEHARTOG (Still exploring flat ascii-files) Thu Feb 11 1988 07:21 (17 lines)
Hi,
	while cleaning up my stuff (moving again) I found the handouts
	from a session at DEES (about a year ago). It was Bill Noyce who
	spoke about Multiprocessor SISAL (Single-Assignment Language).
	At that time it was a research-project but with some features
	to be included in the language at certain times.
	Some features of that language: pure functions (no side effects),
	structured objects (arrays, records), update operations have
	COPY semantics, "else" not optional after "if", implementations
	use reference counts and/or compile-time analysis to avoid copy,
	type "stream" can mimic producer-consumer, main process runs all
	the serial code and a share of parallel code, subprocess(es) run
	a share of parallel code (or are idle), global sections used to
	share memory for heap, stack, control, and code.
	
	Guess you have to find out if he and/or his project still exist
	within digital.
163.18. by TLE::JONAN (Into the Heart of the Sunrise) Thu Feb 11 1988 12:55 (10 lines)
    Re: -.1

> Guess you have to find out if he and/or his project still exist
> within digital.
    
    He still exists, I talked to him just the other day :-)!  I believe
    that he is now part of the Common Multithread Implementation project
    (CMI).
    
    /Jon
163.19. "Yes, I still exist" by WIBBIN::NOYCE (Bill Noyce, Parallel Processing A/D) Thu Feb 11 1988 13:22 (11 lines)
    Actually, I'm not working (directly) on Multithread, but I do keep
    my fingers in it.  I'm doing various advanced-development things
    related to parallel processing.  :-)
    
    The SISAL project is finished, but it served as a useful prototype
    for the runtime support for manual decomposition in VAX Fortran
    V5.  (Some features of SISAL I had hoped to do, like parallel execution
    of functions that return streams, never happened.)
    
    Readers of this discussion might be interested in the TLE::PPAPPL
    (parallel processing applications) conference.
163.20. "LLNL on SISAL (also, free Cray 2 time)" by STAR::PRAETORIUS (mwlwwlw&twwlt) Thu Sep 09 1993 15:27 (171 lines)
Path: faui43.informatik.uni-erlangen.de!fauern!Sirius.dfn.de!darwin.sura.net!wupost!usc!elroy.jpl.nasa.gov!ames!sun-barr!lll-winken!sisal.llnl.gov!cann
From: [email protected] (David Cann)
Newsgroups: comp.lang.functional
Subject: Which functional languages are most widely used?
Message-ID: <[email protected]>
Date: 29 Jan 92 22:37:33 GMT
Sender: [email protected]
Organization: Lawrence Livermore National Laboratory
Lines: 159
Nntp-Posting-Host: sisal.llnl.gov


>> which functional languages are most widely used 
>> in the computing community, commercial or otherwise?

>> Say the top 4.

Sisal, Id, and Lisp are high on the list.  What follows is our standard
Sisal overview.  The current sisal compiler is available on sisal.llnl.gov
using anonymous ftp (~ftp/pub/sisal/osc.v10.4.tar.Z).  Release 11.0 will
be available within the next two weeks.  Also find enclosed an announcement
for the Sisal Scientific Computing Initiative offering free Cray 2 time
to those willing to write their applications in Sisal. We currently have
over 33 principal participants, most of whom have 1-2 collaborators.
For more information about SSCI, write John Feo at ([email protected]).

Current Sisal performance on the Crays competes well with FORTRAN for most
applications. For example, John Feo has developed a Sisal 1-D FFT function
that runs as fast as cfft2 on the Cray 2 at NERSC on one head, and gets
good speedup when using all 4 heads.

Dave Cann ([email protected])

--------

OVERVIEW: The SISAL Project: Summary, Results and Future Directions
          Dave Cann L-306
	  LLNL
	  P.O. Box 808
	  Livermore, CA 94550

 Introduction
   Sisal is a general purpose functional language for parallel numeric
   computation. Functional languages promote the construction of correct 
   parallel programs by isolating the programmer from the complexities of 
   parallel processing. Further, because of their mathematical underpinnings, 
   functional languages enhance the automatic exploitation of parallelism.
   Sisal in particular does not specify a particular execution model, nor
   does it assume the existence of special hardware. For example, Sisal was 
   the language of choice for the Manchester dataflow machine and also runs
   on the Cray X-MP, Y-MP, and Cray 2. A primary goal of the Sisal project 
   is to implement the Sisal programming language on both conventional and 
   distributed memory multiprocessors. Additional project goals include:

    * Define intermediate forms for applicative and functional languages.
    * Develop compiler optimization techniques for high-performance 
      functional computation.
    * Build run time systems that support high-performance functional computing 
      on distributed-memory, massively-parallel systems.
    * Write Sisal applications of interest to specific program groups
      within and outside Lawrence Livermore National Laboratory.

 Motivation for Functional Languages
   Despite the commercial availability of multiprocessor computer systems, the 
   number of parallel scientific and commercial applications in production use 
   today remains virtually zero. For example, at the national laboratories, 
   most multiprocessor systems are used as multiple single-processor machines, 
   greatly reducing their computational power. The reason for the lack of 
   parallel software is that creating correct, determinate parallel programs 
   is arduous and error prone. Three possible solutions have emerged:

    * Develop compilers that automatically parallelize imperative languages.
    * Extend imperative languages with constructs and primitives to express
      parallelism.
    * Develop new languages for parallel computing.

   Even after extensive research and development, automatic parallelizing 
   compilers for imperative languages have not met expectations. The fault
   is not entirely with the compilers themselves. Developed for sequential 
   machines, imperative languages are based on a model of computation that 
   assumes a single processor and a single instruction stream. The model is 
   not a natural candidate for parallel computing, and is not well suited 
   for analysis. While small imperative programs may optimize well,
   larger, more complex codes quickly thwart effective optimization.
   Understanding the behavior of a large imperative code requires global
   analysis, which is complex, time consuming, and conservative.
   If the compiler cannot resolve whether a potential dependency is real
   and will prevent parallelization, it must assume the worst, and accept
   the potential increases in synchronization and communication costs and 
   decreases in parallelization. Because of the many imperative codes in use 
   today, however, these compilers will remain an important alternative for 
   some time to come. Unfortunately, because of the sequential nature of 
   imperative computation, they probably will never represent a long term 
   solution.

   Extending imperative languages with constructs that allow the explicit
   expression of parallelism has proven difficult and error prone.  The
   lack of standardization in this regard has further resulted in
   decreased portability across machines. The extensions often limit programmer 
   productivity and hinder analysis. They fail to separate problem specification
   and implementation, hinder modular design, and inherently hide data 
   dependencies.  The expression of most parallel computations in these 
   languages is verbose and unnatural.  In addition to expressing the algorithm,
   the programmer must encode the program's synchronization and communication 
   operations, ensure data integrity, and safeguard against race conditions.  
   The extra programming complexity and the time-dependent errors exposed by 
   this alternative can frustrate even the most experienced programmers.

   The third alternative, and the one we endorse, is the development
   of new parallel programming languages. Functional languages such as 
   Sisal expose implicit parallelism through data independence, and guarantee 
   determinate results via their side-effect free semantics. A functional 
   program comprises a set of mathematical expressions or mappings (the value 
   of any expression only  depends on the values of its inputs, and not on 
   the order of their definition). Hence, functional languages are referentially
   transparent and the programmer cannot introduce race conditions. The Sisal
   programmer simply expresses the operations constituting the algorithm.
   The determination of data dependencies, scheduling of operations, the 
   communication of data values, and the synchronization of concurrent 
   operations are the responsibility of the compiler and run time system.  
   The programmer does not and cannot manage these operations.  Sisal programs 
   that run correctly on a single processor are guaranteed to 
   run correctly on any multiprocessor, regardless of architecture.  
   Relieved of the most onerous chores of explicitly parallel programming, 
   the software developer is free to concentrate on algorithm design and 
   application development.

 Current SISAL Status
   Today, most Sisal programs outperform equivalent FORTRAN programs
   compiled using automatic vectorizing and parallelizing software.
   Current targets include machines from Cray, Encore, Alliant, Sequent, Sun,
   DEC, and Thinking Machines. We have implemented a mixed-language interface 
   that allows Sisal programs to call C and FORTRAN and allows C and FORTRAN
   programs to call Sisal.  Currently the interface is being used to reprogram 
   the computational kernel of a production code at Lawrence Livermore National 
   Laboratory. 

Future Directions for SISAL
   We are moving Sisal to the BBN TC2000 multiprocessor at Livermore
   as part of the Massively Parallel Computing Initiative. We hope to 
   leverage the experience we gain from this exercise and apply it to the 
   development of a true distributed memory implementation of Sisal.
   We are also revising the current definition of Sisal. Three important new 
   features are higher-order functions to promote algorithmic abstraction,  
   array operations to simplify algorithm specification, and modules
   to promote sound software engineering principles.


RESEARCH SUMMARY: The Sisal Scientific Computing Initiative

  The Computer Research Group at Lawrence Livermore National Laboratory (LLNL)
  announces the Sisal Scientific Computing Initiative (SSCI).  The Initiative
  will award free Cray 2 time and support to researchers willing to develop
  their applications in Sisal, a functional language for parallel numerical
  computation.  Members of the Computer Research Group will provide free
  educational material, training, consulting, and user services.

  SSCI is an outgrowth of the Sisal Language Project, a collaborative
  effort by Lawrence Livermore National Laboratory and Colorado State
  University and funded in part by the Office of Energy Research (Department
  of Energy), U.S. Army Research Office, and LLNL. Sisal provides a clean
  and natural medium for expressing machine independent, determinate, parallel 
  programs. The cost of writing, debugging, and maintaining parallel 
  applications in Sisal is equivalent to the cost of writing, debugging, and 
  maintaining sequential applications in FORTRAN.  Moreover, the same Sisal 
  program will run, without change, on any parallel machine supporting Sisal 
  software. Recent Sisal compiler developments for the Alliant FX/80, Cray X-MP,
  and other shared memory machines have resulted in Sisal applications that run 
  faster than FORTRAN equivalents compiled using automatic concurrentizing and 
  vectorizing tools.