
Conference turris::languages

Title:Languages
Notice:Speaking In Tongues
Moderator:TLE::TOKLAS::FELDMAN
Created:Sat Jan 25 1986
Last Modified:Wed May 21 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:394
Total number of notes:2683

189.0. "Expression vs. statement languages" by MJG::GRIER (In search of a real name...) Sun Jun 19 1988 16:04

    
       What are general feelings for and against "expression languages"? 
    I'm pondering a personal experiment in language design, and I'm trying
    to decide if I'd like to explore the possibilities of an
    expression-based language (a la C or BLISS -- I like BLISS's model a
    little better...) or a statement-oriented language like Pascal. 
    If it makes a difference, I'm most interested in applying this in the
    general genre of object-oriented languages.
    
    
    					-mjg
    
189.1. "Generality can breed simplicity, or inefficiency" by TLE::AMARTIN (Alan H. Martin) Mon Jun 20 1988 00:20
Lisp, APL, Algol-68, BCPL, and GNU C are also expression languages, to varying
degrees.  Points to consider:

Do you want to encourage the writer to assign or bind values frequently?  Or
does the problem domain encourage hierarchical application of operations upon
each other?

Are there primitive constructs of the language which can be designed to yield or
operate upon values profitably, where a statement language would express the
concept differently?  (For example, being able to construct a subset of a set,
and then having a construct to iterate over the subset value's elements, as
opposed to a few standard loop paradigms, each with separate parameters).

Are the data types large and best handled by indirection in the implementation,
or small and cheap to copy?

Do you want the language processor to make the program perform well by analyzing
its control and data flow from first principles, or do you want the user to help
the processor deduce value lifetimes by language syntax?

I think expression languages have an extra air of generality to their design.
Sometimes this means that implementation aspects take more energy, but sometimes
it should ease the process, when the big picture gets simpler.  And it ought to
help redirect the programmer's mind from the language to the problem at hand.
				/AHM
189.2. "Rule of Thumb :-)" by TLE::JONAN (Be a Realist - Demand the Impossible) Wed Jun 22 1988 11:37
    Simple rule #1: If the language is applicative (or *really* attempts
    to be) then it is (and should be) an expression language.  If not,
    it shouldn't be!
    
    Examples of the former: LISP & APL

    Examples of the latter, i.e., mistakes: BLISS & Algol-68.
    
    Just one person's opinion...
    
    /Jon
189.3. "What's "wrong" with BLISS?" by MJG::GRIER (In search of a real name...) Wed Jun 22 1988 12:07
    Re: .2:
    
       Could you please clarify?  While I wouldn't consider BLISS a
    "high-level language" (it's more along the lines of C, but a little
    better :-), I was under the impression that it did quite a nice
    job of being an "expression" language.
    
       I'm not familiar with APL, so if you could give an example of what
    you mean, I'd appreciate it.
    
       Does anyone else have some feelings?  I thought that this would have
    sparked a debate far greater than any of the other "language wars".
    
    					-mjg
    
189.4. "Algol 68" by COMICS::DEMORGAN (Richard De Morgan, UK CSC/CS) Wed Jun 22 1988 13:00
    I've written some 250,000 lines of Algol 68 and a similar system
    language (S3) in my time. You don't have to use it as an expression
    language, of course, but I found it often gave a better understanding
    of things in certain situations, e.g.
    
    	x := a[i +:= 1]
    
    here, i is incremented and then used as the subscript for a.
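
    A rough C analogue, for comparison (a sketch; C's += likewise
    yields the value it assigns):
    
        #include <stdio.h>
        
        int main(void)
        {
            int a[] = {10, 20, 30}, i = 0, x;
            /* like x := a[i +:= 1]: increment i, then subscript */
            x = a[i += 1];
            printf("i = %d, x = %d\n", i, x);   /* i = 1, x = 20 */
            return 0;
        }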
    
    In his Algol 68C implementation, Steve Bourne introduced a class
    of operators such as
    
        :=:= (pronounced "before becomes"),
    
        a :=:= b := c
    
    delivers the value of c to b, but the a priori value of b to a.
    Thus
    
        a :=:= b := a
    
    interchanges the values of a and b (this is implemented very neatly
    on the DECsystem-10, which has an exchange instruction).
    
    Algol 68 has a lot more features, such as user-defined operators
    and a very comprehensive i/o (called transput) system. It's a pity
    it has very crude parallel facilities. Another feature I don't like
    is structure selection: it uses
    
        b OF a
    
    where most other languages use
    
        a.b
    
    I personally think this is the wrong way round.
189.5. "Race Turtles Race..." by ATLAST::WELTON (I broke up Springstein's Marriage) Thu Jun 23 1988 10:30
    What's the REAL difference between Pascal and C, except a little
    "syntactic sugar"?  From my viewpoint, they are both imperative (yes,
    another term) languages.  The programmer spells out the steps to
    accomplish a certain algorithm.  Not every step produces a RESULT,
    as in applicative languages (are applicative and expression
    equivalent?) like LISP.  The reason LISP works so well as a base
    for the Object paradigm is the fact that every "statement" performs
    a function on some data object.
    my too sense....
    
    douglas
    
189.6. "expression /= functional" by MJG::GRIER (In search of a real name...) Thu Jun 23 1988 13:59
    
       Re: .5:
    
       I don't think applicative and expression are quite equivalent. 
    Within a single "statement", I would possibly equate applicativity and
    expressionivity, but when looking at the "larger scale", all applicative
    languages are expression languages, BUT not all expression languages
    are applicative.  Especially C.
    
       But what about things which can't be accomplished with a functional
    language?  (I believe that's what you mean by "applicative", but that's
    the terminology I know.)  Some of computing is iterative
    and sequential in nature.  But that deserves its own topic at some
    point.
    
       What are the pros/cons of expression vs. statement languages in
    terms of generated-code efficiency?  Is it possible to produce better
    code from things like:
    
    	a[i] := 5;
        j := i;
        i := i + 1;
    
       or from
    
       a[j=i++]=5;
    
       For a stupid code generator, the second will quickly produce better
    code, but with a smart code generation system (VCG comments anyone?)
    I'm curious about which is potentially more optimizable.
    
    
    					-mjg
    
189.7. "I have a LISP when I speak" by ATLAST::WELTON (I broke up Springstein's Marriage) Thu Jun 23 1988 17:41
>    < Note 189.6 by MJG::GRIER "In search of a real name..." >
                         -< expression /= functional >-

>       But what about things which can't be accomplished with a functional
>    language?  (I believe that's what you mean by "applicative", but that's
>    the terminology I know.)  Some of computing is iterative
>    and sequential in nature.  But that deserves its own topic at some
>    point.
    
 
    What can't be done in an applicative language?
                                     
    Sequence is available by default.  Conditional functions are available
    to make N-ary branching decisions.  And recursion (theoretically
    more powerful than iteration) is there when a function needs to
    be repeated zero or more times.
    
    What else do you need?  ( I know ... I'm asking for it )
    
    douglas
    
    P.S.  ( = Applicative Functional ) => t
    
189.8. "I'll start a separate topic" by MJG::GRIER (In search of a real name...) Thu Jun 23 1988 20:25
    
    Re: .7:
    
       One of the design concerns of a language has to be usability.  In
    informal discussions with various people about this general topic,
    the concern was raised that languages which fall into the applicative
    category (LISP was mentioned in particular) can be a pain, especially
    for someone used to "traditional" languages like Pascal/C/FORTRAN.
    
       My knowledge of LISP is rather limited, and I just found out today
    that there's a "prog" operator which evaluates each element in the
    remaining list in order (and returns the value of the last?).
    
       I think I will begin another topic which will discuss the relative
    merits of procedural vs. functional languages so that we don't stray
    from the "expression" vs. "statement" topic.
    
    					-mjg
    
189.9. ":=:=; C is not an expression language; Truths that hurt" by DENTON::AMARTIN (Alan H. Martin) Fri Jun 24 1988 11:47
Re .4:

I'm suspicious of :=:= - an operator which affects the semantics of its operands
seems like an extremely bad idea.  How does it compare in precedence to other
operators?  Are there restrictions on the kind of expressions it can accept as
its right hand operand (only lvalues, for instance?)

b OF a was a definite mistake.  If all the primary unary operators were postfix,
then you wouldn't need to use parens to disambiguate (b OF a)[i] from b OF a[i].
C got this wrong too, with prefix pointer dereferencing (*).

Re .6:

I don't consider C to be an expression language.  You can't embed declarations
or loops within expressions.  And even Algol-60 lets you use the same
IF/THEN/ELSE syntax as a statement or an expression, while C uses different
syntax - "if(a) b else c" vs "a ? b : c".  A real expression language would
not encourage such a distinction.

The mere fact that assignment is a valid expression in C is not much more
impressive than (some?) BASICs':

10 LET A = B = C = 0

Whoop-de-do.
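
(GNU C, mentioned back in .1, goes further with its statement-expression
extension, which really does let a loop appear where a value is expected.
A minimal sketch, assuming GCC:)

	#include <stdio.h>

	int main(void)
	{
	    int a[] = {1, 2, 3, 4};
	    /* A ({ ... }) block is an expression whose value is that
	       of its last statement -- here, a whole loop yields s. */
	    int sum = ({
	        int s = 0, i;
	        for (i = 0; i < 4; i++)
	            s += a[i];
	        s;
	    });
	    printf("%d\n", sum);        /* prints 10 */
	    return 0;
	}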

Re .8:

Are languages that encourage the use of structured control flow constructs a
pain for people that are trained to rely on GOTOs in BASIC or FORTRAN?  Is this
a problem with structured languages?
				/AHM/THX
189.10. "talkin' about a revolution..." by MJG::GRIER (In search of a real name...) Sun Jun 26 1988 13:35
    re: .9:
    
       I agree with your point about C, BUT, in comparison to the other
    "popular" languages of today (i.e., Pascal, Ada, BASIC), C is considered
    an expression language.  Regardless of whether it technically is or
    not, it is regarded as such.  I think it was a mistake not to make it
    completely expression-oriented, but then again I have a lot of other
    opinions about C -- that it was a mistake, period.  I'll try not to
    flame about it tho'.
    
       For people who learned to program in BASIC and FORTRAN, and never
    got out of it, yes, structured concepts are a pain.  It's because to a
    hacker (I use that to mean a programmer who is not an academic in this
    context) the BASIC program worked fine, and the structured stuff is just
    B.S.  Even though I did learn to program some 13 years ago in BASIC and
    FORTRAN, I don't think it's hindered me because I am an academic --
    perhaps the worst kind, a mathematician.  The structured-ness and
    abstractionisms appeal to me because of this.
    
       What I'm trying to do here is come up with revolutionary
    ideas about language design and concepts, while not making the same
    mistakes which have been made in the past (BLISS -- worries too much
    about the implementation, and the "." dereferencing operator, C --
    generally too cryptic and promotes bad programming, Pascal -- not
    generally "flexible" enough, Ada -- not-quite-object-oriented and loses
    one of Pascal's best concepts, the compound statement, etc.)
    
    					-mjg
    
189.11. by TLE::JONAN (Be a Realist - Demand the Impossible) Sun Jun 26 1988 13:42
>    What can't be done in an applicative language?
    
    Nothing.

>    And recursion (theoretically more powerful than iteration)
    
    Recursion and iteration have *equivalent* expressive power (each
    can be "derived" from the other).

>    the concern was raised that languages which fall into the applicative
>    category (LISP was mentioned in particular,) can be a pain, especially
    
    Oh no!  Not this myth again!
    
    
>    I'm suspicious of :=:= - an operator which affects the semantics of its
>    operands seems like an extremely bad idea.
    
    I agree - sounds like a vote for applicative languages to me!  After
    all, your typical "function" in an imperative language not only
    affects the semantics of its operands but of various "globals" as
    well...  ~/~ :-)
    
    
    /Jon
189.12. "yes..." by MJG::GRIER (In search of a real name...) Sun Jun 26 1988 20:20
    re: .11:
    
       Regardless of whether or not this is a "myth", the perception IS
    there.  People do resist using languages which may truthfully be better
    and more intuitive to a beginner strictly because of how they were
    "brought up" programming.  Someone who's used BASIC or some assembly or
    other for years just doesn't see the point even in the small jumps to
    "structured languages" like Pascal, C or Ada.  (Case in point : my
    brother, who's an engineer at Sanders Associates.  He's a MACRO freak
    and used to have a hard time being convinced of the relative merits of
    Pascal and such.  He might have changed his opinions within the last
    year or so, but still, I don't think he's a minority of one!)
    
       Re: functions actually having more "global" effects:
    
       This is something I've pondered quite a bit lately.  Basically the
    summation of my ideas is that there should be more differentiation
    between types of subprograms.  Perhaps :
    
       procedure -- Pascal's definition of a procedure
    
       valued procedure -- corresponds to a VAX Ada concept for foreign
    	  subprograms
    
       procedural function -- Ada's concept of a function.  Can have
    side-effects through accessing objects in a larger scope, or by passing
    a pointer/access-type to it.
    
       "real" function (or just function) -- function without side-effects.
    Optimizations like f(x)+f(x) --> 2*f(x) may be made.
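
       (A sketch of how a compiler might be told about such a "real"
    function -- here borrowing GCC's `const' function attribute, an
    extension, so take it as illustration rather than a proposal:)

        /* Result depends only on the argument, with no side-effects:
           a compiler may fold f(x) + f(x) into 2 * f(x), or compute
           f(a) and f(b) in parallel. */
        __attribute__((const)) int f(int x) { return x * x; }

        int g(int x)
        {
            return f(x) + f(x);     /* foldable to 2 * f(x) */
        }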
    
       Has this been explored in other languages before?  Another
    assumption made here which could help future compilers is that a
    function (the "real" sort) can be parallelized or pre-computed.  I.e.
    if you had...
    
       X := F(A) + F(B);
    
       If you had cheap enough processes, you COULD simultaneously compute
    F(A) and F(B).
    
       For the mathematically oriented, given this, you could put
    qualifications on the functions to allow more optimizations.  Such as
    "homomorphic" to indicate that the operation can be brought inside the
    function (loosely speaking.)  So if you had a function which mapped a
    real number onto its complex counterpart, and you had the expression...
    
       C := Complex(5) + Complex(7);
    
       It could be optimized to "C := Complex(12);" because the Complex
    function is homomorphic.
    
       I wonder if I should talk about this, it could make a good thesis at
    some point...  :-)
    
    
    					-mjg
    
189.13. "When I said "affected operand semantics", I meant it" by TLE::AMARTIN (Alan H. Martin) Mon Jun 27 1988 00:13
Re .11:

When I said that A :=:= B := C seems to affect the semantics of its operand(s),
I meant that it reached down into the subexpression B := C, and grabbed hold
of the value of B before allowing the assignment B := C to be evaluated.
If this is the implementation, I find it to be quite unusual.  The closest
thing I can think of off the top of my head is C's sizeof operator.  The
effect it has upon its operand is that it is not evaluated at all.  I'm not
sure you're talking about the same thing.  Can you give an example of a "typical
``function'' in an imperative language which affects the semantics of its
operands"?

Re .12:

I think you should look through more of the literature on programming languages.
SIGPLAN Notices is particularly accessible, partially because it isn't refereed.
A good survey textbook is Ledgard's.


BTW, in 1979, Ada distinguished between valued procedures, which could have
side-effects, and functions, which couldn't (and would therefore compute the
same results with the same inputs).  It was discarded from the language before
it was standardized.  One reason was that such functions couldn't be debugged
by having them print intermediate values, because I/O operations are
side-effects.
				/AHM/THX
189.14. by TLE::JONAN (Be a Realist - Demand the Impossible) Tue Jun 28 1988 20:29
    Re: Functions affording various optimizations and ||-ism benefits
    
    Yes, this has been explored.  One of the reasons why applicative
    languages have a distinct advantage in these areas.
    
    However

>    For the mathematically oriented, given this, you could put
>    qualifications on the functions to allow more optimizations.  Such as
>    "homomorphic" to indicate that the operation can be brought inside the
    
    this is a very interesting idea which I haven't really seen addressed
    before (which in NO WAY implies it hasn't! :-)).  There are a number
    of interesting things that come to mind about how to best address
    and use such an aspect - not just in terms of optimizations either.
    As for a thesis, you may have already let the cat out of the bag! ;^)
    

    Re: .13
    
> When I said that A :=:= B := C seems to affect the semantics ...
    
    Yes, I see what you're saying, but to me such a distinction is more
    one of quantity and not quality.  Also, this seems to be rather
    like A = C++ (assuming this is legal C), where A would get the
    value of C and then C would be post-incremented (or does = bind
    stronger than ++?).  In fact, isn't this all a precedence issue?
    :=:= binds stronger than :=.  Bizarre, but then the whole thing seems
    crazy to me...)
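
    (A quick check of the C case, as a sketch -- postfix ++ binds
    tighter than =, so A = C++ assigns the old value of C:)

        #include <stdio.h>

        int main(void)
        {
            int c = 5, a;
            a = c++;                       /* a = (c++): a gets old c */
            printf("a=%d c=%d\n", a, c);   /* a=5 c=6 */
            return 0;
        }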
    
    /Jon
189.15. "Clint Said: Go ahead, change my semantic..." by ATLAST::WELTON (Nancy Reagan told me to say it) Wed Jun 29 1988 16:06
    
    Re: .-1
    
    >	In fact, isn't this all a precedence issue? :=:= binds stronger 
    	than :=. 
    
    I vote yes for precedence.  If it were a semantic issue, wouldn't
    it affect the next use of ":="?
    
    Re: .11
    
    I don't think changing the semantics of a function is all that bad,
    as long as it's done in a static manner (e.g., Ada Packages)
    
    Re: .12
    
    This "homomorphism" -- is that what the Ada package and the Smalltalk
    class are best at?
    
    douglas
    
189.16. "Homomorphisms -- slight digression" by MJG::GRIER (In search of a real name...) Wed Jun 29 1988 17:08
    
    Re: homomorphism:
    
       I don't know of any language which implements the concepts of
    homomorphic functions... yet.  (Like I said, I may need some thesis
    material in a while.)
    
       To digress from strict language theory for a moment, homomorphic is
    an adjective used generally by mathematicians (yup, that's me!) in
    regards to functions.  A function is said to be homomorphic between one
    group and another if the function allows the operator to be taken
    inside/outside of the function (very very very loosely speaking.)
    
       For instance, say you had this Pascal definition...
    
       type
          complex_number =
             record
                real_part, imaginary_part : real
             end;
    
        and then the function...
    
       function make_complex (x : real) : complex_number;
          var
             temp : complex_number;
          begin
             temp.real_part := x;
             temp.imaginary_part := 0.0;
             make_complex := temp
          end;
    
       this function make_complex is homomorphic from (real,+) to
    (complex_number, +), because you can always say for any x and y being
    of type real:
    
          make_complex(x real.+ y) = make_complex(x) complex.+ make_complex(y)
    
    (I stuck tags on the "+"s to make sure that you realize the operators
    can be different.  You can establish homomorphisms between sets and
    various operations.  For instance, if it's of interest, there's a
    homomorphism from the real numbers with addition to the points on the
    circle which are complex numbers whose absolute value is one, under
    multiplication.  So, if you wanted to multiply two points but didn't
    want to go through the pain and anguish for some reason, you could pull
    them back through the function (it's a "real" function and thus doesn't
    have side effects - but doesn't necessarily have an inverse...) and add
    the numbers, and then shove them back through the function to get the
    new point on the circle.  Think of polar co-ordinates, with the radius
    being set to one, and the real numbers representing the angle -- theta)
    
       I don't know how much and when this might be useful, but seeing as
    the greatest needs for performance are in areas of pretty much raw number
    crunching, if you knew that instead of doing 4 floating multiplies and
    then 2 floating adds, you could just do a floating add, well, you just
    saved what I imagine to be an extremely expensive operation or two!
    
    					-mjg
    
    
    
189.17. "Am I homomorphic yet?" by ATLAST::WELTON (Nancy Reagan told me to say it) Wed Jun 29 1988 19:07
         Is the following quasi-LISP Add function Homomorphic?

         (Defun Add ( X Y )
         	( Cond
         		( ( Integerp X ) ( Add_Integer X Y ) )
         		( ( Floatp X )   ( Add_Float X Y ) )
         		( ( Complexp X ) ( Add_Complex X Y ) )
         		( ( Imaginep X ) ( Add_Imagine X Y ) )
         	)
         )

         douglas
    
         p.s.: I know I'm leaving out lots of details about the Add_*
         functions, but I just want to know if I understand the concept.
         
189.18. "Handwaving discussion..." by TLE::JONAN (Be a Realist - Demand the Impossible) Wed Jun 29 1988 20:13
    Re: .17

>         Is the following quasi-LISP Add function Homomorphic?
    
    Who knows!  Two points:
    
    1. Homomorphisms (homeomorphisms, isomorphisms, etc) are functions
       mapping some "space" (say some n-dimensional vector space or some group
       foo) to some other "space" (say some other m-dimensional vector
       space or some group bar (or subspace thereof...)) in such a way
       that characteristic operations defining the properties of the spaces
       are preserved (effectively mapped by the function also).

       Say we have two spaces M and N (forget about what kind at the
       moment).  Say that M is closed under "+" and N is closed under
       "*" and that these define the salient characteristics of M and
       N.  If we conjure a function F:M -> N such that F(m1+m2) [a thing
       that lives in N] = F(m1) * F(m2) [another thing in N] then
       (*** WARNING *** hand waving ahead!) we may say that F is a
       homomorphism from M to N (if it happened to also be continuous we
       would say it was a homeomorphism [more handwaving but sort of...]).

    2. A function which is defined in terms of other functions may
       be a homomorphism only if its defining functions are.

    So, 1) what are the spaces that Add maps and 2) how do the component
    functions behave on these spaces??  I would suggest you pick up
    any decent fairly rigorous linear algebra text.  It will cover the
    high points (group theory texts may be toooo abstract for the
    "beginner").


    Re: .16
>    an adjective used generally by mathematicians (yup, that's me!) in

    Not another one of us gadflys in the notes ether!  Oh nooooooo.
    :-)


    /Jon
189.19. "Think of type-casting rather than the operator" by MJG::GRIER (In search of a real name...) Wed Jun 29 1988 20:29
       Nope.  I'll explain it a little more formally, using CS jargon
    instead of mathematical jargon.  I'll try to be somewhat object-oriented in
    approach, also.
    
       Say you have a data type "A", with a binary operation "#", which
    takes two operands of type A and returns a value of type A.  Assume
    that (A, #) follows your "normal" algebraic rule of associativity...
    
    	(x#y)#z = x#(y#z)
    
       and is generally nice, because there's a value, e, in A for which
    the following happens:
    
        x#e = x, for any x in A
    
       (If # was "*", we'd be talking about 1, if it was "+", we'd be
    talking about 0.)
    
       Additionally, A has inverses, so that for any x value in A, you
    can find a y value where
    
        x#y = e
    
       (for "+", the usual inverse is "-x", for "*", if you leave out zero
    as a possiblility, 1/x is good.  If you leave zero in, it's not good
    (mathematically speaking, it's only a semi-group.))
    
       Anyways, these "trivialities" are needed because behind the user's
    back we're going to play some algebraic games.  (If you were a
    mathematician at this point, you'd call (A,#) a "group".  Because of
    the problem with 0 under multiplication, (R, *) is a "semi-group",
    where R is the reals.  Taking out the zero leaves a group.  If you
    could also ALWAYS say that x#y=y#x, you'd have a "commutative group" or
    "Abelian group".  Most of the stuff we work with day-to-day is
    commutative, but if you're into matrix operations, you'll know well
    that a matrix A times a matrix B is not necessarily equal to B*A.)
    
       Say you've got this other data type "B" which has the same rules,
    but with the operator "~".
    
       So, a homomorphic function, f, from (A, #) to (B, ~) takes a SINGLE
    value of A and puts it into B, and has the condition that for every
    possible x and y values in A, you get...
    
       f(x#y) = f(x) ~ f(y)
    
       What you had there took two As and gave you a single B.  Not a
    homomorphism.  Where you could use it, and I imagine compilers always
    do without telling everyone is if you do something like:
    
       var
          x : real;
          a,b : integer;
       begin
          readln (a,b);
          x := a+b;
    
       Now, any self-respecting compiler would do that addition of a and b
    using integers, and then convert it to a floating point number.  BUT,
    it could convert a and b to reals first and then add them if it wanted.
    Why?  Because there is a homomorphism from the integers with addition
    to the reals with addition, where f(x) = x.  This is a counter example,
    because the compiler would "naturally" perform the simpler operation. 
    However, for functions involving a good deal of complex manipulation, it
    would be easier to carry out the addition in terms of reals rather than
    the complexes.
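
       (A concrete check of that freedom -- a sketch; the claim holds
    as long as a+b doesn't overflow the integer type:)

        #include <stdio.h>

        int main(void)
        {
            int a = 3, b = 4;
            double x1 = (double)(a + b);        /* add, then convert */
            double x2 = (double)a + (double)b;  /* convert, then add */
            printf("%g %g\n", x1, x2);          /* 7 7 -- same value */
            return 0;
        }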
    
       I can't come up with non-mathematical examples for CS off the top of
    my head, but I can envision that you might be able to use this for some
    sort of object-oriented approach, where a cost analysis is done by the
    compiler which says that it'd be quicker to convert datatype A to B,
    perform the operation in B, and then take it back to A.  Especially
    when things like truly distributed processing start taking place where
    you might have to export out an object to a processor somewhere in the
    network which knows all about how to manipulate that kind of object. 
    It might be cheaper to use the existance of an "isomorphism" between
    two object-classes to convert, do the operation either locally or
    through a processor which is deemed "less expensive" and then convert
    it back.
    
       Something to keep in mind is where you're going from and to.  You
    wouldn't want to convert reals to integers, do the arith. there and
    pull it back unless you didn't care a whole lot about accuracy.
    
    To complete the definitions, an isomorphism is a homomorphism where
    there is a two-way 1-1 correspondence.  So, like my example of doing
    the arithmetic in the reals before bothering with the complexes is NOT
    an isomorphism because not every complex number has a unique real
    number to which it is "equivalent".  But if you've ever found out much
    about the exponential function (e to the x) when dealing with
    complexes, you might know that it forms an isomorphism from a 2*pi wide
    horizontal strip running across the plane to the entire complex plane
    (except for (0,0) -- pesky zeroes!!!).  The uses are endless
    (remember that exp(x+y) = exp(x)*exp(y)?  Look familiar to stuff I was
    talking about above?)
    
       So, wrapping all that back up into a few words, a homomorphism is a
    function from a certain data type A to some sub-set of another, B,
    which makes them more-or-less algebraically equivalent.  If the
    homomorphism is an isomorphism, it means that they are exactly
    equivalent, and performing the first operation on two elements of the
    first datatype is exactly equivalent to performing the second operation
    on the corresponding two elements of the other data type.
    
       Woah, time to go.
    
    					-mjg
    
189.20. "How about concatenating strings?" by TLE::FINLAYSON Thu Jun 30 1988 10:01
    Would this be a more CS-like example?

    Say you had a function LENGTH that returned the length of a string, and
    you had a string concatenation operator &.  Then LENGTH would be
    homomorphic if you could say

    LENGTH(a & b) = LENGTH(a) + LENGTH(b)

    Generally, that would be a nice optimization since it avoids a
    relatively expensive string concatenation.
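
    In C terms (a sketch; strlen plays the role of LENGTH and strcat
    the role of &):

        #include <string.h>

        /* Exploits LENGTH(a & b) = LENGTH(a) + LENGTH(b): no
           temporary buffer and no actual concatenation needed. */
        size_t concat_length(const char *a, const char *b)
        {
            return strlen(a) + strlen(b);
        }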

    Mark Finlayson

    
189.21. by TLE::JONAN (Be a Realist - Demand the Impossible) Thu Jun 30 1988 20:14
    

    The problem of your function being of two variables is trivial to fix -
    just put the two together in a single list.  One always goes around making
    homomorphic (homeomorphic) maps from R^n to R^m using this sort of device.
    Things in R^n are n-tuples; those in R^m, m-tuples, and are typically
    called vectors (in the vector space sense).  In fact, it is even
    possible to work on infinite dimensional spaces in this way.  The
    "real" problem is that you need to define your spaces - at least up to
    the structure you're interested in.  Generally this means defining the set
    of objects and the salient operations on them that you're interested in.

    In the case of Add a typical scenario might be:

    v is in R^2 if v = (v1, v2), where v1 & v2 are in R (the reals)
    Define the structure (R^2, +') where for any v and w in R^2,
        v +' w = (v1+w1, v2+w2), where + is "regular" addition in R.
    So, +': R^2 X R^2 -> R^2 and +: R X R -> R.

    Aside: As it happens, (R^2, +') is an abelian group

    Define Add: R^2 -> R by Add(v) = v1 + v2 (+ being as before)
    Then Add is a homomorphism.
    Proof: Let v and w be in R^2.  Then
           Add(v +' w) = Add( (v1+w1, v2+w2) ) = (v1+w1) + (v2+w2) =
               (v1+v2) + (w1+w2) = Add(v) + Add(w).

    Note the use made of the commutative and associative properties of + in
    R!!!  This is a form of inheritance.

    Aside: If we were to toss in *scalar* multiplication of vectors in R^2
    by elements from (its base field) R [r*'v = (r*v1, r*v2)], then we would
    find that Add(r*'v +' w) = r*Add(v) + Add(w).  Such a homomorphism is
    given a special name: linear transformation.
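
    (A numeric spot-check of the Add homomorphism, as a sketch in C --
    the compound literal assumes a C99-style compiler:)

        #include <stdio.h>

        typedef struct { double x, y; } vec2;

        static vec2   vadd(vec2 v, vec2 w) { return (vec2){ v.x + w.x, v.y + w.y }; }
        static double add(vec2 v)          { return v.x + v.y; }

        int main(void)
        {
            vec2 v = {1, 2}, w = {3, 4};
            /* Add(v +' w) = Add(v) + Add(w): both print 10 */
            printf("%g %g\n", add(vadd(v, w)), add(v) + add(w));
            return 0;
        }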


>    (remember that exp(x+y) = exp(x)*exp(y)?  Look familiar to stuff I was

    The more familiar traditional example would involve the inverse operations
    of this sort: logarithms.
    log(a*b) = log(a) + log(b), log(a^b) = b * log(a), etc.
    When you had to crank things through by *hand*, the real advantage of this
    was clear!  (This is still very useful in various symbolic calculations -
    e.g. "logarithmic differentiation").


>    -< How about concatenating strings? >-
>    Would this be a more CS-like example?

    Yes.  Here S is the set of one dimensional strings of characters from
    some base alphabet (could be anything) and

    &: S X S -> S by juxtaposition (note & is not commutative, is associative,
                  has a right and left identity (the nullstring), but
                  generally there are no inverses [=> (S, &) is NOT a group])
    LENGTH: S -> R by LENGTH(s) = count of characters in s.


=====

    In the (complete) abstract the essential nature of a homomorphism is
    captured by the following definition:

    Let A & B be some sets (anything you want) and R some relation on A (not
    even necessarily a function) and S some relation on B (again, no
    restriction).  If f is a function f: A -> B such that for all a1,...,an
    in A, (a1,...,an) IN R => (f(a1),...,f(an)) IN S, then f is a *homomorphism*.
    If f is 1-1 and onto, then f is an *isomorphism* and (A,R) and (B,S) are
    said to be *isomorphic* structures.
    
    The above discussions are just special cases of this.
    

    /Jon
189.22. "More generality -- I like it!" by MJG::GRIER (In search of a real name...) Sun Jul 03 1988 17:22
    RE: .21:
    
       I suppose this should be in CLT::MATH, but thanks -- I've only seen
    homomorphisms defined on groups/rings/integral domains/fields, and
    surmised that a linear operator is a homomorphism, but I never made the
    jump to a set with an arbitrary relation.  Interesting.  (Goes to show
    how much I have yet to learn still!!)
    
    					-mjg
    
189.23. "Correction to .4" by COMICS::DEMORGAN (Richard De Morgan, UK CSC) Wed Jul 06 1988 13:23
    I'm afraid I blundered in .4 and wrote the :=:= operator in the
    wrong places. The example should have been
    
    	a := b :=:= c
    
    and
    
    	a := b :=:= a
    
    Now this does not have the nasty effects AHM pointed out in .9
    
    The operator can be formally defined as, e.g.
    
    	OP :=:= = (REF INT a, INT b) INT: (
    	    INT temp;
    	    temp := a;
    	    a := b;
    	    temp)
    
    Bourne also defined +:=:= etc.
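
    (The same idea in C, as a sketch -- exch is a hypothetical helper
    name, with a pointer standing in for the REF INT:)

        /* Assigns b to *a and yields a's prior value, so
           x = exch(&a, b) acts like x := a :=:= b. */
        int exch(int *a, int b)
        {
            int temp = *a;
            *a = b;
            return temp;
        }

        /* a = exch(&b, a) thus swaps a and b, like a := b :=:= a. */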
189.24. "Yes, that is much more reasonable" by TLE::AMARTIN (Alan H. Martin) Sun Jul 10 1988 19:40
Re .23:

Ah, thank you very much for the clarification, Richard.  There were two n-ary
functions like this in SIGPLAN Notices a while ago, named SHIFT and ROT (I
think).

SHIFT(a,b,c,...,z) would translate to:

	z :=:= ... :=:= c :=:= b :=:= a;

		-and-

ROT(a,b,c,...,z) would translate to:

	a :=:= z :=:= ... :=:= c :=:= b :=:= a;

It was mainly motivated by the observation that many operations on linked lists
could be expressed as double or triple pointer rotations and shifts.
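
(A sketch of the linked-list use in C, with a hypothetical node type:
head insertion is the double pointer shift n->next :=:= list :=:= n.)

	struct node { struct node *next; int value; };

	/* list gets n, and n->next gets the old head --
	   two pointer assignments, no stray references left over. */
	void push(struct node **list, struct node *n)
	{
	    struct node *old_head = *list;
	    *list = n;
	    n->next = old_head;
	}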
				/AHM/THX
189.25. "Multiple assignments in CPL" by MOIRA::FAIMAN (A goblet, a goblet, yea, even a hoop) Mon Jul 11 1988 11:27
    CPL allowed "simultaneous" assignments, with a syntax something like
    "A, B, C = D, E, F".  All RHS expressions were evaluated before any
    LHS variables were assigned to.  Thus, SHIFT(A,B,C) could be coded
    as "C, B = B, A" (or as "B, C = A, B") while ROT(A,B,C) would become
    "B, C, A = A, B, C" (or "A, B, C = C, A, B").
    
    	-Neil 
189.26. by CLT::GILBERT Fri Jul 15 1988 17:10
Re .24:
	I remember that article, and have used the SLIDE and ROTATE operators
	in two forms -- assembly language routines and Bliss macros.

	The biggest advantage I found was that the discipline avoided
	creating extra references to objects (which is important with
	singly-linked lists, and trees).  This made code for manipulating
	trees and linked lists much simpler (almost idiot-proof) -- the
	code tended to work the first time.
189.27. "geez, no Lisp hackers here?" by MOSAIC::PRAETORIUS (R4T P8S) Thu Jul 20 1989 11:31
Re last few:

     Common Lisp implements shiftf and rotatef, which take a list of SETFable
places (the simplest case of which is a variable name; many other places
correspond to array or structure references in more conventional languages)
and slide or rotate the values.
189.28. by 16BITS::PUDER (Karl Puder, VAX APL Project Leader) Mon Aug 28 1989 20:50
    VAX APL can do it too, as in:
    
    SHIFT(A,B,C)	(C B) _ B A
    ROT(A,B,C)		(A B C) _ C A B
    
    (where "_" is APL's left arrow character).