
Conference turris::languages

Title:Languages
Notice:Speaking In Tongues
Moderator:TLE::TOKLAS::FELDMAN
Created:Sat Jan 25 1986
Last Modified:Wed May 21 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:394
Total number of notes:2683

256.0. "MACRO vs VAXC" by IND::BOWERS (Count Zero Interrupt) Fri Nov 17 1989 10:01

    A customer commented the other day that he'd heard someone within
    Digital claim that VAXC compiler output was more efficient than "90% of
    our hand-written MACRO code".  
    
    Can anyone comment or (better yet) cite a study.  There are a lot of
    MACRO die-hards at the customer and my contact is trying to promote C
    as a substitute.
    
    -dave
T.R  Title  User  Personal Name  Date  Lines
256.1. "MACRO is not cost effective" by LENO::GRIER (mjg's holistic computing agency) Sun Nov 19 1989 17:06 (34 lines)
    
       I'm no C fan, but I'll stand by the rule of thumb, "if you were
    going to code it in MACRO, code it in C instead".
    
       C is more portable than any assembly language.  (Some would argue
    that that's all it is - a portable assembler replacement.)
    
       The point about optimization is that optimizers are given knowledge
    of the machine that most programmers will not have - special code
    sequences which are faster, etc.  In addition, optimizers
    can do certain types of algorithmic optimizations (factoring of loop
    invariants, etc.) which straightforward MACRO will not gain for you.
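
       A minimal C sketch of the loop-invariant factoring mentioned
    above (illustrative only - the function names are invented, not
    taken from any DEC source):

```c
#include <stddef.h>

/* Source as a programmer might write it: x * y is recomputed on
   every iteration even though it never changes inside the loop. */
int sum_scaled(const int *a, size_t n, int x, int y)
{
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i] * (x * y);
    return total;
}

/* What an optimizing compiler effectively produces: the loop
   invariant x * y is hoisted out and computed once.  A MACRO
   programmer has to remember to do this by hand, every time. */
int sum_scaled_hoisted(const int *a, size_t n, int x, int y)
{
    int scale = x * y;          /* hoisted loop invariant */
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i] * scale;
    return total;
}
```

    Both versions compute the same result; only the work per
    iteration differs.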
    
       The biggest reason to move from MACRO is that it's not cost
    effective by any means, unless you have people who know the machines
    inside and out (and there are some!) and who carefully massage the
    algorithms before implementation.  But then you're paying someone a lot
    of money for something which can (usually) be done much more
    efficiently by an automaton - a compiler.  Granted, there will ALWAYS
    be some optimizations which a person might recognize and implement
    which the compiler will miss, but the cost of having either a guru or
    an ordinary programmer grinding over code sequences rarely justifies
    it when they could be coding in a higher level language.
    
       Now, if you want to talk about the best optimizing compilers on the
    VAX today, I'd forget MACRO and C and look at BLISS first, and then
    Pascal.  Tomorrow, GEM (which was mentioned in another note in this
    conference by Kent Glossop, so I'm assuming it's not too bad to mention
    it by name) will hopefully provide optimization at least equal to BLISS
    across all the DEC languages.  That will let you do the thing you
    really should anyway - pick the best language for a given piece of
    code.
    
    					-mjg
    
256.2. "initial efficiency is very misleading" by COOKIE::R_TAYLOR (Richard Taylor) Sun Nov 19 1989 19:10 (22 lines)
    Saying that a program in MACRO is more efficient than the same program
    in C is like saying that it is more efficient to fell a tree using an
    axe rather than a chain saw.  Sure, the axe is more "efficient" - it
    is cheaper than a chain saw, and it does not require any fuel - but
    who actually uses one?  You can take this analogy further by showing
    that an axe requires more manpower, etc.
    
    The technical argument for using a high level language rather than
    MACRO is that a program in MACRO is less maintainable.  Because it is
    more difficult to understand, anyone adding or changing functionality
    tends to write new code rather than change code that already exists.
    The initial program written in MACRO may be more efficient than the
    same program written in C; however, after it has been changed a couple
    of times to correct bugs and add new features, the MACRO program is
    much more difficult to understand and probably less efficient than a C
    program to which the same changes have been made.
    
    I do not know of any studies made in this area, but there is anecdotal
    evidence that this is true.  In Multics, a couple of modules were
    originally written in assembler because it was thought that they needed
    to be efficient.  Later they were recoded in PL/1.  The PL/1 modules
    were smaller and faster than their assembler counterparts.
256.3. by SMOP::GLOSSOP (Kent Glossop) Sun Nov 19 1989 20:36 (86 lines)
There are a variety of reasons for coding in a HLL in preference to Macro
from a performance perspective:

    1. Standard optimization.  All of the optimizations that good compilers
       do can potentially improve a given critical section of code.  This
       includes all sorts of optimization including things like moving
       loop-invariant code out of loops, doing other types of code motion,
       detecting common sub-expressions, doing strength reductions,
       induction variable detection, loop unrolling, code scheduling, etc.

    2. Register allocation.  Macro code frequently starts out with relatively
       good (though typically less than optimal) register allocation (if
       the code sequence is non-trivial).  Over time, the original choice
       of register allocation may become worse and worse, but it rarely
       gets changed by maintainers.

    3. Macro vs. micro optimization.  I've worked on far too much Macro code
       (including VAX Macro) that was micro-optimized but not macro-optimized,
       because Macro doesn't lend itself to thinking as much about a variety
       of possible approaches to a problem.  (I've seen bubble sorts in Macro
       that should have been coded as O(n log n) sorts in a HLL.  The time
       gained by use of Macro was totally dwarfed by people looking at "the
       trees rather than the forest", and by having them spend their
       time working on getting the Macro version correct rather than
       spending it on overall performance.)  My experience has been
       that writing everything in a HLL and using something like VAX PCA to
       determine where the bottlenecks are is a far more profitable use of
       time than attempting to write things in Macro and not having the
       time available to make changes like moving code between routines
       or re-organizing data structures for more efficient access.

    4. Tradeoffs.  One of the things that makes the newest generation of
       compilers better is improved use of heuristics for areas such as
       relative profitability and register allocation.  For example, the
       GEM-based compilers will do things like allocate constants to
       registers on MIPS processors, provided that it looks profitable for
       the code in question.

    5. Portability.  Long-term performance requires the ability to switch
       to new systems as technology allows.  For example, code written in
       HLLs 20 years ago runs MUCH faster today than the corresponding
       assembly code.  Consider the move from PDP-11s to VAXes.  Code
       written in PDP-11 FORTRAN now runs blindingly fast on VAX 9000s
       (particularly if the code can be vectorized by the new VAX FORTRAN
       "High-performance option").  Code written in PDP-11 Macro will at
       best have been partially hand-translated to VAX Macro, or at worst,
       run many times slower under the PDP-11 emulator.
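
As a hedged C sketch of one transformation from point 1 - strength
reduction of an induction variable - the following pair shows the source
form and the form an optimizer effectively derives (function names are
invented for illustration, not from any DEC compiler):

```c
#include <stddef.h>

/* Source form: the array index costs a multiply on every iteration. */
long sum_stride(const long *a, size_t n, size_t stride)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i * stride];
    return total;
}

/* Strength-reduced form: the multiply on the induction variable i
   becomes a running pointer increment - one addition per iteration. */
long sum_stride_reduced(const long *a, size_t n, size_t stride)
{
    long total = 0;
    const long *p = a;
    for (size_t i = 0; i < n; i++, p += stride)
        total += *p;
    return total;
}
```

A compiler applies this mechanically wherever it is profitable; a Macro
programmer has to spot each opportunity by hand and keep the transformed
code correct through every later change.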

While compilers will never generate 100% "perfect" code, as a general rule
they do well enough along enough different axes at the same time to
leave assembly language programmers little reasonable chance of beating
them.  Since code changes over time and compilers
continue to improve, it is quite likely that HLL code will increase in
speed over time, while Macro code will degrade over time (with changes,
and with new processor models whose relative instruction timings deviate
from those of the processors for which the code was written, etc.)  This is
especially true of more modern architectures.  (Note that one way of
viewing a compiler is as an expert system for generating code and helping
people implement the things they need.  Compilers improve over time along
all axes, including code generation, supplying probable error correction
in some cases, etc.)

Note that future VAXes are likely to exact a significant performance
penalty for coding in Macro (much as using the decimal instruction set
on machines without that feature caused a performance bottleneck for
some COBOL applications.)  It is definitely in the best interest of
customers to consider using HLLs for any code that is likely to be around
for a while unless there is some VERY strong reason why a HLL isn't
appropriate.  (Which, if the past is any indication, is basically
everything... :-) )

FWIW - If you think I'm just a compiler bigot - much of my early programming
work was on '8s, doing things like cramming plotter control code into 512
12-bit words of memory, and trying to optimize performance of various things.
Times have changed, and compilers have become FAR more sophisticated than they
once were, and future versions are likely to be more so.  (Coding in assembler
is increasingly like building your own car.  While once you could reasonably
hope to do things on your own, the economies of scale, etc., have yielded
mass-produced cars that tend to be extremely hard to beat as a package.)
Just one example - the code that GEM generates for many loops for MIPS is as
good as we know how to make it.  Basically, modern architectures are being
increasingly designed based on HLLs, rather than assembly coders, as the
design center.  Taking full advantage of these types of systems generally
requires a compiler.

Kent