
Conference rusure::math

Title:Mathematics at DEC
Moderator:RUSURE::EDP
Created:Mon Feb 03 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2083
Total number of notes:14613

309.0. "Computer Arithmetic Conf." by TOOLS::STAN () Wed Jun 19 1985 18:30

From:	GAUSS::ELEBLANC     "Emile LeBlanc, DTN:225-5918, MS:HLO2-3/N04" 19-JUN-1985 16:45
To:	HARE::RABINOWITZ ! sent to @SCI,ELEBLANC    
Subj:	7th Symposium on Computer Arithmetic


    +---+---+---+---+---+---+---+
    |   |   |   |   |   |   |   |
    | d | i | g | i | t | a | l |   I N T E R O F F I C E   M E M O
    |   |   |   |   |   |   |   |
    +---+---+---+---+---+---+---+
                                       From: Emile LeBlanc
    To: Eastern Research Lab           Date: 13 June 1985
        @SCI                           Dept: Eastern Research Lab
                                       DTN:  225-5918
                                       Loc:  HLO2-3/N04
                                       Enet: GAUSS::ELEBLANC

    Subject:   ARITH-7: 7th Symposium on Computer Arithmetic
               June 4-6, 1985 at Jumer's Castle Lodge in Urbana, IL

    Reference:  "Proceedings: 7th Symposium on Computer Arithmetic",
                Kai Hwang, Editor


         The conference was sponsored by  the  IEEE  Computer  Society
    Technical  Committee  on Computer Architecture in cooperation with
    the University of Illinois at Urbana.  There were eleven  sessions
    covering  a wide range of topics dealing with current studies into
    both the software and the hardware aspects of computer arithmetic.
    Some  discussions  concerning  the future of the field of computer
    arithmetic were also presented.

         Listed below are the title and chair of each session, followed
    by brief descriptions of the reports presented in that session,
    along with opinions or comments on selected reports.


    Session 1 - Efficient Adders and ALU Designs
                (Chair: Daniel Atkins, U of Michigan)

         There were presentations  by  Barnes  (IBM  Yorktown),  Irwin
    (Penn State), Barlow (Penn State) and Ciminiera (Politecnico di
    Torino) on hardware descriptions of some  new  methods  of  adding
    numbers.   Kobayashi  (U  of  South  Carolina)  presented  only an
    algorithm for the parallel summation of two's complement numbers.


    Session 2 - Fast Multipliers and Dividers
                (Chair: Earl Swartzlander, Jr., TRW Redondo Beach)

         Some new hardware algorithms for multiplication and  division
    were  presented  by Torng (Cornell), Cardin (Concordia), Ercegovac
    (UCLA), Dadda (Politecnico di Milano) and  Taylor  (UC  Berkeley).
    The  Torng  report  discussed  a  design  for  fast scalar product
    hardware by utilizing an algorithm that minimizes the alignment of
    the  summands.   Some  of  the multiplication people expressed the
    opinion (somewhat jokingly) that division was a waste of time  and
    the division people expressed the obvious opposite opinion.


    Session 3 - Floating-Point Arithmetic
                (Chair: William Kahan, UC Berkeley)

         The design of floating point processors on a RISC (software
    implementation  on  the  Stanford MIPS machine) by Gross (CMU), in
    VLSI (claimed to run at 10 megaflops in a private  discussion)  by
    Fandrianto  (Weitek) and in CMOS chips (implementation of the IEEE
    754  standard)  by  Eldon  (TRW  La  Jolla)  was  discussed.    An
    architecture  and some simulation studies for unary functions (for
    example, square root, reciprocal, and reciprocal square root) were
    presented by Schneider (Lockheed).  The method used for the
    simulation could have been improved in some  simple  ways  to  get
    more information.  The authors used the IEEE standard for floating
    point in their studies.  The simulation indicates that  relatively
    high   speed   calculations   are   possible  using  off-the-shelf
    components.   An   axiomatic   description   of   floating   point
    arithmetics  was  presented by Zadrozny (North Texas State).  This
    has essentially been done before to the same degree (according  to
    comments from the audience) and Zadrozny did not explicitly indicate
    where his treatment improved on previous work.
    Part  of the description of a general floating point arithmetic in
    the scheme presented (the hardest, perhaps impossible part of  the
    work yet to be done) is a level describing 'how the operations are
    performed',  and  it  appears  that  no  new  useful   method   of
    categorizing arithmetics at this level was given.
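
         For illustration only: hardware units for such unary functions
    commonly refine a table-lookup seed with a few Newton-Raphson steps.
    The Python sketch below shows that refinement idea; it is a generic
    sketch, not Schneider's architecture.

        def rsqrt(x, seed, iterations=4):
            # Newton-Raphson refinement of a guess for 1/sqrt(x); each
            # step roughly doubles the number of correct bits.
            y = seed
            for _ in range(iterations):
                y = y * (3.0 - x * y * y) / 2.0
            return y

        # In hardware the seed would come from a small lookup table; here
        # a rough guess converges to double precision in a few steps.
        assert abs(rsqrt(2.0, seed=0.7) - 2.0 ** -0.5) < 1e-12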


    Session 4 - Systolic Arithmetic Schemas
                (Chair: Daniel Gajski, U of Illinois, Urbana)
                (Gajski was actually not present at the Symposium)

         Systolic algorithms were presented for polynomial  evaluation
    and  matrix  multiplication  by  Schaeffer (U of Alberta), and for
    integer GCD calculation (an optimized algorithm by Brent and Kung)
    by  Kung.   An  algorithm for partitioning a recursive computation
    for a fixed size VLSI architecture was given  by  Cheng  (Purdue).
    This algorithm was applied to the calculation of a scalar product.
    Wafer scale integration for the DFT (Discrete  Fourier  Transform)
    on an array processor using the PFA (Prime Factor Algorithm) was
    presented by Moldovan (USC).
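
         As a concrete illustration (my own Python sketch, not taken from
    the paper), the Brent-Kung array is, as I understand it, built around
    a binary shift-and-subtract GCD, which uses only shifts, subtractions
    and parity tests:

        def binary_gcd(a, b):
            # Binary GCD: no divisions, only shifts, subtractions and
            # parity tests, which is what makes it attractive for VLSI.
            a, b = abs(a), abs(b)
            if a == 0 or b == 0:
                return a + b
            shift = 0
            while (a | b) & 1 == 0:        # remove common factors of two
                a >>= 1
                b >>= 1
                shift += 1
            while a & 1 == 0:
                a >>= 1
            while b:
                while b & 1 == 0:
                    b >>= 1
                if a > b:
                    a, b = b, a
                b -= a                     # both odd, so the difference is even
            return a << shift

        assert binary_gcd(48, 180) == 12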


    Session 5 - Directions in Computer Arithmetic (Panel Discussion)
         (Moderator: Kai Hwang, USC)

         The panel  members  were  Ercegovac  (UCLA),  Kulisch  (U  of
    Karlsruhe),  Robertson  (U  of  Illinois,  Urbana),  Matula (SMU),
    Swartzlander  (TRW  Defense  Systems  Group),   Aiso   (Keio   U),
    Fandrianto  (Weitek)  and  Dadda  (Politecnico di Milano).  Future
    directions in computer arithmetic and the  fate  of  future  Arith
    symposia were discussed.  The members presented their opinions and
    then questions from the audience  were  fielded.   Ercegovac  felt
    that  technology  was  moving  so  fast  that it was difficult for
    floating point arithmetic  to  keep  up,  and  that  teaching  and
    getting  new  students  interested  in  the  field  was  even more
    important now.  Kulisch said he felt that the ACRITH approach  was
    the  proper  direction  in which to move in the future.  Suggested
    extensions to ACRITH included new  language  keywords  to  specify
    when  maximum accuracy was needed in a program, and the ability to
    have standard Fortran code execute with maximal  accuracy.   There
    was  no  discussion  of  the  difficulties  of  implementing  such
    features on top of the Kulisch-Miranker methodology that is at the
    heart of ACRITH.  Kulisch mentioned he felt that the IEEE standard
    was an outdated idea.  Matula said he felt that a good dialogue had
    started between high-level users and hardware designers.  This
    should be maintained for the benefit of
    the  user.   He  suggested  that  perhaps a new system of computer
    arithmetic  (a  rational   or   logarithmic   system)   might   be
    appropriate.   Some  system  with  more  predictable behavior (for
    example, in floating point 1, 2, 3, ...  are all exact but most of
    1/1,  1/2,  1/3,  ...  are inexact) could be more useful.  Perhaps
    the integer GCD should be a primitive operation.   The  industrial
    input was primarily that speed was the most important feature that
    industry desires, with reasonable accuracy.  There seems to be the
    view  among  some customers that the present accuracy is adequate,
    and additional accuracy at the price of performance would  not  be
    acceptable  or  necessary.   It may just be a matter of time until
    the users find themselves requiring more accurate results, and  by
    then  they  will  probably still need speed too.  Another question
    was how complicated the hardware should  be  and  how  the  newest
    technology impacts our views of computer arithmetic.  How the
    latest  advances   in   symbolic   manipulation   and   artificial
    intelligence  change  how  arithmetic should be performed was also
    mentioned.
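
         Matula's point about predictability is easy to demonstrate in any
    binary floating point system; a small Python illustration:

        from fractions import Fraction

        # Small integers are represented exactly in binary floating point...
        assert all(float(n) == n for n in range(1, 1000))

        # ...but most of their reciprocals are not: 1/3 must be rounded to
        # the nearest representable value, so it is no longer exactly 1/3.
        assert Fraction(1.0 / 3.0) != Fraction(1, 3)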


    Session 6 - Elementary Function Evaluation
                (Chair: Mary J. Irwin, Penn State U)

         This  session  dealt   with   hardware   considerations   for
    evaluation of elementary functions.  A paper by Naseem and Fisher
    (Michigan State) on modifications to the CORDIC algorithm for
    elementary function evaluation, making it faster and amenable to
    pipelining and parallel execution, was canceled since the authors
    were not present.  This sounds like an important and interesting
    result.  A
    possible three dimensional VLSI hardware polynomial transformer (a
    method  of translating 'n' Boolean functions of 'm' variables to a
    unique single variable polynomial via  Galois  field  theory)  was
    presented by Aiso (Keio U).  A VLSI square root program that can
    be accurate to the last  bit  was  implemented  and  described  by
    Bannur  (National),  and a pipeline architecture for computing the
    cumulative  hypergeometric  distribution  using  a  regular   VLSI
    implementation was described (but not implemented) by Ni (Michigan
    State).  Finally Dadda (Politecnico di Milano) presented a  method
    of squaring a binary number quickly in hardware.
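
         For readers unfamiliar with CORDIC, the conventional rotation
    recurrence (the textbook algorithm, not the Naseem-Fisher
    modifications) can be sketched in Python as follows:

        import math

        def cordic_sin_cos(theta, iterations=32):
            # Rotation-mode CORDIC: rotate a pre-scaled unit vector through
            # the angles atan(2**-i), choosing each direction so that the
            # residual angle z is driven toward zero.  Valid for
            # |theta| < about 1.74 radians.
            angles = [math.atan(2.0 ** -i) for i in range(iterations)]
            gain = 1.0
            for i in range(iterations):
                gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = gain, 0.0, theta
            for i in range(iterations):
                d = 1.0 if z >= 0.0 else -1.0
                x, y, z = (x - d * y * 2.0 ** -i,
                           y + d * x * 2.0 ** -i,
                           z - d * angles[i])
            return y, x                    # approximately (sin, cos) of theta

        s, c = cordic_sin_cos(0.5)
        assert abs(s - math.sin(0.5)) < 1e-6 and abs(c - math.cos(0.5)) < 1e-6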


    Session 7 - Rational and Residue Arithmetic
                (Chair: Fred J. Taylor, U of Florida, Gainesville)

         Hardware  residue  arithmetic  using  read  only  associative
    memory  was presented by Papachristou (Case Western).  Zhang (SMU)
    also described residue  arithmetic,  but  using  systolic  arrays.
    Matula talked about a hyperbolic rational number system and how it
    can be simulated in any high level language that supports floating
    point.   The  idea sounded interesting.  A description of a faster
    modified Leibowitz  algorithm  for  computing  products  modulo  a
    Fermat  number  was  given  by  Truong  (USC).   The  algorithm is
    suitable for a VLSI implementation and has  some  applications  to
    coding   theory.    A   description   and   some   of   the  major
    characteristics of a finite  precision  lexicographically  ordered
    continued  fraction  number  system  developed by Matula (SMU) and
    Kornerup (Aarhus U) was given.
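
         The general flavor of residue arithmetic, independent of any of
    the hardware realizations above, is easy to show in software:
    operations proceed digit-wise modulo a set of pairwise coprime
    moduli, and the Chinese Remainder Theorem recovers the conventional
    value.  A small Python sketch (my own illustration):

        from math import prod

        MODULI = (7, 11, 13)         # pairwise coprime; range is 7*11*13 = 1001

        def to_rns(x):
            return tuple(x % m for m in MODULI)

        def rns_add(a, b):
            # Addition is carry-free: each residue channel is independent.
            return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

        def rns_mul(a, b):
            return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

        def from_rns(r):
            # Chinese Remainder Theorem reconstruction.
            big_m = prod(MODULI)
            x = 0
            for ri, m in zip(r, MODULI):
                mi = big_m // m
                x += ri * mi * pow(mi, -1, m)    # pow(mi, -1, m): modular inverse
            return x % big_m

        a, b = to_rns(123), to_rns(45)
        assert from_rns(rns_add(a, b)) == (123 + 45) % 1001
        assert from_rns(rns_mul(a, b)) == (123 * 45) % 1001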


    Session 8 - Arithmetic in Signal/Image Processing
                (Chair: Peter Kornerup, Aarhus U)

         Taniguchi (Toshiba) presented an overview of their 'state  of
    the  art'  three  dimensional  IC work and its application to high
    speed image processing.  Comments were made to the effect that, by
    the time the technology advances to the point where 3D chips can
    actually be produced with reasonable yield, single-layer ICs will
    be as dense as the 3D chips.  Swartzlander
    (TRW Redondo Beach) described a VLSI  implementation  for  a  high
    speed  FFT  (Fast Fourier Transform) using floating point.  A chip
    for two dimensional DFT (Discrete Fourier Transform) was described
    in detail by Atkins (U of Michigan).  A pyramidal architecture for
    parallel image analysis was presented by  Stefanelli  (Politecnico
    di  Milano).   The processors are in a pyramidal structure and can
    be used to compress information from one level  to  the  next.   A
    description  of  a  hardware  approach  to  realize a fast residue
    arithmetic in order to speed up the FFT was presented by Taylor (U
    of Florida).
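
         For orientation, the radix-2 decimation-in-time recursion that
    FFT hardware of this kind accelerates is sketched below in Python;
    this is the standard textbook algorithm, not any of the presented
    designs.

        import cmath

        def fft(x):
            # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
            n = len(x)
            if n == 1:
                return list(x)
            even, odd = fft(x[0::2]), fft(x[1::2])
            tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
            return ([even[k] + tw[k] for k in range(n // 2)] +
                    [even[k] - tw[k] for k in range(n // 2)])

        # Spot-check against a direct DFT of a small vector.
        data = [1.0, 2.0, 0.0, -1.0]
        direct = [sum(data[t] * cmath.exp(-2j * cmath.pi * f * t / 4) for t in range(4))
                  for f in range(4)]
        assert all(abs(a - b) < 1e-12 for a, b in zip(fft(data), direct))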


    Session 9 - Large-Scale Scientific Computers
                (Chair: Ahmed Sameh, U of Illinois, Urbana)

         Methods  for  using  parallelism  to  solve   PDEs   (partial
    differential  equations)  by  Gannon  (Purdue),  for computing the
    generalized singular  value  decomposition  of  a  matrix  by  Luk
    (Cornell)  and  for  finding  the  eigenvalues of a real symmetric
    matrix by Sorensen and Dongarra  (Argonne)  were  presented.   The
    eigenvalue  algorithm  is  a  very fast one, and it turns out that
    this parallel algorithm is faster even on  a  serial  machine  for
    higher  order  problems.   A  very  simplified  description of the
    algorithm is that it peels off  rank  one  pieces  of  the  matrix
    repeatedly  and  then  solves  the  simpler problems.  Hwang (USC)
    described a proposed  reconfigurable  multiprocessor  architecture
    for implementing various compound arithmetic functions (e.g.
    complex, interval, vector, matrix, polynomial) by
    interconnecting a number of simple arithmetic units via a network.
    He claimed that, for reasonably large problems, the speed of  this
    dynamic  system  is  almost  as  fast as a static method, and this
    system has the advantage of being more versatile  and  potentially
    smaller.
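
         The "peeling" can be made concrete with a small example:
    subtracting a single rank-one piece decouples a symmetric tridiagonal
    matrix into two independent blocks that can then be solved
    separately.  The Python/NumPy sketch below shows the tearing step
    only; it is my own illustration, not the Sorensen-Dongarra
    implementation.

        import numpy as np

        n = 6
        d = np.arange(1.0, n + 1)           # arbitrary diagonal entries
        e = np.full(n - 1, 0.5)             # off-diagonal entries
        T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

        # Tear at position k: subtracting rho * outer(v, v), a rank-one
        # piece with v = e_{k-1} + e_k, removes the coupling between the
        # two halves of the matrix.
        k = n // 2
        rho = T[k - 1, k]
        v = np.zeros(n)
        v[k - 1] = v[k] = 1.0
        T_torn = T - rho * np.outer(v, v)

        assert np.allclose(T_torn[:k, k:], 0.0)      # now block diagonal
        # The two blocks can be solved independently (and recursively);
        # the full spectrum is then recovered from the rank-one update.
        eigs = np.concatenate([np.linalg.eigvalsh(T_torn[:k, :k]),
                               np.linalg.eigvalsh(T_torn[k:, k:])])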


    Session 10 - Fault-Tolerant Arithmetic
                 (Chair: Algirdas Avizienis, UCLA)

         Three of the four speakers for this session were not present.
    Avizienis  (UCLA)  discussed  an  error  correcting code for a two
    dimensional system.  Although there are some very  unlikely  cases
    where  miscorrection may occur, in general the error detection and
    correction sounds quite robust.


    Session 11 - New Arithmetic Systems
                 (Chair: Harvey Garner, U of Pennsylvania)

         A dynamically reconfigurable processor designed by Chiarulli,
    Rudd  and  Buell  that  will be constructed at Louisiana State was
    described.  It has a 256-bit ALU and can  be  partitioned  into  1
    giant  ALU,  or 2 128-bit ALUs, ..., or 8 32-bit ALUs.  Slices can
    be logically connected or not as desired, and there are local  and
    global  condition codes for each configuration (the implementation
    of the dual sets of condition codes was thought to be a good  idea
    by  a  member  of  the  audience  working  on  a similar project).
    Estimates by the researchers imply that it will  be  a  very  fast
    processor    for   computational   number   theory   and   integer
    factorization  (its  original   motivation).    Complex   interval
    division with maximal accuracy (i.e. the smallest rectangular
    interval containing the true solution region) was described
    by  Wolff  von  Gudenberg  (U of Karlsruhe).  The algorithm can be
    slow due to the number of cases that may need to be evaluated, and
    an  accurate  scalar  product  of 3 or 4 floating point numbers is
    necessary in some cases.  Rump (IBM  Boeblingen)  gave  his  usual
    lecture  on the Kulisch-Miranker theory of semimorphisms and exact
    scalar product arithmetic, and its application to a "Higher  Order
    Computer Arithmetic".  A linear system involving the 21-dimensional
    Hilbert matrix was solved as an example of the utility
    of  ACRITH.   Bleher  (IBM Boeblingen), the ACRITH project leader,
    gave a talk describing ACRITH.  Once again ACRITH was presented as
    a complete system that appears to be able to solve any engineering
    problem.  No news about the  future  directions  was  given  until
    prompted  from  the  audience,  and then Bleher repeated the ideas
    given by Kulisch during the panel discussion.  Finally, Kahan  (UC
    Berkeley)  presented  the  'Anomalies  in  ACRITH' report.  He was
    relatively calm and didn't seem to generate any resentment  (which
    his blunt style has been known to cause) from the audience.  The
    anomalies include examples demonstrating poor speed, excessive use
    of  memory,  false  claims, misleading documentation, poor results
    and unexpected results.   He  presented  more  examples  than  are
    listed  in  the report and only got party-line resistance from the
    ACRITH group in the audience (i.e. everything is perfect the way
    it is... thanks for pointing out some interesting facts, but you
    can't really mean that there is anything wrong with ACRITH).
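
         The Hilbert matrix is a standard stress test because it is
    extremely ill conditioned: even at modest sizes, ordinary double
    precision elimination loses essentially all accuracy, which is the
    situation that exact scalar product arithmetic is meant to address.
    A small Python sketch (my own, not the IBM demonstration):

        from fractions import Fraction

        def solve(a, b):
            # Plain Gaussian elimination with partial pivoting; works for
            # both float and exact Fraction entries.
            n = len(a)
            m = [row[:] + [bi] for row, bi in zip(a, b)]
            for col in range(n):
                piv = max(range(col, n), key=lambda r: abs(m[r][col]))
                m[col], m[piv] = m[piv], m[col]
                for r in range(col + 1, n):
                    f = m[r][col] / m[col][col]
                    for c in range(col, n + 1):
                        m[r][c] -= f * m[col][c]
            x = [0] * n
            for r in reversed(range(n)):
                x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
            return x

        n = 12
        hilbert = lambda cast: [[cast(1) / cast(i + j + 1) for j in range(n)]
                                for i in range(n)]
        b = [sum(row) for row in hilbert(Fraction)]    # exact right-hand side;
                                                       # the true solution is all ones
        x = solve(hilbert(float), [float(v) for v in b])
        print(max(abs(xi - 1.0) for xi in x))          # typically of order one at n = 12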

         After the official end of the session there  was  a  debating
    period  between  the  ACRITH  group  and Kahan.  The ACRITH people
    tried to explain some of the anomalies but  in  fact  demonstrated
    either  a  lack  of  understanding  of  the  real problems, or the
    intention  of  misleading  the  audience  with  irrelevant  points
    (please  note  that I, the author of this report, have tried to be
    strictly factual, but that I did co-author the 'Anomalies'  report
    with  Kahan).   Kahan  did  get  a  little more heated during this
    period, but was careful to tread  on  ground  where  he  knew  his
    facts.   As  usual  the  ACRITH  group  would  not acknowledge any
    problems with their product.

Wed 19-Jun-1985 15:41 Hudson, MA