
Conference 7.286::digital

Title:The Digital way of working
Moderator:QUARK::LIONEL
Created:Fri Feb 14 1986
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:5321
Total number of notes:139771

3662.0. "64 Bit Computing, Advanced Systems article" by MROA::KAMINSKY_K () Wed Feb 01 1995 14:48

    In the January issue of Advanced Systems magazine, there is an
    article comparing a dual-processor 190MHz AlphaServer to an ALR Q-4SMP
    and an HP 9000 Series 700 Model 755.
    
    They had some nice things to say about the AlphaServer; however,
    they made a statement that bothered me.
    
    "...64 bit chips offer much larger address space.  However, Advanced
    Systems has so far been unable to find a commercial application that
    exceeds the address space offered by 32-bit chips, hence requiring a
    64-bit chip.  (If you know of one, send a message to 
    [email protected])"
    
    It seemed they were saying that no one needs 64-bit computing, so
    don't bother until there are applications that require it.
    
    This is a message that Digital needs to combat hard.  It is obviously
    a favorite comeback of our competitors that cannot offer 64-bit
    hardware today.
    
    Can anyone get a reply to this guy?  Are there any applications
    requiring 64-bit computing?
    
    Ken
    
T.R  Title  User  Personal Name  Date  Lines
3662.1  WIBBIN::NOYCE "Alpha's faster: over 4.2 billion times (per minute)"  Wed Feb 01 1995 15:22  1
See note 3628.
3662.2  AXEL::FOLEY "Rebel without a Clue"  Thu Feb 02 1995 15:30  8

	If nobody needs 64 bits, then why are IBM, HP, and others working
	on 64-bit chips?

	Hmmm?

							mike
3662.3  "Who said anything about need?"  KOALA::HAMNQVIST "Reorg city"  Thu Feb 02 1995 16:34  7
|	If nobody needs 64 bits, then why are IBM, HP, and others working
|	on 64-bit chips?

Because they think they know how to create the demand for it. We are
pioneering the technical issues for them. 

>Per
3662.4  KOALA::HAMNQVIST "Reorg city"  Thu Feb 02 1995 16:36  3
BTW, is anyone from Digital looking into making NT a 64-bit system?

>Per
3662.5  PCBUOA::KRATZ  Thu Feb 02 1995 16:45  2
    Bill Gates (it is a Microsoft product) has said that they'll look
    into it when and if customer demand dictates.  kb
3662.6  RT128::KENAH "Do we have any peanut butter?"  Thu Feb 02 1995 17:47  19
    Fifteen years ago, nobody believed an application could use up an
    entire 32-bit address space, either.
    
    Maybe 64-bit applications aren't "necessary" now -- but as long as the
    hardware can support it, someone will make use of it eventually.
    
    And "eventually" is likely to be closer to 5 years than 15.
    
    Bob Supnik has an interesting presentation on the evolution of
    computing power.  Workloads have, on average, needed one more
    bit of address space per year.  Fifteen years ago, we went from
    16 to 32 bits.  Almost exactly 16 years later, we ran out of room
    on 32-bit machines.
    
    So, today we may not need 64-bit machines, but we surely do need (and
    are using) address spaces that require more than 32 bits.  The logical
    next step is 64.
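
    A back-of-the-envelope sketch of that rule in C (the 1995 crossover
    and the one-bit-per-year rate are assumptions taken from the argument
    above, not from Supnik's actual data):

        #include <stdio.h>

        int main(void)
        {
            /* assume 32 bits ran out around 1995 and demand grows
               by roughly one address bit per year */
            int year;
            for (year = 1995; year <= 2027; year += 8)
                printf("%d: ~%d address bits\n", year, 32 + (year - 1995));
            return 0;
        }

    On that extrapolation, 64 bits of address space would last until
    around 2027.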
    
    					andrew
3662.7  "Why wait for Gates?"  KOALA::HAMNQVIST "Reorg city"  Thu Feb 02 1995 17:50  16
|    Bill Gates (it is a Microsoft product) has said that they'll look
|    into when and if customer demand dictates it.  kb

I thought the AXP port of NT was a result of our proactive work, not them
concluding, all by themselves, that it was the right thing to do.  And as only a
tiny fraction of the market is even able to run a 64-bit OS, I really doubt
Microsoft will undertake the work, by themselves, until the major HW players
have at least single-digit desktop penetration.

Along those lines it would seem logical for us to facilitate Microsoft's
work to make NT 64-bit.  Because if we do, then we can proudly say that we
are one of the few that can offer 64-bit NT.  That, in itself, could be
reason enough for some customers to buy our AXPs rather than wait for
Intel to release a "debugged" 64-bit chip.

>Per
3662.8  NETCAD::SHERMAN "Steve NETCAD::Sherman DTN 226-6992, LKG2-A/R05 pole AA2"  Thu Feb 02 1995 18:03  23
    Man, I remember when folks figured that 16-bit microprocessors were
    more than what was needed.  I never heard anyone say, "We'll go to
    32-bit machines when and if the market demands it."  I think arguments
    against 64-bit are a smoke screen.  And, coming from Mr. Gates, I'm
    quickly reminded of DOS's 640K barrier ...  
    
    We all seem to be operating under an assumption that the PC industry is
    seeking the best standards for the consumer.  Truth is, the PC industry
    is in it for the bucks.  The reason that so many are talking down
    64-bits is simply because *they* can't make a buck with it, yet.
    
    Reminds me of when 7-Up advertised that they didn't have any caffeine.
    Cola companies were up in arms!  I was told by a cola vendor how
    caffeine in colas was "natural" and so forth.  Then, after the cola
    companies were able to introduce "caffeine-free" versions of their
    products things settled down.
    
    Same with 64-bit.  I figure HP, MS, Intel and every other vendor and 
    their dogs will shout the praises of 64-bit as soon as they've finished 
    setting themselves up to make a buck with it.  Meanwhile, all that
    talking up 64-bit would do is help us sell Digital's stuff ...
    
    Steve
3662.9  GEMGRP::gemnt3.zko.dec.com::Winalski "Careful with that AXP, Eugene"  Thu Feb 02 1995 18:19  25
RE: .7

The Alpha port of NT came about because we went to Microsoft and 
convinced them that it was a good idea for them to have a RISC 
alternative for NT in case MIPS didn't make it.  At the time, both 
MIPS and the R4000 chip were in trouble.

We have never had much input into the technical or feature content of 
NT.  Our role has been supplying compilers and the hardware specific 
portions of the NT code, plus consulting and testing.  Microsoft 
retains full control over features and design.  We can offer advice 
and suggestions, but that's about all.

The PC marketplace is still making the transition from 16 bits to 32 
bits.  Gates is absolutely correct that this marketplace has no need 
for 64 bits right now.

Furthermore, moving to 64 bits would require some incompatible API 
changes.  Right now, well-behaved Win32 applications need merely to 
be recompiled and they will run on Windows NT for the Alpha.  If we 
switched to a 64-bit NT environment, a lot more work would be 
required to port apps.  We are having enough trouble attracting PC 
apps to Alpha NT without putting any more hurdles in place.
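
To make the recompile-vs-port point concrete, here is an illustrative
sketch (generic C, not actual Win32 code) of the idiom that breaks when
pointers widen to 64 bits while int stays at 32:

    #include <stdio.h>

    int main(void)
    {
        char  buf[8];
        char *p = buf;
        int   i = (int)p;      /* silently drops the high bits on an
                                  LP64 system */
        char *q = (char *)i;   /* q need no longer equal p */
        printf("%s\n", p == q ? "round-trip ok" : "pointer truncated");
        return 0;
    }

Code like this recompiles cleanly for 32-bit Win32, but has to be found
and fixed by hand before it is safe in a 64-bit environment.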

--PSW
3662.10  "Alpha chip"  ULYSSE::KRESTIC  Fri Feb 03 1995 05:43  5
    Does anyone know where a PPT picture of an Alpha chip can be found? I
    need it ASAP.
    
    
    Thanks.
3662.11  NOVA::FISHER "now |a|n|a|l|o|g|"  Fri Feb 03 1995 06:55  5
    so, in ~30 more years, folks'll be clamoring for 128 bits?
    
    yowza,
    
    ed
3662.12  ":-)"  LARVAE::64419::JORDAN_C "Chris Jordan, DC is dead, long live SI"  Fri Feb 03 1995 07:11  16
No... in 30 years time folks will be saying "We will never need 128 bits". 




Meanwhile DIGITAL will be releasing its BetaBXP machines that will be 
already equipped with 128 bits!!


Meanwhile Intel will be releasing their 15-86 chip, that is fully 
compatible with all those DOS V3.3 applications and images that still 
exist. (Of course the 15-86 chip still has a 640K memory issue).

Cheers, Chris 


3662.13  FILTON::SWANN_M "Not all those who wander are lost."  Fri Feb 03 1995 07:51  7
    All you have to do is think back to the '50s, when the then-president
    of IBM said something like "all the world's needs can be met by 3
    computers"
    
    Hah!!
    
    Mike
3662.14  "Re .10 -- see TRINTY::DIGITAL_ARTLIBRARY -- KP7 to add..."  LJSRV2::KALIKOW "DEC: Triumph of Open Innovation"  Fri Feb 03 1995 09:06  11
    This is the handiwork of the redoubtable Erik Goetze, arthacker &
    computer-art-toolmonger extraordinaire.  
    
    Also, if you're on the Web, hold onto your hat and go to --
    
    http://illustrator.pa.dec.com/html/visual-libraries.html
    
    -- and don't say I didn't warn you that you'd be gobsmacked!!
    
    :-)
    
3662.15  "We already had them"  DYPSS1::COGHILL "Steve Coghill, Luke 14:28"  Fri Feb 03 1995 09:06  17
   Re: Note 3662.8 by NETCAD::SHERMAN "Steve NETCAD::Sherman DTN 226-6992, LKG2-A/R05 pole AA2"
   

>    Man, I remember when folks figured that 16-bit microprocessors were
>    more than what was needed.  I never heard anyone say, "We'll go to
>    32-bit machines when and if the market demands it."  I think arguments
>    against 64-bit are a smoke screen.  And, coming from Mr. Gates, I'm
>    quickly reminded of DOS's 640K barrier ...
    
   The reason no one was saying "We'll go to 32-bit machines when and if
   the market demands it" during the hey-day of DEC's 16-bit processors
   is that 32-bit processors (and 24-bit, 36-bit, and 60-bit ones)
   already existed.
   
   As strange as it may seem, a few people were actually clamoring for
   less capacity and DEC gave it to them.  PDP-8s and PDP-11s filled a
   low-end niche that other computer companies were ignoring.
3662.16  "It's That 33rd Bit..."  XDELTA::HOFFMAN "Steve; VMS Engineering"  Fri Feb 03 1995 10:03  5
    I doubt there are very many applications around that require 64 bits.
    There are applications around that require 33 bits -- and that's the
    bit that *really* counts...

3662.17  KLAP::porter "who the hell was in my room?"  Fri Feb 03 1995 10:18  19
>    I doubt there are very many applications around that require 64 bits.
>    There are applications around that require 33 bits -- and that's the
>    bit that *really* counts...
> 

Nonsense - it's the low bit that *really* counts.

Consider

	int i;	/* needs <limits.h> for INT_MAX */
	for (i = 0; i < INT_MAX; i++)	/* <= INT_MAX would never terminate */
		;

Now, which bit changes state most often?  Therefore, it's the
bit that's doing the most work.  And therefore, it's the bit
that *really* counts.

Maybe that's the problem - we added 32 new bits at the wrong
end?
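
For what it's worth, here's a quick sketch that tallies the claim --
count 0..255 and record how often each bit position toggles (the 8-bit
range is just to keep the output short):

	#include <stdio.h>

	int main(void)
	{
	    unsigned int n, prev = 0, flips[8] = {0};
	    int k;

	    for (n = 1; n <= 255; n++) {
	        unsigned int changed = n ^ prev;	/* bits that flipped */
	        for (k = 0; k < 8; k++)
	            if (changed & (1u << k))
	                flips[k]++;
	        prev = n;
	    }
	    for (k = 0; k < 8; k++)
	        printf("bit %d toggled %u times\n", k, flips[k]);
	    return 0;
	}

Bit 0 toggles 255 times, bit 1 toggles 127 times, and so on -- each
higher bit does about half the "work" of the one below it.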

3662.18  ":-}"  LGP30::FLEISCHER "without vision the people perish (DTN 297-5780, MRO3-3/L16)"  Fri Feb 03 1995 10:22  6
re Note 3662.17 by KLAP::porter:

> Maybe that's the problem - we added 32 new bits at the wrong
> end?
  
        Let's not get into the endian religious wars again....
3662.19  HDLITE::SCHAFER "Mark Schafer, AXP-developer support"  Fri Feb 03 1995 10:26  2
    maybe we should recommend that all FOR loops count down from INT_MAX to
    zero.  :-)
3662.20  CSOA1::BROWNE  Fri Feb 03 1995 12:13  6
    Re: .15
    
    	Good point, Steve! And just so we do not delude ourselves, there
    are techniques to extend the address space of a 32-bit machine by a
    couple of bits without going full-scale to a new architecture.
    
3662.21  "About the Editor's Request"  I4GET::HENNING  Fri Feb 03 1995 12:33  10
    Hmm, I had hoped that the reply in .1 would send this string back into
    note 3628, rather than starting a new one.  But since it did not, I
    must reiterate my plea that if you decide to respond to Mark Cappel's
    request (in .0), *** please *** let me know about it.  
    
    Details of why in 3628.5.
    
    Thanks
    	/john henning
         csg performance group
3662.22  "Yes he has seen the release"  I4GET::HENNING  Fri Feb 03 1995 12:34  3
    P.S. Mark has already received the Oracle/Digital press release,
    apparently several times, and said to me that he only wished he had
    known about it at the time he was writing the article.
3662.23  "It's the 33rd bit, stupid!"  ATLANT::SCHMIDT "E&RT -- Embedded and RealTime Engineering"  Fri Feb 03 1995 12:56  8
Steve Hoffman:

  That's *EXACTLY* the reply I was going to write.  And right now,
  we've got the 33rd bit (along with a few others) and *THEY* don't,
  so to hide that embarrassment, *THEY* talk about how no one needs
  the 64th bit and hope no one notices the subtle obfuscation in
  their argument.
                                   Atlant
3662.24  "Bits 33 and 34 most welcome"  SNOC02::HAGARTYD "Mein Leben als Hund"  Mon Feb 06 1995 01:02  15
Ahhh Gi'day...

    Well, when I was doing work with the VAX 9000 and the SuperComputer
    Technology Centre, we ran out of address space and physical memory ALL
    THE TIME.  People (the BIGGIES) would bring in code all the time that
    had been "crippled" to fit into a VAX (there ARE bigger machines out
    there).

    I had dinner one night with a senior VMS engineer while doing this
    work.  He was sounding off about how people wanted all this extra
    addressing support in VMS, and how nobody needed it.  I tried to
    enlighten him, but seeing that the limit of his experience was BLISS
    compiles and EDT, I don't think he believed me.

    This is a niche market, but it's going to become more mainstream.
3662.25  PASTIS::MONAHAN "humanity is a trojan horse"  Mon Feb 06 1995 05:37  24
    	There will always be some problems that take more address space
    than you think.
    
    	One of the first benchmarks I ran on a VAX was modelling a gas
    turbine.
    
    	Each point in the turbine has a position (3 coordinates), gas
    velocity (4 coordinates), pressure (1 coordinate), temperature (1
    coordinate), and quite likely other factors such as chemical
    composition. They needed a multidimensional array to keep track of all
    this stuff, and with only 100 points along some axes they needed a 120
    megabyte array and thought it quite insufficient. This was in 1978.
    I imagine they would be a little more exacting now.  If they had gone to
    1000 points per axis on their model they would have needed an array a
    thousand times larger -- on the order of 100 gigabytes -- and then since
    they were doing matrix transformations they would have needed several
    of these.
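
    	As a rough check on that scaling (assuming the ~120 bytes of
    state per grid point implied by the 1978 figures):

        #include <stdio.h>

        int main(void)
        {
            /* ~120 bytes of state per grid point (assumption) */
            double pts;
            for (pts = 100.0; pts <= 1000.0; pts *= 10.0)
                printf("%4.0f points/axis -> %.0f MB per array\n",
                       pts, pts * pts * pts * 120.0 / 1.0e6);
            return 0;
        }

    At 100 points per axis that gives the 120 megabytes quoted above;
    at 1000 points per axis, a thousand times that.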
    
    	Incidentally, we won the benchmark. The VAX-11/780, even though it
    had a maximum physical memory of 2 megabytes, could handle their problem
    in an acceptable time. IBM, who at the time had a maximum of 16
    megabytes of virtual address space, had to translate the benchmark into
    accessing a disk file explicitly instead of having a large virtual
    array, and came in with a much worse benchmark time, even with their
    fastest machine.
3662.26  "Applications Not Hard to Find..."  HLDE01::VUURBOOM_R "Roelof Vuurboom @ APD, DTN 829 4066"  Mon Feb 06 1995 06:06  38
    In fact, most physical process modelling can directly scale up to 
    beneficially use more than 32 bits. Any of the examples below
    will immediately show qualitative improvement by using a far denser
    data point topology:
    
    	- meteorological modelling
    	- fluid modelling (airplane wing/car body/ship hull)
    	- dynamics (fighter plane, propeller cavitation)
    	- chemical process modelling
    	- earthquake and volcanic activity prediction (actually esoteric 
          fluid modelling)
    	- oil exploration
    	- galaxy formation
    	   	
    (Discrete) simulation applications also can generally be beneficially
    scaled up. By beneficial I mean that the results are qualitatively
    better.
    
    The true constraint has more to do with the accepted turnaround time
    for a model run than anything else.  Depending on the application, my
    guess is that an acceptable turnaround time for most applications
    is somewhere between 10 minutes and 24 hours.  The accepted turnaround
    time and the processing time needed for each data point together
    determine the number of data points that can be processed.
    This number is then often declared to provide "sufficient" quality.
    As the number of data points that can be successfully processed
    increases, so too (coincidentally) does the norm of what constitutes
    "sufficient" quality.
    
    Look for processes that require non-intensive processing for each
    data point, as these can handle the most data points -- with the
    resultant push for larger address spaces.
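
    A minimal sketch of that constraint (both figures are illustrative
    assumptions, not numbers from any particular model):

        #include <stdio.h>

        int main(void)
        {
            double budget = 24.0 * 3600.0;  /* accepted turnaround: 24 h */
            double per_pt = 1.0e-6;         /* assumed seconds per point */
            printf("max data points per run: %.3g\n", budget / per_pt);
            return 0;
        }

    Halve the per-point cost, or double the accepted turnaround, and the
    affordable number of data points doubles -- and with it the pressure
    on address space.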
    
    re roelof
    
    
     
    
3662.27  TLE::REAGAN "All of this chaos makes perfect sense"  Mon Feb 06 1995 10:06  12
    I have a customer in Connecticut that does financial modelling for
    Wall Street firms.  I've seen snippets of their code.  I've seen
    a single array weigh in at 82MB, dozens of routines with stack-local
    variables in excess of 4MB (some of the routines are recursive, by the
    way), and 3-dimensional arrays with array cells larger than 100,000
    bytes each.
    
    While they aren't taking advantage of the 33rd bit yet (they are
    an OpenVMS Alpha shop), I have a feeling they'll want to move
    to OSF/1 sooner or later for that 33rd bit.
    
    				-John
3662.28  LGP30::FLEISCHER "without vision the people perish (DTN 297-5780, MRO3-3/L16)"  Mon Feb 06 1995 10:22  9
re Note 3662.27 by TLE::REAGAN:

>     While they aren't taking advantage of the 33rd bit yet (they are
>     an OpenVMS Alpha shop), I have a feeling they'll want to move
>     to OSF/1 sooner or later for that 33rd bit.
  
        Isn't 64-bit support coming "soon" for OpenVMS Alpha?

        Bob
3662.29  ODIXIE::MOREAU "Ken Moreau;Sales Support;South FL"  Mon Feb 06 1995 10:30  21
RE: last several

One of my customers does severe weather modelling (read: hurricanes), and
runs their application on a variety of machines.  They have built some
constants into their FORTRAN code which control the precision at which the
model runs (that is, what is the granularity of the objects on which the
model acts).  The precision can range from square miles (where the model
can easily run on DOS PCs) to individual air/water/soil molecules (where 
they would need many terabytes of memory, with no machine being
capable of handling it today).  Turn down the precision and it runs in
very small memory spaces with very good turnaround but poor results.  Turn
up the precision and you get very good answers, but you need machines bigger
than we have today and the model will take much longer to run than the
weather will take in the real world, so it is useless for predicting where
the hurricane will strike next.  So they have the value set somewhere in
the middle range.

But as machines get bigger and faster, all they have to do is change a couple
of constants.  A very elegant solution to the problem...

-- Ken Moreau
3662.30  TLE::REAGAN "All of this chaos makes perfect sense"  Mon Feb 06 1995 12:55  13
    RE: .28  
    
    1) OpenVMS only said they wanted C, C++, and Fortran support.  This
    customer is using Pascal.  OpenVMS isn't even getting Fortran
    support...
    
    2) OpenVMS is only going to support heap-allocated memory in the
    high bits.  This customer would really like the stack to grow
    into the high bits.  They also have lots of static data that they
    would like to place there as well.  Neither will be possible
    on OpenVMS (given my understanding of what is going on).
    
    				-John
3662.31  "32 bit chips are still alive"  ASABET::SILVERBERG "My Other O/S is UNIX"  Mon Feb 06 1995 14:28  100
 Worldwide News                                              LIVE WIRE
 ------------------------------------------------------------------------------
 ARM agrees to develop high performance, low ...             Date: 06-Feb-1995
 ------------------------------------------------------------------------------
        Digital and ARM agree to develop high performance, low power chips
 
         Digital and Advanced RISC Machines Ltd. (ARM) of Cambridge, 
   England have announced the licensing of the ARM RISC architecture to 
   Digital Semiconductor for the development of high performance, low 
   power microprocessors. 
   
         The StrongARM family of 32-bit RISC products to be developed 
   under the agreement is intended to complement and broaden the existing 
   ARM product line for performance-critical applications such as:
   
         o  next-generation personal digital assistants (PDAs) with 
            improved user interfaces and communications;

         o  interactive TV and set-top products;
         
         o  video games and multimedia 'edutainment' systems with
            realistic imaging, motion and sound; and
         
         o  digital imaging, including low cost digital image capture 
            and photo-quality scanning and printing.
   
         "For Digital Semiconductor, this is a strategic agreement that 
   both reinforces our merchant vendor role and demonstrates performance 
   leadership," said Ed Caldwell, vice president and general manager of 
   Digital Semiconductor.  "Today, our Alpha products provide unmatched 
   performance for desktop and server applications.  The StrongARM product 
   line will complement this strategy with its focus on enhancing 
   performance for mass-market applications in which very low power 
   dissipation is critical.
   
         "This agreement with ARM also gives us early entry into rapidly 
   growing, high volume markets,"  Caldwell added.  Industry analysts 
   estimate that the market for 32-bit RISC embedded consumer applications 
   will grow 75 percent year over year to more than $10.5 million in 1998, 
   according to an October 1994 report from InStat.

         According to Robin Saxby, managing director and CEO of ARM, 
   "Having Digital Semiconductor jointly design and build new processors 
   compliant with the ARM architecture will add momentum to ARM's 
   acceptance as the volume RISC standard for 32-bit applications.  ARM 
   processors already have the best ratios of performance to power 
   consumption and cost (MIPS/Watt and MIPS/$).  The agreement with 
   Digital will maintain our lead in these areas while allowing us to 
   pursue applications demanding very high absolute performance," Saxby 
   said. 
   
         Shane Robison, vice president and general manager of Apple 
   Computer, Inc.'s Personal Interactive Electronics Division, said Apple 
   was an early adopter of ARM microprocessor technology and had 
   incorporated the ARM 610 processor into its market-leading Newton 
   MessagePad PDA. 
   
         "Apple's Newton engineering team has been working closely with 
   Digital Semiconductor and ARM in defining the first StrongARM 
   microprocessor," Robison said.  "This design looks to significantly 
   boost compute performance while retaining the low power characteristic 
   of ARM microprocessors, both of which are critical in designing high 
   performance PDAs." 

         Jerry Banks, director/principal analyst at Dataquest said that 
   the relationship "looks to be a perfect strategic fit."  Banks noted 
   that the agreement will give ARM access to high performance 
   microprocessor design and process technology, while Digital "gains 
   ARM's expertise in low power design, as well as access to high volume 
   markets with significant potential."  
   
         The first product in the StrongARM family is currently under 
   development at Digital Semiconductor's Palo Alto, Calif., and Austin, 
   Texas, research centers and ARM's Cambridge, England headquarters.  
   
         The products developed under the agreement will be sold through 
   Digital Semiconductor's sales channels.  In addition, "processors and 
   processor cores developed under this agreement will be available for 
   licensing to other semiconductor partners," added Saxby.  "This is 
   consistent with our strategy of making the ARM architecture an open 
   standard for performance oriented, power-efficient and cost-
   effective applications."

         ARM designs, licenses and markets fast, low cost, low power 
   consumption RISC processors for embedded control, consumer/
   educational multimedia, DSP and portable applications.  ARM licenses 
   its enabling technology to semiconductor partner companies, who focus 
   on manufacturing, applications and marketing.  Each partner offers 
   unique ARM related technologies and services, which together satisfy a 
   wide range of end-user application needs.  ARM also designs and 
   licenses peripherals, supporting software and hardware
   tools and offers design services, feasibility studies and training. 
   This results in a global partnership committed to making the ARM 
   architecture the volume RISC standard.  ARM's partners include: 
   VLSI Technology, GEC Plessey Semiconductors, Sharp Corporation, Texas 
   Instruments, Cirrus Logic, Samsung, AKM and Digital Equipment 
   Corporation.  ARM was formed in 1990 by Acorn Computers, Apple Computer 
   and VLSI Technology with Nippon Investment and Finance 
   (a Daiwa Securities subsidiary) investing in 1992.

3662.32  MRKTNG::SLATER "Marc, ASE Performance Group"  Mon Feb 06 1995 20:20  20
There's an ISV who wants to be able to do statistical analysis against a 
mere 5% of the full U.S. Census returns (13,000,000 returns at about 1K
per return is, hmm, 13 Gbytes of data).  They'd like to run the analysis
out of memory to get a reasonable turn-around time.  (Before anyone asks:
the names and addresses are randomly interchanged to protect privacy.)
Can't do the whole thing on 32-bit systems.  We're about to do it on Alpha.

There's a customer who wants to do distribution resource planning for about 
1,000,000 SKU-locations, and each SKU is about 1K of data. They'd like to run
the models out of memory so that the interactive users can get a reasonable
turn-around time.  No 32-bit system has been able to do all of the SKU-locations
at once (we're not yet sure if Alpha can either, but it stands a good chance). 

There are many customers who would like to use off-the-shelf sort routines
to sort 4+ Gbytes of data.  Putting the temporary files into memory adds a
little zing to the sort.  32-bit systems can't do this.  Alpha can.

There are other examples like this where Alpha is the best solution, if not
the only one.  These problems existed before, but until Alpha came along,
they had to be solved in more costly, time-consuming, less efficient ways.
3662.33  "alpha_only notes file"  RANGER::BRADLEY "Chuck Bradley"  Tue Feb 07 1995 13:31  4
there was a notes file called alphaonly, but it seems to be gone now.
maybe someone can find the archive.

3662.34  "Other benefits of 64 bits?"  KAOM25::WALL  Tue May 02 1995 11:42  12
    Everyone seems hung up on the "necessity" for more than 32 bits, but
    what about the impact on ...
    ...memory transfer bandwidth
    ...calculation times (large integer, floating point)
    ..."probable" op codes contained per fetch (granted, you could be
    executing a branch).
    
    Does a 64 bit processor not also inherently buy some performance other
    than sheer address size?
    
    Rob Wall
    
3662.35  QUARK::LIONEL "Free advice is worth every cent"  Tue May 02 1995 13:56  13
    64 bits has no effect on calculation times - data types are independent
    and don't magically change when the "word size" changes.  The only data
    type affected is the one used to store an address.
    
    Memory transfer also is not directly affected.
    
    Opcode size depends on what the instruction architecture looks like.  In
    the case of Alpha, instructions are 32 bits.
    
    The "64 bit" nature of Alpha is its address space - though it also has
    64-bit integer types.  
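
    A sketch one could run to see the point (the LP64 convention -- 32-bit
    int, 64-bit long and pointers -- is an assumption, though it happens
    to hold on Alpha UNIX):

        #include <stdio.h>

        int main(void)
        {
            /* int stays 32 bits even on a "64-bit" machine */
            printf("int %u, long %u, char * %u bytes\n",
                   (unsigned)sizeof(int), (unsigned)sizeof(long),
                   (unsigned)sizeof(char *));
            return 0;
        }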
    
    				Steve
3662.36  ATLANT::SCHMIDT "E&RT -- Embedded and RealTime Engineering"  Tue May 02 1995 14:12  13
Steve:

> 64 bits has no effect on calculation times - data types are independent
> and don't magically change when the "word size" changes.

  That's not 100% true. Real adders are built out of real logic
  circuits, and it takes longer to propagate all the carries across
  a 64-bit word than it does across a 32-bit word. Not a lot longer,
  but somewhat longer. I *THINK* it's a log(n) problem but I'm certainly
  not up on the latest carry-lookahead adder designs.
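
  A toy model of that growth (treating each adder stage as one unit of
  delay; real carry-lookahead designs differ in detail):

	#include <stdio.h>

	int main(void)
	{
	    /* ripple carry: ~width stages; lookahead: ~log2(width) */
	    int w, b, lg;
	    for (w = 16; w <= 64; w *= 2) {
	        for (lg = 0, b = w; b > 1; b >>= 1)
	            lg++;
	        printf("%2d-bit add: ripple ~%2d stages, lookahead ~%d\n",
	               w, w, lg);
	    }
	    return 0;
	}

  With lookahead, going from 32 to 64 bits costs roughly one extra
  stage -- "not a lot longer, but somewhat longer."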

                                   Atlant

3662.37  "I think there is some benefit to 64 bit ints."  HGOVC::JOELBERMAN  Wed May 03 1995 11:55  21
    Steve,
    
    Can't one make a case that having 64-bit integers speeds up
    calculations, though?
    
    For certain ranges of values, integer types can be used where reals
    or double-word math was needed previously.
    
    For block moves, the number of instructions executed can drop.
    
    For encryption/decryption and compression codes, many fewer operations
    need be performed when 64-bit ints are available.
    
    I agree that increased address space is the major benefit, but do
    believe there are a number of specific instances where performance
    benefits from 64-bits.
    
    And not having byte data types seems to hurt in some cases.
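
    A minimal sketch of the block-operation point (assuming unsigned long
    is 64 bits, as on Alpha UNIX, and buffers that are 8-byte aligned):

        #include <stddef.h>

        /* one byte per iteration: n trips through the loop */
        void xor_bytes(unsigned char *dst, const unsigned char *src,
                       size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)
                dst[i] ^= src[i];
        }

        /* eight bytes per iteration: n/8 trips -- the 64-bit win */
        void xor_longs(unsigned long *dst, const unsigned long *src,
                       size_t nlongs)
        {
            size_t i;
            for (i = 0; i < nlongs; i++)
                dst[i] ^= src[i];
        }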
    
    
    
3662.38  QUARK::LIONEL "Free advice is worth every cent"  Wed May 03 1995 14:21  6
I never suggested there was no benefit to 64-bits.  I was simply stating that
some of the claimed drawbacks were not attributable to "64-bits".  There
are indeed many benefits to a 64-bit architecture - just having a
64-bit integer type is significant to many.

					Steve
3662.39  SX4GTO::OLSON "Doug Olson, ISVETS Palo Alto"  Mon May 08 1995 14:32  7
    I'm late coming in to this, but some of the research folks at WRL
    studied costs explicitly due to 32-bit vs 64-bit memory access
    dereferencing.  I think Jeff Mogul's and Alan Eustace's names were
    among those on the paper, which was published in the USENIX '95 Proceedings,
    and probably available through the WRL home page.
    
    DougO
3662.40  "not only address space!!!"  NAMIX::jpt "FIS and Chips"  Tue May 09 1995 04:01  11
>    64 bits has no effect on calculation times - data types are independent

	Well, 64 bits has an effect on calculation times if algorithms
	support it. For example, many algorithms can be executed with fewer
	load/store AND arithmetic ops if they use 64-bit data types.

	Where 64 bits has no effect is the time of a single operation, and
	I guess that this is what Steve meant.

	regards,
			-jari
3662.41  ATLANT::SCHMIDT "E&RT -- Embedded and RealTime Engineering"  Tue May 09 1995 11:03  29
jari:

> Where 64 bits hasn't effect is time of single operation, and I guess
> that this is what Steve meant.

  That's USUALLY true once an architecture is committed to a specific
  implementation in silicon, but it's NOT TRUE AT ALL when you're
  first designing an architecture.  Here's what I wrote in .36:

> That's not 100% true. Real adders are built out of real logic
> circuits, and it takes longer to propagate all the carries across
> a 64-bit word than it does across a 32-bit word. Not a lot longer,
> but somewhat longer. I *THINK* it's a log(n) problem but I'm certainly
> not up on the latest carry-lookahead adder designs.

  Also, sometimes hardware implementors do exactly the same thing
  in hardware that we do in software: Fabricate larger datatypes
  out of data paths that are only as wide as some smaller datatype.
  That is, a 64-bit architecture can be made to run on a 32-bit-wide
  datapath. But in this case, using 64-bit datatypes will force a
  "double-cycling" of the data paths and this WILL CERTAINLY affect
  the execution rate of the program.

  I believe you'll find examples of this in the VAX architecture
  and its various implementations (with the 11/730 and the MicroVAX-I
  being likely candidates) and I know you'll find examples of this in
  the PDP-11 architecture (with the LSI-11 being the obvious one).

                                   Atlant
3662.42  "re .41"  NAMIX::jpt "FIS and Chips"  Tue May 09 1995 12:07  5
Re: .41

Correct. I should have phrased my words better.

	-jari