
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

776.0. "Latest TPC-A Results & Competitive Results" by CIMNET::BOURDEAU (Rich Bourdeau CIM Product Marketing) Tue Oct 30 1990 14:38

From:	TPS::SHEN "Hwan Shen, TPSG/SPC, 227-4426, TAY1-2/H9  26-Oct-1990 1040" 26-OCT-1990 11:06:53.98
To:	@TP-PERF.DIS
CC:	SHEN
Subj:	Part I - Availability of Digital TPC-A Results for the VAX 6000 Model 510 System and Re-pricing of Previous Systems






        +---------------------------+ TM
        |   |   |   |   |   |   |   |
        | d | i | g | i | t | a | l |           INTEROFFICE MEMORANDUM
        |   |   |   |   |   |   |   |
        +---------------------------+




        TO:  TP Performance Distribution        DATE:  October 26, 1990
                                                FROM:  Hwan Shen
                                                DEPT:  TPSG/Systems 
                                                       Performance and
						       Characterization
                                                LOC:   TAY1-2/H9
                                                DTN:   227-4426
                                                ENET:  TPS::Shen




        SUBJECT:  Availability of Digital TPC Benchmark (TM) A 
                  Full Disclosure Reports for the VAX 6000 Model 
                  510 System and Re-pricing of Previous Systems                 



        Digital has registered and released TPC Benchmark (TM) A 
        results on the VAX 6000 Model 510 system.
        
        The test was internally audited, externally audited (by KPMG 
        Peat Marwick), and officially registered with the TPC Administrator 
        (as required) on October 25, 1990.
        
        Digital has also submitted and registered price changes for four 
        TPC-A tests performed on the VAX 9000 Model 210 and VAX 4000 
        Model 300.  The changes are based on:
    
        1) New prices taken from the Digital U.S. Price List, dated
           October 1, 1990.

        2) The use of the server package price for the VAX 4000 Model 300
           back-end host using VAX ACMS V3.1, VAX Rdb/VMS V4.0, and VAX
           DECforms V1.2 software.
     
        3) The use of general purpose system prices for the front-end 
           processors.
 





        Attached is a summary of test results.  Also included is a summary
        of the tests that HP and IBM ran and registered.  
        In separate mail, I will send you a summary (TPS vs. K$/TPS)
        PostScript graph; to view it, "extract/noheader" it to a file
        and then print that file on a PostScript printer.

        The VAX 6000 Model 510 TPC-A test reports contain a "Results" 
        report and a "Full Disclosure" report.  The "Results" report
        will be printed in glossy format in about a month.  Copies can
        be ordered through the Northboro distribution center with part 
        number EC-N0407-57.  The reports and re-pricing detailed 
        spreadsheets are also available on-line.  The on-line version 
        does not include the KPMG attestation letter or the TPC-A standard.
 
        The on-line reports and re-pricing detailed spreadsheets can be 
        found in the public directory and copied to your system as follows:  


        $copy TPSDOC::SYS$PUBLIC:[TPSG_PAMS_DOCS]6510_Rdb_RESULTS.PS  *.*

        $copy TPSDOC::SYS$PUBLIC:[TPSG_PAMS_DOCS]6510_Rdb_DISCLOSURE.PS  *.*

        $copy TPSDOC::SYS$PUBLIC:[TPSG_PAMS_DOCS]RE_PRICING.102590  *.*
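
        To print the PostScript files once copied, queue them to a
        PostScript printer.  A rough sketch follows; the queue name
        PS$PRINT and the graph file name TPS_GRAPH.PS are only examples,
        so substitute your site's PostScript queue and whatever name you
        give the extracted graph:

        $ MAIL                          ! extract the mailed graph, minus headers
        MAIL> EXTRACT/NOHEADER TPS_GRAPH.PS
        MAIL> EXIT
        $ PRINT/QUEUE=PS$PRINT TPS_GRAPH.PS
        $ PRINT/QUEUE=PS$PRINT 6510_Rdb_RESULTS.PS, 6510_Rdb_DISCLOSURE.PS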


        If you have any questions, please contact me.









                           Digital's TPC Benchmark A Results
                           ---------------------------------




    DECtp Systems Results
    ---------------------


                                                             Old       New**   
         CPU Model          Software              TPS*      K$/TPS*   K$/TPS* 
  ----------------------------------------------------------------------------

    VAX 4000 Model 300      VMS V5.4              21.7      $31.9K      NA
                            VAX ACMS V3.1
                            VAX Rdb/VMS V3.1  
                            VAX DECforms V1.2

    VAX 4000 Model 300      VMS V5.4              21.6      $32.1K    $28.3K  
                            VAX ACMS V3.1
                            VAX Rdb/VMS V4.0  
                            VAX DECforms V1.2

    VAX 6000 Model 510      VMS V5.4              32.5        NA      $35.9K
                            VAX ACMS V3.1
                            VAX Rdb/VMS V4.0  
                            VAX DECforms V1.2

    VAX 9000 Model 210      VMS V5.4              69.4      $40.0K    $41.8K
                            VAX ACMS V3.1
                            VAX Rdb/VMS V4.0  
                            VAX DECforms V1.2


    VAX DSM Results
    ---------------

                                                             Old       New**   
         CPU Model          Software              TPS*      K$/TPS*   K$/TPS* 
  ----------------------------------------------------------------------------

    VAX 4000 Model 300      VMS V5.4              41.4      $24.1K    $22.9K
                            VAX DSM V6.0

    VAX 9000 Model 210      VMS V5.4             143.4      $29.3K    $30.0K
                            VAX DSM V6.0

    Note: The TPC-A Full Disclosure Reports were submitted to the TPC 
          Administrator on July 9, 1990, July 30, 1990 and October 25, 1990. 
  ----------------------------------------------------------------------------
   *  TPS refers to the tpsA-Local performance metric, in accordance with
      the TPC Benchmark A Standard for a local area network configuration.
   ** On October 25, 1990, Digital submitted "new K$/TPS" numbers to the TPC
      for the 4 systems we have tested.








                          HP's TPC Benchmark A Results
                          ----------------------------



                                                              Old       New**
         CPU Model          Software                TPS*     K$/TPS*   K$/TPS* 
  ----------------------------------------------------------------------------


    HP 3000 Series 920      MPE XL 2.1/COBOL II      4.95    $33.0K    $33.0K
                            ALLBASE/SQL 2.1  


    HP 3000 Series 922      MPE XL 2.1/COBOL II      7.7     $35.0K    $33.1K
                            ALLBASE/SQL 2.1  


    HP 3000 Series 932      MPE XL 2.1/COBOL II     13.6     $33.3K    $31.3K
                            ALLBASE/SQL 2.1  


    HP 3000 Series 949      MPE XL 2.1/COBOL II     32.2     $29.3K    $26.9K
                            ALLBASE/SQL 2.1  


    HP 3000 Series 960      MPE XL 2.1/COBOL II     38.2     $36.5K    $32.6K
                            ALLBASE/SQL 2.1  


    Note: HP results were obtained from the TPC-A Full Disclosure Reports 
          submitted to the TPC Administrator by Hewlett Packard on January 
          9, 1990 and May 31, 1990.
  ----------------------------------------------------------------------------
   *  TPS refers to the tpsA-Local performance metric, in accordance with
      the TPC Benchmark A Standard for a local area network configuration.
   ** On September 9, 1990, HP submitted "new K$/TPS" numbers to the TPC
      for the 5 systems they have tested.








                          IBM's TPC Benchmark A Results
                          -----------------------------




         CPU Model             Software                 TPS*       K$/TPS*
  ----------------------------------------------------------------------------


    IBM AS/400 Model C25       OS/400, COBOL/400         7.17      $26.37K
    (IBM 3151 terminal         Application Develop.
     configuration)            Tools


    IBM AS/400 Model C25       OS/400, COBOL/400         7.75      $29.74K
    (IBM 3476 terminal         Application Develop.
     configuration)            Tools
                                              
    IBM AS/400 Model B45       OS/400, COBOL/400         8.34      $35.87K
    (IBM 3151 terminal         Application Develop.
     configuration)            Tools


    IBM AS/400 Model B45       OS/400, COBOL/400         8.98      $36.98K
    (IBM 3476 terminal         Application Develop.
     configuration)            Tools


    IBM AS/400 Model B60       OS/400, COBOL/400        20.45      $32.00K
    (IBM 3151 terminal         Application Develop.
     configuration)            Tools


    IBM AS/400 Model B70       OS/400, COBOL/400        27.1       $31.27K
    (IBM 3151 terminal         Application Develop.
     configuration)            Tools



    Note: IBM results were obtained from the TPC-A Full Disclosure Reports
          submitted to the TPC Administrator by IBM on August 21, 1990
          and September 1990.
  ----------------------------------------------------------------------------
   *  TPS refers to the tpsA-Local performance metric, in accordance with
      the TPC Benchmark A Standard for a local area network configuration.








                     Reports in the TPC Benchmark Series
         ----------------------------------------------------------


         Number 1    TPC Benchmark A Results for the    EC-N0301-57
                     VAX 4000 Model 300 System using    
                     VAX ACMS, VAX Rdb/VMS, and
                     VAX DECforms Software 


         Number 2    TPC Benchmark A Results for the    EC-N0302-57
                     VAX 9000 Model 210 System using    
                     VAX ACMS, VAX Rdb/VMS, and
                     VAX DECforms Software 


         Number 3    TPC Benchmark A Results for the    EC-N0241-57
                     VAX 4000 Model 300 System
                     using VAX DSM V6.0 Software


         Number 4    TPC Benchmark A Results for the    EC-N0243-57
                     VAX 9000 Model 210 System
                     using VAX DSM V6.0 Software


         Number 5    TPC Benchmark A Results for the    EC-N0407-57
                     VAX 6000 Model 510 System using    
                     VAX ACMS, VAX Rdb/VMS, and
                     VAX DECforms Software 


776.1. "Take your LUMPS" by TRCA03::MCMULLEN (Ken McMullen) Tue Oct 30 1990 15:31 (13 lines)
    What's this company coming to ... we will highlight the DSM numbers
    just to give us high TPS and low $/TPS numbers. Why don't we hire the
    ORACLE marketing department?  The .0 of this note just highlights DSM
    and not Rdb. Is DSM bringing in more dollars than Rdb today? 
    
    Post MUMPS numbers in the MUMPS notesfile. Post Rdb/VMS numbers here
    please! Don't forget raw performance isn't the magic solution.
    
    This is an internal notesfile; let's get a grip on reality.
    
    regards,
    
    Ken McMullen

776.2. "V4.0 or T4.0?" by WIBBIN::NOYCE (Bill Noyce, FORTRAN/PARALLEL) Tue Oct 30 1990 17:41 (6 lines)
    This report continues to show V4.0 as slower than V3.1, on the VAX
    4000-300.  I assume no new tests were run on the VAX 4000, but that
    we're just reporting the improved K$/TPS based on new prices (and
    as a side-effect, propagating the myth that V4 is slower).  What
    "version" of V4.0 was run on the VAX 6000-510?  Have we run TPC-A
    on 6000-520 thru -560?

776.3. "time to take my lumps" by TRCA03::MCMULLEN (Ken McMullen) Tue Oct 30 1990 23:11 (17 lines)
    I take back part of my .1 tirade and humbly apologize re the lack of
    Rdb results .... finger convulsions made me miss the Rdb results. After
    thinking about it, I realized they had to be there.  But I still think the
    publishing of MUMPS numbers is almost equivalent to ORACLE's famous TP1
    numbers. Most commercial customers would not find MUMPS acceptable,
    plus the field would not be able to support it, plus....the list goes
    on. 
    The numbers may be accurate, but do they do us more harm than good?
    I believe the MUMPS numbers may get customers to ask questions about
    what is wrong with Rdb, DBMS, and RMS. Correct me if I am wrong, but I
    don't believe they are going to ask to see MUMPS.
    
    Also note .2 needs to be clarified. Steve Horn many months ago
    posted a note stating those numbers may not be accurate for ACMS/Rdb.
    But I see VAX 9000 presentations using them, and no clarification of
    which field test version of Rdb 4.0 has been used to produce the
    results. 

776.4. "Which field test level of Rdb was used?" by BIGUN::PHILLIPS (Blair Phillips, SI, Canberra, Oz) Wed Oct 31 1990 12:58 (8 lines)
    The results reported in .0 were obtained with a field test version of 
    Rdb V4.0. Anyone know which field test version was used?
    
    Also, they used the superseded MicroVAX 3100 Model 10 rather than the
    Model 10E. I guess this won't have much impact on the cost, but every
    little bit helps.
    
    	Blair

776.5. "Selectively Filter the Information" by CIMNET::BOURDEAU (Rich Bourdeau CIM Product Marketing) Wed Oct 31 1990 14:28 (24 lines)
    RE: Ken
    
    I posted the memo as is.  I did not feel as though I had the right to
    edit the original memo.  I felt that the Rdb information was useful
    to this audience.  I agree with your statement about using
    MUMPS to inflate the TPS numbers.  MUMPS is useful in niche markets,
    but is definitely not mainstream.  Just because this report contains
    MUMPS data does not mean you should show it to your customer.  The
    official glossy Attestation Reports audited by Peat Marwick are split:
    one for ACMS/Rdb and another for MUMPS.
    
    .0 contains a lot of information.  Use only what you need to prove the
    point you are trying to make.
    
    
    RE:  Rdb 4.0 VS 3.1 performance
    
    From what I understand, these are the final numbers registered with the
    TPC.  They are also the numbers in the glossy VAX 4000 and VAX 9000
    Attestation Reports.  I remember seeing Steve's memo regarding the testing
    being done with a field test version.  I also remember seeing another
    memo saying that it was a very late field test version with no significant
    changes compared to the final shipping version.  We need Steve to
    clarify this point, because there is obviously some confusion.

776.6. "mumps it is..." by RANCH::DAVIS (Riding off into the sunset..) Wed Oct 31 1990 16:26 (7 lines)
    What's the confusion, guys?  Apparently Version 4 is slower than 3.1!
    
    And Mumps is faster than our flagship database and TP monitors...I
    wonder if the DSM folks have stopped laughing yet.
    
    Score 1 marketing coup for their organization....
    

776.7. "Same tests as before" by NOVA::HORN (Steve Horn, Database Systems) Wed Oct 31 1990 17:11 (15 lines)
    To:  All concerned
    
    I don't consider FT2 as a 'very late field test' considering we are up
    to FT7.  Indeed these are the same old numbers from last time and my
    comments still stand.  The re-release of these numbers makes me wonder.
    In this situation I would simply have dropped the V4.0 numbers...since
    they were early field test numbers (and NO ONE asked ME if they would
    represent final V4.0 performance), and the V4.0 test was NOT as well tuned
    as the original V3.1 test, and V4.0 has yet to be released.
    
    In more recent tests we have seen about a 10% improvement in V4.0 at the
    high end and even performance at the low end.
    
    Steve Horn
    Rdb/VMS Product Manager

776.8. "How to control the messages?" by TRCA01::MCMULLEN (Ken McMullen) Wed Oct 31 1990 19:18 (11 lines)
    Steve,
    
    Is there no way to control this type of information?  It makes our lives
    in the field very difficult when consistent messages are not delivered
    across the different product marketing groups.  I believe it is good for
    the company to have the different product groups "philosophically" discuss
    their differences, but this should not occur in material intended for
    public consumption.
    
    
    Ken M.

776.9. "I thought we did!" by NOVA::HORN (Steve Horn, Database Systems) Wed Oct 31 1990 22:06 (11 lines)
    
    Ken,
    
    I thought we had put in place a process to prevent this from happening
    again.  I was under the mistaken impression that I was to be notified
    (as PM for one of the products involved) before any public statement
    was made.  I would suggest you send mail to the owner of the process,
    Peter TPSYS::Powell, as obviously it is broken.  The TP folks don't
    seem to think there is a problem here.
    
    Steve

776.10. "Is Gil right?" by MBALDY::LANGSTON (Rdb Sales Support Mercenary) Thu Nov 01 1990 19:49 (15 lines)
I'm very confused.  We are about to release Rdb V4.0 (aren't we?), we're up to
FT7 of V4, and we still can't release any numbers that show it is up to "10%"
faster than V3.1?

Is V4 faster or is it not?  Does Rdb Product Management have any say in which
numbers are released?  If it's true that V4 is slower, as Gil suspects, then
maybe we need to think about what we should tell our customers they're getting
for upgrading (and all the headaches associated therewith) to a presumably
better version of Rdb.

I went in to our sales literature library and threw a pile of those nice
shiny TPC reports in the trash, to keep them from being used against us.


Bruce

776.11. "One application of .5" by CLARID::SIELKER (Hermann S., TP/DB cons. svcs., EIS Europe) Fri Nov 02 1990 14:30 (60 lines)
  Re .5:

  > .0 contains a lot of information.  Use only what you need to prove the
  > point you are trying to make.

  Attached is what internal publications used (extracted from VOGON News,
  Nov. 2, 1990):

 Digital - Reports industry's best TP numbers
	{Livewire, 1-Nov-90}
   Digital has announced the best transaction processing numbers in the
 industry, according to benchmark results disclosed to the Transaction
 Processing Performance Council (TPC).  
   Benchmarks were performed on VAX 4000 Model 300 and VAX 9000 Model 210
 systems running VAX DSM software, Digital's integrated applications
 environment based on ANSI Standard MUMPS.  
   For this test, the TPC Benchmark (TM) A (TPC-A) was the testing method used.
 To report measurements using the TPC Benchmark A Standard, a vendor must
 report the number of transactions per second (TPS) under certain conditions
 and also must show the cost per TPS (K$/TPS), based on the price of the
 system.  
   The table below shows Digital's results, along with the results reported by
 Hewlett-Packard using the same test on their HP 3000 Systems running
 ALLBASE/SQL software. The figures for VAX DSM running on the VAX 4000 Model
 300 represent the lowest cost/TPS in the industry.  

   -----------------------------------------------------------------
   CPU Model                     TPS*                     K$/TPS**

   VAX 9000 Model 210           143.4	                   $29.3K
   VAX 4000 Model 300            41.4	                   $24.1K
   
   HP 3000 Series 960            38.2                      $36.5K
   HP 3000 Series 949            32.2                      $29.3K
   HP 3000 Series 922             7.7                      $35.5K

   * TPS refers to the tpsA-Local performance metric, in accordance with the
 TPC Benchmark A Standard for a local area network configuration.
  ** The K$/TPS measurement is the purchase price plus the five-year
 maintenance cost of the tested system, divided by the measured TPS rate.  
   -----------------------------------------------------------------
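
 As a rough illustration of that K$/TPS arithmetic (the figures below are
 hypothetical and are not taken from any disclosure report; DCL works in
 integers, so the TPS rate is carried scaled by 10):

 $ purchase = 550000                  ! assumed system purchase price, dollars
 $ maint_5yr = 100000                 ! assumed five-year maintenance, dollars
 $ tps_x10 = 217                      ! assumed measured rate of 21.7 TPS
 $ dollars_per_tps = ((purchase + maint_5yr) * 10) / tps_x10
 $ write sys$output "Price/performance: $''dollars_per_tps' per TPS"
 $ ! prints 29953, i.e. roughly $30K/TPS
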
   TPC-A is modeled on a simplified banking application. It requires that 85%
 of the tested transactions belong to accounts at the branch where the teller
 is working (the local area network), while the other 15% belong to other
 branches. For each transaction, the teller's terminal sends data to the system
 being tested and receives data back. The system under test must guarantee that
 all completed transactions are retained after recovery from any failure of a
 component such as the CPU, memory, or disk.  
   VAX DSM is the only MUMPS product in the industry to support the properties
 of transaction processing, as required by the TPC-A Standard. "Until a few
 years ago MUMPS was familiar primarily to those working in the health care
 field, but now we see MUMPS being used to develop commercial applications as
 well," says J. Barry Herring, manager of the DSM Product Group. "These
 benchmark results show that VAX DSM systems are capable of supporting high
 performance interactive database applications for any industry."
   Founded in 1988, the TPC is establishing standards and guidelines to measure
 performance among the various computer hardware and software vendors who
 provide transaction processing systems.  
 For more information, contact the DSM Product Group at DTN 297-2372.
                     [Courtesy of Inside Contact]

776.12. "MUMPS - The Rodney Dangerfield of languages" by DSM::CRAIG (Nice computers don't go down :-)) Fri Nov 02 1990 23:18 (26 lines)
    re: .6
    >>And Mumps is faster than our flagship database and TP monitors...I
    >>wonder if the DSM folks have stopped laughing yet.
    
    We're not laughing.  We simply want our product to be accepted on its
    own merits as a quality implementation of MUMPS.  
    
    We appreciate the fact that Digital is finally beginning to realize how
    much business MUMPS generates for DEC (several hundred million dollars,
    annually), and that within certain markets (not just healthcare,
    contrary to popular myth) it's very important to DEC.
    
    We recognize (and have no problem with) the fact that Rdb is regarded
    as a strategic product.  By the same token, we're glad that DEC is
    beginning to recognize that DSM has major customers running high-end
    production systems today, and that these customers are an important
    source of revenue for the corporation.  We also think there are
    development opportunities where DSM is a better fit for the customer's
    needs than the standard set of products.
    
    We *do* have a problem with the DEC sales rep who encounters a MUMPS
    system and reacts in knee-jerk fashion with "why don't you use Rdb?",
    rather than trying to understand why the customer selected MUMPS.  They
    don't want to migrate to Rdb (or Oracle, or Ingres, etc.), they just
    want a better MUMPS!
    

776.13. by DSM::CRAIG (Nice computers don't go down :-)) Fri Nov 02 1990 23:47 (8 lines)
    One last point, then I'll shut up...
    
    If a system is sold because of Digital software, whether it's Rdb, DBMS
    or DSM,  then Digital wins.  So why is this note in the
    RDB_VMS_COMPETITION notes file?  DSM's not in competition with Rdb, any
    more than DBMS is.
    
    						Bob

776.14. "V4 is BETTER!!!!" by OFFPLS::HODGES Tue Nov 06 1990 18:18 (17 lines)
    I can speak for one benchmark that I've worked on.  V4 is significantly
    better than V3.1 both in terms of raw CPU times and in I/Os.  Across
    a total benchmark of some 60 queries, the application ran faster
    and used fewer I/Os with V4 than with 3.1.  There were a few queries,
    probably less than 8 of the 60, which ran slower (probably because of
    the overhead of the new optimizer) but the WHOLE APPLICATION improved
    a great deal.
    
    This benchmark, however, was done AFTER the TPC-A tests (i.e., with a
    later field test version!).  What we see in TPC-A may reflect
    the early version (I believe this to be the case!) or could be related
    to the new optimizer.
    
    I believe more tests are underway.
    
    Maryann
    

776.15. "Hardware Environment" by TRCA03::MCMULLEN (Ken McMullen) Tue Nov 06 1990 19:48 (5 lines)
    Maryann,
    
    What was your hardware environment?
    
    Ken

776.16. "8700 & BIG disks" by OFFPLS::HODGES Wed Nov 07 1990 15:15 (9 lines)
    Hi Ken! 
    
    How are things?  HW was an 8700-class machine with big disks and the
    database spread over 6 of them.  I believe we had enough memory that memory
    was NOT an issue; BTW, this was a read-only, research environment
    benchmark.
    
    MAH
    

776.17. "IBM, Unisys to redo TPC-A" by AKOCOA::HAGGERTY (GIA EIS/SWS, Acton MA.) Wed Mar 13 1991 16:39 (74 lines)
                **************************************
                        COMPUTER STORAGE NEWS
                **************************************

-------------------------------------------------------------------------
             Competitive Tracking of the Storage Industry
             
Digital Storage Custom Edition           Shrewsbury Research Library
Week Ending 3/7/91                       Carole Piggford, editor
Volume 7 Issue 9                         DTN 237-3271 MEMORY::PIGGFORD

          Full copies of articles are available upon request
-------------------------------------------------------------------------

                        TABLE OF CONTENTS
.
.
.
                               IBM . . . . . . . . . . . . .   21

                       INDUSTRY REACTIONS  . . . . . . . . .   21
.
.
.
                        ******************
                               IBM
                        ******************

INDUSTRY REACTIONS

IBM and Unisys have been requested to redo their benchmarks by
the Transaction Processing Performance Council, because the firms
took advantage of low-cost storage media i.e, tape, when they
performed their transaction processing benchmarks.  A
spokesperson for the organization, Kim Shanley, said, "Up to now,
everyone used hard disks.  IBM and Unisys used tape drives, which
cost less, and, as a result, had a better price/performance
ratio."




The council is considering the transgressions "an honest mistake"
and is giving the firms 90 days to rerun tests.  Shanley said
that the council's specifications required eight hours of on-line
storage and the storage has to be randomly accessed within one
second.

Unisys is going to announce its results on its mid-range and
high-end Micro A machines on Wednesday, but the firm would not
say which computers were the target of the council's vote. 
However, Unisys' In-House Systems Design Consultant, Frank
Stephens, said that what the firm has published on its low-end Micro A
and U6000 series so far has followed the council's procedures on
data storage.

A spokesperson from IBM said that the complex TPC-A structure is
ambiguous and that the firm was not attempting to be misleading. 
IBM will reprice its results, with the disk price additions
having a minimal impact on total price - perhaps 5%.

Unisys would not make available its disputed benchmarks.  IBM's
disputed benchmarks were 32 tps and $20.4K per transaction on a
LAN for its RS/6000 Model 550.  Three other models were cited as
well. (CW,3/4/91,p7)




776.18. "Sun and TPC-A" by BAHTAT::DODD (gone to Helen's land) Fri Apr 03 1992 18:15 (12 lines)
    In DEC USER (an independent UK magazine) Sun are advertising their
    servers. The claim?
    
    "Sun's SPARCserver 690MP, running Sybase, topped the TPC-A benchmark
    league against every other UNIX or other server tested.
    With 960 terminals, processing typical on-line transactions, the
    Sun/Sybase solution still achieved over 95 transactions per second"
    
    Mail circulating at the moment says we have better TPC-A than 95.
    We seem to emphasise price performance.
    
    Andrew

776.19. "Which Digital platform?" by COOKIE::OAKEY (The Last Bugcheck - The Sequel) Fri Apr 03 1992 18:37 (12 lines)
>           <<< Note 776.18 by BAHTAT::DODD "gone to Helen's land" >>>
>                               -< Sun and TPC-A >-
>
>    Mail circulating at the moment says we have better TPC-A than 95.
>    We seem to emphasise price performance.
    
Andrew,

But the "we" figures that you're mentioning, are they on VMS or Ultrix?  In 
a quick look, I didn't see Ultrix figures that were better than Sun's but 
there are VMS tps-A figures which are...


776.20. "The SUN TPC -A & -B results" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Fri Apr 03 1992 20:32 (212 lines)
-----------------------------------------------------------------------------
                                                        The Florida SunFlash

            SPARCserver 690MP TPC-A Benchmark Results

SunFLASH Vol 39 #26					          March 1992
-----------------------------------------------------------------------------

           Database Benchmark Results: TPC-A and TPC-B
  SPARCserver 690MP Delivers Outstanding Database Performance
             and Industry-Leading Price/Performance

This message announces 3 new benchmark results:

      + TPC Benchmark A (tm) on SS690MP and SYBASE SQLServer 4.8.1
      + TPC Benchmark B (tm) on SS690MP and SYBASE SQLServer 4.8.1
      + TPC Benchmark B (tm) on SS2     and SYBASE SQLServer 4.8.1

Results of these benchmarks reinforce Sun's leadership position
in the database server market, and demonstrate for the first time
SPARCserver capabilities for high terminal connectivity.

A First For Sun
===============

This is the first time ever that Sun has published results for
the TPC Benchmark A (tm).  The price/performance result for the
SPARCserver 690MP is the BEST EVER posted by *any* vendor on the
TPC-A benchmark.

Results of this benchmark clearly demonstrate the superior price-
performance and high levels of database performance attainable on
Sun's servers when supporting 100's of user terminals in an OLTP
environment.

The SPARCserver 690MP TPC-B results are the best results to date
by Sun.  This is the first time Sun has done the TPC-B Benchmark
on a SPARCserver 690MP.

Results-at-a-Glance:
====================

A quick summary of the new TPC results:

      +-------------------+-----------+-------------+--------------+
      | System		  | Benchmark |     TPS	    |    $/TPS     |
      +-------------------+-----------+-------------+--------------+
      | SPARCserver 690MP |   TPC-A   |  95.4 tpsA  |  $8,836/tpsA |
      |                   |           |             |              |
      | SPARCserver 690MP |   TPC-B   | 134.9 tpsB  |  $2,779/tpsB |
      | SPARCserver 2	  |   TPC-B   |  62.1 tpsB  |  $2,294/tpsB |
      +-------------------+-----------+-------------+--------------+

The rest of this message contains complete configuration
details and competitive comparisons.

    TPC Benchmark A  & B (tm) on SPARCserver 690 MP
	& TPC Benchmark B (tm) on SPARCserver 2
		with SYBASE SQLServer 4.8
    ===============================================

Summary of results TPC-A:
=========================
				    TPS
Company/System			(tpsA-local*)	$/TPS
--------------			-------------	-----
Sun SPARCserver 690MP		95.41 tps	$8,836/tpsA

    (*tpsA-local : TPC-A (tm) transactions per second.  TPC-A can
		   be run in either Wide or Local Area Network)

Competitive TPC-A Results:
==========================

The SPARCserver 690MP delivers much higher performance, connect-
ivity, and better price/performance than other servers in its class:

Company/System			TPS (tpsA)	$/tpsA
---------------			---------	------
HP 9000 857S			60.1 tps	$16,459/tps
HP 9000 877S			74.9 tps	$15,813/tps
IBM RS6000/950			48   tps	$24,069/tps
IBM RS6000/560			72   tps	$12,671/tps
DEC VAX 6000-610		83.6 tps	$12,922/tps

The SPARCserver 690MP delivers much better price/performance than
even the highest-performance database servers:

Highest Performance Systems Per Company (TPC-A @ 2/92)
======================================================
Company/System			TPS (tpsA)	$/tpsA
---------------			----------	------
Unisys A16-61E			272.5 tps**	$43,190/tps
Sequent Symmetry S2000/750	214.5 tps	$18,507/tps
HP 9000 Series 870S/400		173.2 tps	$14,820/tps
Digital VAX 9000-210		143.4 tps	$24,191/tps
Sun SPARCserver 690MP		 95.4 tps	$ 8,836/tps***
IBM RS6000/560			 72   tps	$12,671/tps

	( **Highest tpsA posted to date)
	(***Best price/performance posted to date)

The SPARCserver 690MP delivers much higher performance than
the closest price/performance competitors:

Best Price/Performance Systems Per Company (TPC-A @ 2/92)
=========================================================
Company/System			TPS (tpsA)	$/tpsA
---------------			----------	-------
Sun SPARCserver 690MP		95.4 tps	$ 8,836/tps
DEC MicroVAX 3100/80		27.9 tps	$10,166/tps
Bull DPX/2 380 (2 cpu)		40.95 tps	$10,955/tps
DG Aviion 5225/AV4320		50.9 tps	$11,496/tps
HP 9000 Series 817S		51.2 tps	$11,830/tps
IBM RS6000/560			61 tps		$14,166/tps

About the benchmark:
====================
TPC Benchmark A exercises the system components necessary to
perform tasks associated with that class of online transaction
processing (OLTP) environments emphasizing update-intensive
database services.  Such environments are characterized by:

	* Multiple on-line terminal sessions
	* Significant disk input/output
	* Moderate system and application execution time
	* Transaction integrity

The TPC Benchmark A workload is intended to reflect an OLTP
application in which transactions originate at terminals.  Each
terminal contributes no more than 1/10th of a tpsA.  Thus a
performance metric of 95+ tps requires that the workload be
equivalent to that generated by at least 960 terminals.
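
A quick back-of-the-envelope check of that terminal count (illustrative only,
not part of the Sun disclosure; DCL sketch with the tpsA figure scaled by 100
because DCL arithmetic is integer-only):

$ tpsa_x100 = 9541                              ! claimed 95.41 tpsA, scaled by 100
$ min_terminals = (tpsa_x100 * 10 + 99) / 100   ! ceiling of 10 * 95.41
$ write sys$output "Minimum terminals: ''min_terminals'"
$ ! prints 955; the tested configuration lists 960 terminals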

The metrics used in TPC Benchmark A are throughput as measured
in transactions per second(tps), subject to a response time
constraint; and the associated price-per-tps.

TPC Benchmark A can be run in a wide area or local area network
configuration.  The throughput metrics are "tpsA-Local" and
"tpsA-Wide" respectively.  The wide area and local area
throughput and price-performance metrics are different and
cannot be compared.

For more details about this benchmark please refer to the
"TPC Benchmark A (tm) Full Disclosure Report Using Sun
Microsystems SPARCserver 690MP and SYBASE SQLServer 4.8"
available from your local Sun representative.

System Configuration:
=====================
	SPARCserver 690MP Model 140 with 4 Processors, 128MB memory
	SPARCstation 2, 96MB memory
	SPARCstation ELC, 64MB memory
	30 Emulex P4032 ethernet terminal concentrators,
	960 Liberty Freedom 1 terminals

Software Configuration:
=======================
	SunOS 4.1.2 / SunDBE 1.2
	SYBASE SQL Server 4.8.1

Results of TPC Benchmark B (tm) on SPARCserver 690 MP
=====================================================

Company/System			TPS (tpsB*)	$/TPS
---------------			----------	------
Sun SPARCserver 690MP		134.9 tps	$2,779/tpsB

	(*tpsB : TPC Benchmark B transactions per second )

System Configuration:
=====================
	SPARCserver 690MP Model 140 with 4 Processors, 128MB memory
	SPARCstation ELC, 8MB memory

Software Configuration:
=======================
	SunOS 4.1.2 / SunDBE 1.2
	SYBASE SQL Server 4.8

Results of TPC Benchmark B (tm) on SPARCserver 2
================================================

Company/System			TPS (tpsB)	$/TPS
---------------			----------	------
Sun SPARCserver 2		62.11 tps	$2,294/tpsB

System Configuration:
=====================
	SPARCserver 2, 32MB memory
	SPARCstation ELC, 8MB memory

Software Configuration:
=======================
	SunOS 4.1.2 / SunDBE 1.2
	SYBASE SQL Server 4.8

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
For information send mail to [email protected].
Subscription requests should be sent to [email protected].
Archives are on solar.nova.edu, paris.cs.miami.edu, uunet.uu.net,
src.doc.ic.ac.uk and ftp.adelaide.edu.au

All prices, availability, and other statements relating to Sun or third
party products are valid in the U.S. only. Please contact your local
Sales Representative for details of pricing and product availability in
your region. Descriptions of, or references to products or publications
within SunFlash does not imply an endorsement of that product or
publication by Sun Microsystems.

John McLaughlin, SunFlash editor, [email protected]. (305) 776-7770.

776.21. "Leapfrog" by COOKIE::BERENSON (Lex mala, lex nulla) Mon Apr 06 1992 19:59 (12 lines)
This is, as you'll note, a game of "leapfrog".  Our TPC-A numbers stood
as the "best" for a time, then others challenged.  Sun/Sybase appears to
have been able to claim leadership as of public February results.
However, we've submitted new results which beat the Sun/Sybase results.
I don't have them all handy, except that I know our 3100-80 price/performance
is now $7687.00, becoming the first TPC-A system to drop below $8,000 per
TPS (I think that was just a matter of configuration repricing).

So, keep an eye out for publication of more VAX Rdb/VMS numbers that take
a leadership position.

Hal

776.22. "More pricing creativity" by TPSYS::SHAH (Amitabh Shah - Just say NO to decaf.) Wed Apr 08 1992 01:08 (9 lines)
I should add to Hal's note that most of the recent price/performance records
have been due to creativity in pricing rather than real TP performance
improvements (like our October announcement with Rdb 4.1).

In the current TPC-A price structure, the terminals and terminal servers account
for anywhere from 30% to 70% of the price/performance figure!! This is due to the
fact that TPC-A requires too many terminals to be priced with the system.
Hopefully TPC-C will correct this problem to a certain extent (for the same
platform, TPC-C will require 70-80% fewer terminals than TPC-A).
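
As a crude illustration of how heavily the terminals weigh in the metric (all
numbers below are hypothetical and are not taken from any published result):

$ ! Hypothetical TPC-A pricing breakdown (DCL integer arithmetic)
$ tpsa = 100                              ! claimed throughput
$ terminals = tpsa * 10                   ! TPC-A minimum of 10 terminals per tpsA
$ terminal_cost = terminals * 400         ! assumed $400 per terminal
$ port_cost = terminals * 100             ! assumed $100 per terminal server port
$ system_cost = 1000000                   ! assumed host, disks, and 5-year maintenance
$ total = terminal_cost + port_cost + system_cost
$ share = ((terminal_cost + port_cost) * 100) / total
$ write sys$output "Terminal share of priced configuration: ''share'%"
$ ! prints 33; cheaper terminals move K$/TPS far more than database tuning does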