
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

130.0. "DB2 and simple transactions!" by PANIC::ENGLAND (Ken England) Tue May 10 1988 12:12

    Apologies if this is discussed elsewhere in the notes file and I
    missed it! 
    
    I read an article recently in 'Computing' about DB2. IBM quote
    in excess of 400 simple transactions per second and a lower figure
    for complex transactions per second. Does anyone understand this
    metric? What do they mean by simple and complex?
    
    Ken
130.1. "But that makes DB/CR VERY complex !" by WARSAW::JACKSON (Tony Jackson (WARDER::JACKSONT)) Tue May 10 1988 13:02
Bonjour Ken,

I too saw that article. I have grave doubts about the figures as I have
previously heard Debit/Credit style figures of around 47 TPS for DB2.
That makes Debit/Credit somewhere around 5 times as complex as the quoted
'complex' transaction. I think that gives us a clue as to what they
mean by 'complex'. I cannot conceive of anything as simple as the implied
'simple' transaction.

To look at it another way, if they had really managed to improve performance
by several hundred percent, I think we would have heard more than a passing
comment from IBM marketing.

Yours cynically,
Tony J.  
                                              
130.2. "Too Simple" by QUILL::BOOTH (A Career in MISunderstanding) Thu May 12 1988 20:35
    The simple transaction was defined as a "look-up" transaction. That
    sounds an awful lot like a read transaction to me.
    
    ---- Michael Booth
130.3. "Any more info ?" by PANIC::STOTTOR (Chris Stottor, City of London SWAS) Fri May 13 1988 15:18
    
    Do we know anything about the size of this mythical database ? Or
    do we know whether all 400 transactions per second were dealing with
    all the records in the database, or just a subset of them, so that
    399 of them might conveniently not need to touch a disc ?
    
    These figures may be meaningless, but they do seem to stick in
    potential customers' minds unfortunately...
    
    Why is it that customers (and maybe the vendors too) are obsessed
    with the TPS metric, when there are so many other things that make
    a product good/bad/suitable ? Are they (we) the sort of people who would
    buy a Skoda if the manufacturers said it did 150 mph ??

    Regards.

    
130.4. "Small print" by PANIC::STOTTOR (Chris Stottor, City of London SWAS) Fri May 13 1988 15:20
    
    (Of course the small print on the Skoda's figures would say that
    this was achieved driving vertically down a cliff with a 149 mph
    tail-wind...)
    
130.5. "footnote" by SNOC01::ANDERSONK (The wino and I know ...) Tue May 17 1988 08:59
    Or,
    
    With a footnote that said that all speeds mentioned were actually
    only peak values achieved by any single component (ie it was actually
    only one wheel that was doing 150, the rest of the car was doing
    55). A rolling wheel is fast, but do you want to buy a wheel, or
    a 3 wheeled car?
130.6. "thanks guys!" by 42208::ENGLAND (Ken England) Thu May 19 1988 12:26
    
    Thanks for the replies, guys - the ones on the Skoda were particularly
    useful in a high performance environment!
    
    Ken
    
    PS: Isn't a Skoda the FASTEST car on TWO wheels..
130.7. "some official IBM text might help" by CREDIT::FOLDEVI Thu May 26 1988 18:55
    
    Some text from the "Programming Announcement, IBM DATABASE 2 (DB2)
    Version 2" from IBM:
    
    On "Performance Enhancements":
        
    "Transaction: ... Laboratory performance measurements on a 3090-600E
    (detailed below) using DB2 Version 2 have demonstrated:
    
    . A High Volume Transaction Processing workload using DB2 with IMS/VS
    Fast Path as the Data Communication front end:
    
     - 438 transactions per second at 85.1% CPU utilization (ESA) with
    an average transit time under 1 second for a credit authorization
    (lost/stolen card check)
    
     - 300 transactions per second at 84.1% CPU utilization (ESA) with
    an average transit time under 1 second for debit processing.
    
    . The standard DB2 workload with IMS/VS as the Data Communication
    front end:
    
     - 186 tps at 89.8% CPU utilization (ESA) with an average transit
    time under 1 second using Wait For Input transactions.
    
     - The DB2 Version 2 (ESA) capability is a measured improvement
    of 51% over DB2 1.3 (XA), which is 123 tps at 91.4% CPU utilization.
    
    The MVS/ESA performance numbers represent up to a 13% capacity
    improvement over DB2 Version 2 with MVS/XA. The most significant
    improvements will be seen in High Volume Transaction environments."
    
    On "Measurement Environment":
    
    "Transaction performance measurements were executed on a 3090-600E
    configured with 256MB of Real Storage and 256MB Expanded Storage."
    "All measurements were performed with sufficient I/O devices to
    remove I/O as a constraint. Transit time is defined to be the time
    from when the message enters the input queue through the time it
    leaves the output queue, including processing time. The measurements
    were performed using MVS/XA 2.2 (or MVS/ESA Version 3 as indicated),
    DFP 2.3, IMS 2.2, ACF/VTAM 3.1.1 and DB2 Version 2 with IRLM 1.5
    or DB2 1.3 with IRLM 1.4.  Projected DB2 improvements are based
    on a path length model. The model assumes approximately 1 million
    records (100 bytes in length) with three or less indexes."
    
    I think that's enough for now.  Hope this tells you something.
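
    [Editorial aside: the one ratio the announcement lets us verify directly
    does work out as quoted. A quick sketch - the utilisation-normalised
    figures at the end are my own extrapolation, not IBM's:]

    ```python
    # Check the "51% measured improvement" claim: DB2 V2 (ESA) vs DB2 1.3 (XA)
    # on the standard workload quoted above.
    v2_tps = 186.0   # DB2 Version 2 (ESA), 89.8% CPU utilisation
    v13_tps = 123.0  # DB2 1.3 (XA), 91.4% CPU utilisation

    improvement = (v2_tps - v13_tps) / v13_tps
    print(f"measured improvement: {improvement:.1%}")  # 51.2%, matching the quoted 51%

    # Normalising each result to full CPU saturation (tps / utilisation)
    # gives a rough per-CPU-cycle comparison:
    for name, tps, util in [("DB2 V2 (ESA)", 186.0, 0.898),
                            ("DB2 1.3 (XA)", 123.0, 0.914)]:
        print(f"{name}: ~{tps / util:.0f} tps extrapolated to 100% CPU")
    ```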
     
    
130.8. "IBM => Another vendor publishes meaningless numbers" by BANZAI::BERENSON (Rdb/VMS - Number ONE on VAX) Fri May 27 1988 14:59
Let's take some of this in perspective.  The database was only 100MB and
was run on a 512MB machine.  Thus even assuming 100% overhead for free
space and indices the entire database could fit in memory and, for the
simple transaction, no I/O at all was required. Even for the more
complex transactions, the only I/O required were database writes and
journaling. 
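
[Editorial aside: Hal's "fits in memory" point is simple arithmetic - the
100MB figure comes from the announcement's model (about 1 million records
of 100 bytes), and the 100% overhead is his stated assumption:]

```python
# Does the benchmark database fit entirely in the 3090-600E's memory?
db_mb = 1_000_000 * 100 / 1e6   # ~1M records x 100 bytes = 100 MB
overhead = 1.0                  # assume 100% overhead for free space + indices
memory_mb = 256 + 256           # 256MB real storage + 256MB expanded storage

footprint_mb = db_mb * (1 + overhead)
print(footprint_mb, "MB needed vs", memory_mb, "MB available")
print(footprint_mb <= memory_mb)  # True: the whole database can be cached,
                                  # so lookups need no disc reads at all
```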

Also remember that we are talking about a processor complex up in the 75
MIP range (if my memory isn't failing me).  IBM is just trying to
demonstrate the capability of their database system vis-a-vis the CPU.

Hal

PS: A DebitCredit benchmark at 150 TPS would require 1.5GB for user data,
which overhead would push into the 3GB range.  Thus, one could expect
very different performance than IBM reports for "Debit Transactions" if
they used the real DebitCredit benchmark.
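
[Editorial aside: Hal's sizing follows from the usual DebitCredit scaling
rule - 100,000 account records of 100 bytes each per TPS of claimed
throughput. Taking that rule as given, a sketch of the arithmetic:]

```python
# DebitCredit database sizing under the standard scaling rule (assumed):
# the account table must grow with the claimed transaction rate, so a
# vendor cannot shrink the database to make it fit in memory.
ACCOUNTS_PER_TPS = 100_000
RECORD_BYTES = 100

def debitcredit_user_data_gb(tps):
    """Raw account-table size, in GB, required for a run at `tps`."""
    return tps * ACCOUNTS_PER_TPS * RECORD_BYTES / 1e9

tps = 150
user_gb = debitcredit_user_data_gb(tps)   # 1.5 GB of raw user data
total_gb = user_gb * 2                    # ~100% overhead for free space + indices
print(f"{tps} TPS -> {user_gb:.1f} GB user data, ~{total_gb:.0f} GB with overhead")
```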