| In this week's Digital Review (February 5, 1990) is a letter from
an (evidently) loyal customer, Gary V. Steeves, President, GWA
Information Systems Inc., of Waltham, attacking DR's reporting of the
performance, capacity, and cost-per-user claimed by HP for HP's 870S/200.
I get the feeling, reading the letter, that Steeves loves his VAX.
He "supports 70 users on a MicroVAX 3900" with an RMS/FMS application,
"which is highly CPU and I/O intensive." He estimates that on a
VAX 9000 210 he could support 800 users, and that even at 640 users it
would be less expensive than the 870S/200.
In reply, HP's Douglas J. Gibson, Product Line Manager, says "[t]he
number of users...was the 'typical' number of users..."
My point in entering this here is that we need to come out in support
of TPC-A, as Gibson goes on to say, "[b]ecause cross-vendor comparisons
are often difficult, HP is fully supportive of a common standard in
industry benchmarks such as TPC-A (Transaction Processing
Council-Benchmark A)."
And later, "We encourage [DEC, IBM and others] to begin using the
TPC-A benchmarks, so customers will be able to make a more accurate
systems comparison."
The new_863_script.txt says we are founding members of the TPC and
our "adherence to standards is more than lip-service. It is a
commitment to both drive and follow evolving industry standards."
What is the current status of that "drive and follow"-ing?
Can we have the answers to Paul's excellent questions?
|
| From: TPS::KOHLER "WALT KOHLER, TPSG/PAMS/ECPP, TAY1-2, TAY1, (508) 952-4428 09-Feb-1990 1721" 9-FEB-1990 17:42:35.94
To: TROA01::NAISH
CC: KOHLER
Subj: RE: Help on TPC-A
Paul,
Here are the answers to your questions. You are welcome to put them in a
notes file. I have also attached a copy of a Sales Update article that I
wrote that describes the differences between TPC-A and DebitCredit. This
article will appear in the DECtp II Sales Update Special Issue. You are
welcome to put this in the notes file as well.
Walt Kohler
===========================================================================
Questions/Answers
1. Has the TPC-A Benchmark been published? If so, where can a
copy be obtained?
The "TPC Benchmark (tm) A Standard Specification" was published by
the Transaction Processig Performance Council (TPC) on 10 November
1989. It is not available in electronic form at this time, but a
hardcopy version can be obtained by sending an e-mail request to
Hwan Shen (TPS::SHEN).
2. Do we have a time frame for when we will be reporting TPC-A
performance?
We expect to publish our first TPC Benchmark (tm) A results in
March 1990. TPC-A will replace DebitCredit for TP product
positioning. You can obtain a copy of DEC's TPC-A results and full
disclosure reports by sending a request to Hwan Shen (TPS::SHEN).
==============================================================================
NEW TPC BENCHMARK, TPC-A VS. DIGITAL'S DEBIT/CREDIT (Revised 1/23/90)
Walt Kohler
DTN 227-4428
TAY1/2
-------------------------------------------------------------------------
|                                                                       |
|  o TPC-A is the new standard for TP performance and price/performance |
|                                                                       |
|  o It is based on end-to-end system price and performance             |
|                                                                       |
|  o It is a strict standard with a full disclosure report requirement  |
|                                                                       |
-------------------------------------------------------------------------
This article briefly describes the new benchmark standard developed by
the Transaction Processing Performance Council and adopted by Digital to
replace its Debit/Credit benchmarking metrics. It includes a brief
comparison of the two standards and describes the anticipated effect the
change will have on Digital and its competitors.
What is TPC-A?
==============
TPC is the Transaction Processing Performance Council, founded in 1988 by
Omri Serlin to develop transaction processing benchmark standards. Digital
is one of 34 members of TPC.
TPC Benchmark (tm) A (TPC-A) is the first TPC standard. It was approved in
November 1989.
o It is derived from the Debit/Credit benchmark, as described in
DATAMATION, April 1985, and uses the same four database relations:
- Account
- Branch
- Teller
- History
o It uses a single basic transaction type (sketched in code below):
- read 100 byte message from terminal
- BEGIN_TRANSACTION
update Account, Branch, Teller
write History
- END_TRANSACTION
- write 200 byte message to terminal
o It uses two metrics:
- Performance: TPS (transactions per second)
- Price/Performance: K$/TPS (5-year HW/SW and maintenance cost per TPS)
o Two variations are defined:
- TPC-A local (transaction throughput specified using the
unit tpsA-Local), when a LAN is used to connect terminals.
- TPC-A wide (transaction throughput specified using the
unit tpsA-Wide), when a WAN is used to connect terminals.
o It stresses many parts of a TP system
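To make the transaction profile concrete, here is a minimal sketch of one
TPC-A style transaction in Python, with an in-memory SQLite database standing
in for the real TP system. Only the four relations, the balance updates, the
History write, and the returned Account balance come from the profile above;
the schema details, the tiny record population, and the random key selection
are illustrative assumptions, and the 100/200-byte terminal messages are not
modeled.

    import random
    import sqlite3

    # Toy Account/Branch/Teller/History schema (column layout is illustrative;
    # TPC-A fixes record sizes, not exact columns).
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE account (id INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
        CREATE TABLE teller  (id INTEGER PRIMARY KEY, branch_id INTEGER, balance INTEGER);
        CREATE TABLE branch  (id INTEGER PRIMARY KEY, balance INTEGER);
        CREATE TABLE history (account_id INTEGER, teller_id INTEGER,
                              branch_id INTEGER, amount INTEGER, tstamp TEXT);
    """)

    # Tiny population for illustration only; a real TPC-A configuration loads
    # 100,000 accounts, 10 tellers, and 1 branch per TPS of claimed throughput.
    db.execute("INSERT INTO branch VALUES (1, 0)")
    db.executemany("INSERT INTO teller VALUES (?, 1, 0)",
                   [(t,) for t in range(1, 11)])
    db.executemany("INSERT INTO account VALUES (?, 1, 0)",
                   [(a,) for a in range(1, 1001)])

    def tpca_transaction(account_id, teller_id, branch_id, amount):
        # BEGIN_TRANSACTION ... END_TRANSACTION: update A, T, B, write History,
        # and return the new Account balance (the step DebitCredit did not
        # require).
        with db:
            db.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                       (amount, account_id))
            db.execute("UPDATE teller SET balance = balance + ? WHERE id = ?",
                       (amount, teller_id))
            db.execute("UPDATE branch SET balance = balance + ? WHERE id = ?",
                       (amount, branch_id))
            db.execute("INSERT INTO history VALUES (?, ?, ?, ?, datetime('now'))",
                       (account_id, teller_id, branch_id, amount))
            (new_balance,) = db.execute("SELECT balance FROM account WHERE id = ?",
                                        (account_id,)).fetchone()
        return new_balance   # returned to the terminal in the 200-byte reply

    # One transaction against a randomly chosen account and teller.
    print(tpca_transaction(random.randint(1, 1000), random.randint(1, 10), 1, 100))

In a real benchmark run this routine would be driven from a Remote Terminal
Emulator, which is also where response time is measured (see the comparison
table below).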
Why did we switch?
==================
Digital has decided to adopt the TPC-A benchmark standard, replacing the
Debit/Credit benchmark, for the following reasons:
o The original Debit/Credit specification was too loose. It led
to wide variations in reported TPS and K$/TPS because of wide
variations in interpretation by different vendors. Thus, it was
difficult to compare the performance of vendors' systems based on
reported Debit/Credit results.
o The strict TPC-A benchmark standard, with its full disclosure
requirement, should lead to more comparable and reliable results.
o Digital encourages and supports industry standards.
What are the differences between Digital's Debit/Credit and TPC-A?
===================================================================
The following table summarizes the differences and similarities between
Digital's implementation of the DebitCredit benchmark and the new TPC-A
benchmark standard. A small worked sizing example follows the table.
Issue                Digital's DebitCredit   TPC Benchmark A           Impact
==============================================================================
Transaction Profile  Update A,T,B balance;   Update A,T,B balance;     -TPS
                     Write H with timestamp  Write H with timestamp;
                                             Return new A balance
------------------------------------------------------------------------------
Terminal I/O         100 bytes/200 bytes     100 bytes/200 bytes       None
------------------------------------------------------------------------------
ACID Properties      Verification not        Verification required     +Test
                     required                                           Time
------------------------------------------------------------------------------
Logical Database     ATB 100 bytes/record    ATB 100 bytes/record      None
Design/Sizing        H 50 bytes/record       H 50 bytes/record
------------------------------------------------------------------------------
Scaling Rules        A (Account) 100,000     A (Account) 100,000       None
(records per TPS)    T (Teller) 100          T (Teller) 10
                     B (Branch) 10           B (Branch) 1
(terminals per TPS)  Terminals 100           Terminals 10              +TPS
------------------------------------------------------------------------------
Distribution and     85/15 Rule;             85/15 Rule;               None
Partitioning         Visible data            Visible horizontal
                     partitioning allowed    partitioning allowed;
                                             vertical partitioning
                                             not allowed.
------------------------------------------------------------------------------
Response Time        95% under 1 second;     90% under 2 seconds;      +TPS
Constraint and
Measurement          Measured at Front End   Measured at Remote        -TPS
                     Emulator                Terminal Emulator
------------------------------------------------------------------------------
Duration of Test     Sustainable, steady-    Sustainable, steady-      None
                     state, reproducible     state, reproducible
                     performance             performance; 15 minute
                                             minimum test time.
------------------------------------------------------------------------------
SUT, Driver, and     SUT is equipment in     SUT is all equipment,     +K$
Communications       machine room            including Front End
                                             (FE) systems, terminals,
                                             and communications
                     Driver emulates Front   Driver emulates           -TPS
                     End systems (FEE)       terminals (RTE)
                     Test in LAN             Test LAN (TPC-A local)    +Test
                                             or WAN (TPC-A wide)        Time
------------------------------------------------------------------------------
Price per TPS        Price HW/SW components  Price all HW/SW           +K$
                     in the machine room     components (except
                     (communications FE      communications lines)
                     but not terminal FE     required in a real
                     or terminals) and       configuration and 5
                     5 year HW/SW maint.     year HW/SW maintenance.
                                             SW development licenses.
                     Price storage for 90    Price storage for 90      +K$
                     day History data        day History data and
                                             access overhead
                     Price storage for       Price storage for         +K$
                     recovery log for        recovery log for
                     1 hour test             8 hours of operation
------------------------------------------------------------------------------
Full Disclosure      Conformance checklist   Conformance checklist,    +Test
                     and line by line        results of ACID tests,     Time
                     pricing                 performance curves,
                                             program listings,
                                             settings of all tunable
                                             parameters, driver
                                             scripts, line by line
                                             pricing, etc.
------------------------------------------------------------------------------
Audit                Internal and External   Comprehensive external    +Audit
                     audits.                 audit recommended.         Time
==============================================================================
_____________________________________________________________
| +TPS       | TPS performance will be increased            |
| -TPS       | TPS performance will be decreased            |
| +K$        | System price will be increased               |
| +K$/TPS    | Price/performance will be increased          |
| +Test Time | Time to carry out the benchmark will increase|
| +Audit Time| Time to externally audit the benchmark will  |
|            | increase                                     |
-------------------------------------------------------------
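To make the Scaling Rules row concrete, here is a small worked sizing example
in Python. The per-TPS factors come from the table above; the 100 TPS rating
is only a sample figure, not a measured or claimed result.

    # Minimum database population and terminal count for a claimed TPC-A rating.
    def tpca_sizing(claimed_tps):
        return {
            "accounts":  100000 * claimed_tps,
            "tellers":       10 * claimed_tps,
            "branches":       1 * claimed_tps,
            "terminals":     10 * claimed_tps,
        }

    # A system claiming 100 tpsA-Local must be configured (and priced) with
    # 10,000,000 account records, 1,000 tellers, 100 branches, and 1,000
    # terminals; under Digital's DebitCredit rules the same rating required
    # ten times as many tellers, branches, and terminals.
    print(tpca_sizing(100))

All of this configured equipment, plus 90 days of History storage and 5 years
of maintenance, feeds the K$/TPS price/performance metric.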
What is the expected impact on TPS performance? Price/Performance?
==================================================================
On Digital
- TPS is expected to increase 5% to 25% on a specific hardware/
software platform due to the relaxed response time constraint
(90% under 2 seconds). However, response time is now measured at
the terminal and includes terminal to system communication time
and front end processor time. Low-end systems will see the
most improvement.
- Price (K$) is expected to increase 25% to 35% due to the new
definition of the System Under Test. The entire end-to-end
system must now be priced, including terminals, communications,
and front end processors.
- Price/Performance (K$/TPS) is expected to increase between 5% and
30%, since the increase in price will be greater than the
increase in performance; a short worked example follows.
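To illustrate the arithmetic behind these estimates, take a sample point
inside each range (the 15% and 30% figures below are illustrative, not
measured results):

    # If TPS rises 15% and total system price rises 30% on the same platform,
    # the K$/TPS metric rises by roughly 13%.
    tps_gain   = 1.15   # relaxed response-time constraint
    price_gain = 1.30   # end-to-end SUT must now be priced
    print(f"K$/TPS change: {price_gain / tps_gain - 1:+.0%}")   # prints +13%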
On our competition
- TPC-A is not expected to put Digital in a less favorable
competitive position.
- Some techniques believed to have been used by our competition to
achieve high performance are no longer allowed, e.g., relative
record number addressing and avoidance of database checkpointing
during the measurement interval.
|