Title:                   DEC Rdb against the World
Moderator:               HERON::GODFRIND
Created:                 Fri Jun 12 1987
Last Modified:           Thu Feb 23 1995
Last Successful Update:  Fri Jun 06 1997
Number of topics:        1348
Total number of notes:   5438
813.0. "Oracle TPC-B Results?" by SNOC02::BRIFFA (Mark Briffa - EIS Sydney) Fri Nov 30 1990 02:35
Hi,
This document came to Digital via the Internet and has apparently
been distributed randomly. Any comments would be appreciated!
All the DEC and Internet addresses have been removed.
It seems to be incomplete with regard to TPC-B requirements.
Thanks,
Mark
ORACLE CORPORATION
DEC Products Division
+--------------------------+
| |
| VAX 6000-500 benchmark |
| Performance Report |
| |
+--------------------------+
November 16, 1990
1. SUMMARY
----------
The Product Line Marketing Engineering group of Oracle's DEC Products
Division recently completed a preliminary TPC-B benchmark of the new VAX
6000 Model 500 at Digital Equipment Corporation in Boxborough MA. The
benchmark was conducted with the support and sponsorship of the MSB System
Performance Analysis Group, MSB Product Engineering, and MSB Marketing.
As the results below indicate, the benchmark was a success:
* ORACLE delivered a RECORD 168 TPS on the six-processor VAX 6000-560
(VAX 6560), more than doubling the performance turned in by its
predecessor, the VAX 6000-460.
* ORACLE and the VAX 6000-560 push price/performance to a new low of
$14.5K / TPS, cutting in half the cost per transaction found on
the VAX 6000-460 ($28K / TPS).
* Results on one-, two- and four-processor configurations demonstrate
ORACLE's scalability on Digital's VAX SMP architecture.
Highlights:
                             ---- MEMORY ----
 CPU      TPS    DB SIZE     TOTAL    % UTIL    K$/TPS
 ----    ------  --------    ------   ------    ------
 6560    168.2   2207 MB     256MB    42.2%      14.5
 6540    125.5   1597 MB     128MB    79.8%      TBD
 6520     72.5    987 MB     256MB    30.0%      TBD
 6510     38.8    498 MB     256MB    28.4%      TBD
The sections that follow describe the benchmark, discuss our performance
results in greater detail, provide price/performance information, and
identify technical issues that require follow up with Digital.
2. BENCHMARK DESCRIPTION
------------------------
Results reported here are based on a transaction run against ORACLE Version
6.0.30.5, VMS Version 5.4, and the VAX 6000-500 hardware configuration
described in the price-performance section at the end of this document.
The transaction tested complied with all of the requirements of the TPC-B
benchmark standard ("TPC BENCHMARK(tm) B", as published by the Transaction
Processing Performance Council), including:
* Transaction profile (clause 1). The benchmark transaction updated
branch, teller, and account balances, inserted a history record,
and returned the updated account balance with a select statement.
* Logical database design (clause 3). The benchmark database met all
record layout requirements and all requirements for internally
represented record size.
* Scaling rules (clause 4). The benchmark database complied with the
1:10:100,000 scaling ratios mandated by the TPC-B standard. The TPS rates
reported here do not exceed the TPS rate for which the database was
configured.
* Distribution, partitioning, and transaction generation (clause 5). All
driver processes submitted transactions against the entire range of
branches, tellers, and accounts and observed the 85% / 15% split between
local and remote branches. Teller ID keys for record retrieval were
generated at random (in fact, we had to work around a VAX C bug to meet
this requirement).
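The transaction profile in clause 1 above can be sketched in a few lines of code. The following is a minimal stand-in using SQLite, with assumed table and column names (the actual benchmark ran against ORACLE V6 on VMS, not SQLite):

```python
import sqlite3

# In-memory stand-in for the benchmark database; the schema below is
# an assumption for illustration, not ORACLE's actual layout.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE branch  (branch_id INTEGER PRIMARY KEY, balance INTEGER);
    CREATE TABLE teller  (teller_id INTEGER PRIMARY KEY, branch_id INTEGER,
                          balance INTEGER);
    CREATE TABLE account (account_id INTEGER PRIMARY KEY, branch_id INTEGER,
                          balance INTEGER);
    CREATE TABLE history (account_id INTEGER, teller_id INTEGER,
                          branch_id INTEGER, amount INTEGER, ts TEXT);
    INSERT INTO branch  VALUES (1, 0);
    INSERT INTO teller  VALUES (1, 1, 0);
    INSERT INTO account VALUES (1, 1, 0);
""")

def tpcb_transaction(account_id, teller_id, branch_id, amount):
    """One TPC-B style transaction: three balance updates, one history
    insert, and a select returning the updated account balance."""
    cur = db.cursor()
    cur.execute("UPDATE account SET balance = balance + ? WHERE account_id = ?",
                (amount, account_id))
    cur.execute("UPDATE teller SET balance = balance + ? WHERE teller_id = ?",
                (amount, teller_id))
    cur.execute("UPDATE branch SET balance = balance + ? WHERE branch_id = ?",
                (amount, branch_id))
    cur.execute("INSERT INTO history VALUES (?, ?, ?, ?, datetime('now'))",
                (account_id, teller_id, branch_id, amount))
    cur.execute("SELECT balance FROM account WHERE account_id = ?",
                (account_id,))
    balance = cur.fetchone()[0]
    db.commit()
    return balance

print(tpcb_transaction(1, 1, 1, 100))
```

In the real benchmark the account, teller, and branch IDs are drawn at random per clause 5, with the 85/15 local/remote split applied when choosing the branch.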
There were two exceptions to the standard as follows:
* Test duration requirements (clause 7.2). The reported test period
did not include a checkpoint.
* Redo log durability requirements (clause 2.5.3.1). Redo logs were not
configured as shadow sets.
These issues were not tested due to severe time constraints while testing in
Boxborough MA. Despite these two exceptions, we are confident that the results
reported here are very reliable and could be reproduced in an audited TPC-B
test. Our testing methodology observed all portions of the TPC-B standard that
have a major impact on performance. Remaining measures which are necessary to
bring the test into full compliance with the TPC-B standard will have virtually
no effect on the results reported. Specifically, a database checkpoint will
only have a slight impact on throughput for a short period of time, and volume
shadowing for redo logs will have no impact whatsoever on throughput.
3. BENCHMARK RESULTS & ANALYSIS
-------------------------------
The VAX 6000-500 establishes a new record for ORACLE performance on VAX/VMS
hardware platforms. Based on our tests, we believe that the VAX 6000-500 is
potentially a very strong platform for ORACLE applications.
The table below shows TPS/VUP, overall scaling, and marginal scaling for the new
VAX 6000-500 in comparison with the VAX 6000 models 300 and 400. Clearly, the
VAX 6000-500 delivers substantially more throughput per VUP than its
predecessor, the VAX 6000-400, even though we used a more expensive TPC-B
transaction on the new model 500. (TPC-B transactions, in this case, are more
expensive than TP1 transactions due to the use of an additional select
statement and increased contention introduced by reducing teller and branch
table sizes by an order of magnitude.) Unfortunately, there are no TPC-B
based numbers for ORACLE's performance on earlier VAX 6000 models.
As an approximation, one can assume that 1 TPC-B TPS is equivalent to 1.1
TP1 TPS and normalize the results. This casts the VAX 6000-500 in an even
better light; either the TP1 and TPS / VUP numbers reported for the 6300
and 6400 decrease by 9%, or the TPC-B and TPS / VUP results reported for
the 6000-500 increase by 10%.
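The normalization arithmetic above can be checked directly; the 1.1 conversion factor is the report's own approximation, not a TPC-sanctioned one:

```python
# The report's approximation: 1 TPC-B TPS ~ 1.1 TP1 TPS.
FACTOR = 1.1

# Converting a TP1 result down to TPC-B terms lowers it by ~9% ...
tp1_tps = 79.1                  # VAX 6460 TP1 result from the table below
print(tp1_tps / FACTOR)         # ~71.9 "TPC-B equivalent" TPS

# ... or, equivalently, converting a TPC-B result up to TP1 terms
# raises it by 10%.
tpcb_tps = 168.2                # VAX 6560 TPC-B result
print(tpcb_tps * FACTOR)        # ~185.0 "TP1 equivalent" TPS

# The two views are consistent: 1/1.1 ~ 0.909, i.e. a 9% decrease.
print(round((1 - 1 / FACTOR) * 100))
```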
VAX 6000-500 (TPC-B):
---------------------
SCALING
CPU TPS OVERALL MARGINAL TPS / VUP
---- ------ ------- -------- ---------
6510 38.8 100% 100% 2.99
6520 72.5 93% 87% 2.89
6540 125.5 81% 68% 2.56
6560 168.2 72% 56% 2.33
VAX 6000-400 (TP1):
-------------------
SCALING
CPU TPS OVERALL MARGINAL TPS / VUP
---- ------ ------- -------- ---------
6410 17.3 100% 100% 2.47
6460 79.1 76% N/A 2.19
VAX 6000-300 (TP1):
-------------------
SCALING
CPU TPS OVERALL MARGINAL TPS / VUP
---- ------ ------- -------- ---------
6310 13.5 100% 100% 3.55
6360 66.0 81% 64% 3.0
Notes:
Overall scaling compares the overall performance of a single SMP machine
configured with N CPUs as a fraction of the performance of N individual
single CPU machines.
Marginal scaling indicates the marginal gain from the Nth processor added
as a fraction of the performance of a single CPU machine.
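The two definitions above can be sketched as a small calculation over the VAX 6000-500 table. For jumps of more than one processor (6520 to 6540, 6540 to 6560), marginal scaling here is averaged over the processors added; the table's 56% for the 6560 differs from this calculation by one point, presumably from rounding in the reported TPS:

```python
# TPS results from the VAX 6000-500 (TPC-B) table above, keyed by CPU count.
results = {1: 38.8, 2: 72.5, 4: 125.5, 6: 168.2}
base = results[1]

prev_n, prev_tps = 1, base
for n, tps in results.items():
    # Overall: N-CPU throughput as a fraction of N ideal single-CPU machines.
    overall = tps / (n * base)
    if n == 1:
        marginal = 1.0
    else:
        # Marginal: gain per processor added since the previous
        # configuration, as a fraction of single-CPU performance.
        marginal = (tps - prev_tps) / ((n - prev_n) * base)
    print(f"{n} CPUs: overall {overall:.0%}, marginal {marginal:.0%}")
    prev_n, prev_tps = n, tps
```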
4. ADDITIONAL FINDINGS
----------------------
We ran into two issues that require further investigation:
o Utilization of available memory
o ORACLE and the VMS 5.4 scheduler
These issues are discussed below.
4.1 ORACLE's MEMORY USAGE
VMS imposes a per-process limit on working set size, WSEXTENT, to control
memory usage. As in past benchmarks (specifically, our VAX 6000-400
testing), we found that the use of a WSEXTENT value larger than 64K induced
soft (I/O free) paging, despite the fact that 60% to 70% of physical memory
was sometimes TOTALLY unused.
This is a critical issue for ORACLE. One must be able to increase memory
usage as processor power and database size increase in order to maximize
ORACLE's performance. Despite the fact that database size increased by 450%
as we progressed from one to six processors, cache size remained fixed.
Evidence suggests that this did indeed have a performance impact:
Transactions in the six-processor test performed roughly 12% more I/O than
transactions in the single-processor test.
           #        CACHE     DB        ACCOUNT   DB:CACHE   I/O
 CPU       BUFFERS  SIZE      SIZE      # ROWS    RATIO      PER TXN
 ----      -------  ------    -------   -------   --------   -------
 6510      13600    26.6MB     488MB    4M        18         2.69
 6520      13600    26.6MB     977MB    8M        37         2.86
 6540      13600    26.6MB    1587MB    14M       60         2.92
 6560      13600    26.6MB    2197MB    18M       83         3.00
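The DB:CACHE column above is simply database size divided by the fixed cache size; a quick check shows the data-to-cache ratio growing almost fivefold while the cache stays frozen:

```python
cache_mb = 26.6    # ORACLE cache size, held constant across all runs
db_sizes = {"6510": 488, "6520": 977, "6540": 1587, "6560": 2197}  # MB

for cpu, db_mb in db_sizes.items():
    # With the cache fixed at 26.6MB, each added processor (and the
    # larger database it requires) dilutes cache coverage further.
    print(cpu, round(db_mb / cache_mb))
```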
4.2 ORACLE and the VMS SCHEDULER
The use of real-time priority levels had a dramatic effect on ORACLE's
performance. Our TPS rates improved by as much as 24% and idle CPU time
disappeared almost completely when we switched user and background
processes from priority 10, a timesharing priority level, to priority 16, a
real-time priority level. The effects of the switch were especially
dramatic for the four- and six-processor tests:
TIMESHARE REAL TIME TPS
CPU TPS IDLE CPU TPS IDLE CPU INCREASE
---- ----- --------- ----- -------- --------
6510 34.5 3.3% 38.9 2.2% 13%
6520 66.6 2.5% 72.5 4.0% 9%
6540 108.6 6.3% 125.5 0.3% 16%
6560 135.0 8.1% 168.2 1.6% 25%
We attribute these gains to the elimination of the process preemption that
takes place under timeshare scheduling in VMS 5.4.
VMS temporarily boosts the priority level of any user process for
which a direct I/O request completes by two levels, so that the process can
quickly perform I/O post-processing. Prior to VMS 5.4, this boost didn't cause
preemption: A computable process P1 could not preempt a currently executing
process P2 unless P1 had a priority at least three levels higher than P2.
As of VMS 5.4, however, this changes: A new SYSGEN parameter,
PRIORITY_OFFSET, now specifies the difference in priority required for a
computable process to preempt the current process. More important, the
default value for PRIORITY_OFFSET is 0, so high levels of I/O activity
under VMS 5.4 apparently induce much higher levels of preemption than under
previous versions of VMS.
Our use of real-time priority levels disabled this preemption mechanism.
Because neither the PRIORITY_OFFSET mechanism nor the priority boost for
I/O post-processing applies to processes at real-time priority levels, we
effectively returned things to the status quo ante. We assume that it is
possible to duplicate these results at timesharing priority levels by setting
PRIORITY_OFFSET to 3.
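The preemption rule described above can be sketched as a toy model. This is a simplification of the VMS scheduler, not its actual implementation, and the exact boundary condition of the PRIORITY_OFFSET comparison is an assumption:

```python
def can_preempt(waiter_pri, current_pri, priority_offset):
    """Simplified model of the VMS 5.4 rule: a newly computable process
    preempts the running one only if its priority advantage exceeds
    PRIORITY_OFFSET. (Exact boundary semantics are a guess.)"""
    return (waiter_pri - current_pri) > priority_offset

# VMS boosts a process by 2 levels when its direct I/O completes.
BOOST = 2
current = 10                 # timesharing base priority
boosted = current + BOOST    # I/O completer, briefly at priority 12

# VMS 5.4 default: PRIORITY_OFFSET = 0, so the boosted process preempts
# its priority-10 peers -- the churn the benchmark ran into.
print(can_preempt(boosted, current, 0))

# With PRIORITY_OFFSET set to 3 (the suggested workaround), a +2 boost
# is no longer enough to preempt, approximating pre-5.4 behavior.
print(can_preempt(boosted, current, 3))
```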
5. CONCLUSION AND FOLLOW-UP
---------------------------
Given its strong price-performance characteristics and its ability to
deliver more ORACLE throughput per rated VUP, Oracle's DEC Products Division
can strongly recommend the VAX 6000-500 to Oracle sales representatives and
their customers, particularly those customers who currently run ORACLE on the
VAX 6000-400.
In addition, the DEC Products Division recommends using VAX 6000-500 engines
in a VAXcluster configuration to benchmark Oracle's new support for
VAXcluster technology. We strongly believe that this test will demonstrate
that the combination of ORACLE and the VAX 6000-500 in a VAX cluster
environment can provide a very powerful solution to customers.
Finally, the VAX 6000-500 benchmark turned up several technical issues that,
if resolved, may lead to further performance gains on the VAX 6000-500 and
other VAX/VMS hardware platforms. We look forward to working with Digital
on the following items:
* Pursue high-end scaling issues with MSB Hardware Engineering.
* Establish a direct dialog between Digital's VMS Engineering and
Oracle's RDBMS Development to discuss OLTP performance issues on VMS.
* Investigate our VMS scheduler findings and the new PRIORITY_OFFSET
parameter.
* Follow up on the working set extent / ORACLE's memory usage issues.
6. PRICE-PERFORMANCE CALCULATIONS
---------------------------------
The full development cost for ORACLE's TPC-B / VAX 6000-560
benchmark is estimated at $14.5K / TPS. By comparison, the full development
cost of the environment used in ORACLE's TP1 / VAX 6000-360 benchmark was
$28.47K / TPS.
TPC-B / VAX 6000-560 cost calculations included the following items:
* Hardware, software, and maintenance fees for a period of five years
for runtime and development software
* On-line disk storage for 30 8-hour days of history data
* On-line disk storage for 8 hours of redo log data
The priced configuration varies from the actual tested configuration as
follows:
* The configuration was priced with 128MB of memory; the tested configuration
had 256MB, but actual memory usage never exceeded 107MB.
Full Development Environment Cost Calculation ORACLE - VAX 6000 Model 560:
------------------------------------------------------------------------
Standard Total
Description Model# Price QTY Price
------------------------------------------------------------------------
6000-560 VMS SBB 208/60 65FMA-AE $849,688 1 $849,688
128 Mbytes of Memory MS65A-DA $115,500 1 $115,500
9.5 GB Stor. Array SA6550JA $228,056 1 $228,056
1.1 GB SABB SA70-MK $ 39,312 6 $235,872
0.56 GB SABB SA70-LK $ 22,028 1 $ 22,028
HSC Cluster Start PKG HSS70-BA $ 97,742 2 $195,484
HSC Disk/Tape Data Chnnl HSC5X-DA $ 12,250 7 $ 85,750
CI Interface CIXCD-AB $ 37,947 1 $ 37,947
CI Cable BNCIA-10 $ 675 1 $ 675
VAXCluster Software QL-VBRAV-AA $ 7,068 1 $ 7,068
TK70 Tape Drive TBK70-CA $ 10,814 1 $ 10,814
VAX C QL-015A9-JN $ 25,819 1 $ 25,819
VAX C Media/Doc QA-015AA-HM $ 913 1 $ 913
ORACLE RDBMS V 6 $213,000 1 $213,000
ORACLE tpo $127,800 1 $127,800
First Year Digital Hardware and Software Total $1,815,614
Digital Business Agreement Discount=(9%) $163,405
Cont.:
-----------------------------------------------------------------------------
Desc. Service Unit Delta 5 YR Total Price+Service
Cont. Level $/Month # MO. Service 5 YR Cost
-----------------------------------------------------------------------------
VAX6560 DSMC $1,667 48 $80,000 $929,688
128MB NONE $ 0 0 $ 0 $115,500
9.5GB DSMC $1,188 48 $57,024 $285,080
1.1GB DSMC $ 276 48 $13,248 $249,120
0.56GB DSMC $ 169 48 $ 8,112 $ 30,140
HSC70 Included $ 0 48 $ 0 $195,484
HSC70 DSMC $ 54 48 $18,144 $103,894
CIXCD DSMC $ 102 48 $ 4,896 $ 42,843
BNCIA none $ 0 48 $ 0 $ 675
VC S/W QT-VBRAA-HM $ 25 48 $ 1,200 $ 8,268
TK70 DSMC $ 72 48 $ 3,456 $ 14,270
VAX C QT-015A9-LN $ 91 60 $ 5,460 $ 31,279
VAX C NONE $ 0 0 $ 0 $ 913
ORACLE Std Support $2,663 60 $159,750 $372,750
TPO Std Support $1,598 60 $ 95,850 $223,650
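The $14.5K / TPS figure can be reproduced from the tables above by summing the "Price+Service 5 YR Cost" column, assuming the 9% business agreement discount is applied once against the total:

```python
# "Price+Service 5 YR Cost" column from the tables above, per line item.
five_year_costs = [
    929_688,   # VAX 6560 + 48 months DSMC service
    115_500,   # 128MB memory
    285_080,   # 9.5GB storage array
    249_120,   # 1.1GB SABBs (x6)
    30_140,    # 0.56GB SABB
    195_484,   # HSC cluster start packages (service included)
    103_894,   # HSC data channels
    42_843,    # CI interface
    675,       # CI cable
    8_268,     # VAXcluster software
    14_270,    # TK70 tape drive
    31_279,    # VAX C license + service
    913,       # VAX C media/doc
    372_750,   # ORACLE RDBMS V6 + 5 years support
    223_650,   # ORACLE TPO + 5 years support
]
discount = 163_405   # 9% Digital business agreement discount
tps = 168.2          # measured TPC-B throughput on the 6560

total = sum(five_year_costs) - discount
print(f"${total:,} over 5 years -> ${total / tps / 1000:.1f}K / TPS")
```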
T.R | Title | User | Personal Name | Date | Lines
--------------------------------------------------------------------------
813.1 | We do better! | IJSAPL::OLTHOF | Henny Olthof @UTO 838-2021 | Fri Nov 30 1990 08:35 | 7 |
| Contact François Raab (COOKIE::RAAB) for a discussion on these results.
He and people in his group are very much involved in TP testing and
have good points to attack these results. Some of his comments:
- it's unaudited, not a full TPC-B test
- we score better with Rdb (waiting to release report)
Henny Olthof, TP-DB Holland
|
813.2 | Oracle again???? | RANCH::DAVIS | Intel Manufacturing SAM | Fri Nov 30 1990 18:08 | 19 |
| No User Load? Tuning the system as a batch database test....
Doesn't exactly match their scalability claims I have heard in the
past also.... They were claiming that you got more than 100%
improvement with the next processor in some cases....
Note how they had to boost priorities to make Oracle Scream....
would customers normally do this in a production environment?
Also, note that once again they point to problems with Digital products
(had to work around VAX C). Not much mention of the performance gain
from TPO. Of course, it's impossible to run this WITHOUT TPO, right?
it's a game with these folks....and we play right along....
sigh..
|
813.3 | | WIBBIN::NOYCE | | Fri Nov 30 1990 19:12 | 15 |
| Re .2 - batch database test
That's OK, TPC-B is supposed to be a batch database test (a
standardized version of the old TP1). DEC should have results soon,
I hope! This is a different test from TPC-A, which compares to
the true Debit-Credit benchmark. (Have Oracle ever published TPC-A
results?)
Re .0 "performance won't change when we follow the rules"
Mirroring (shadowing) can make reads go faster, but slows down writes.
Since the log is write-only, I can't see how this can avoid hurting
their performance.
Similarly, taking a checkpoint is likely to write much of that 28MB
cache to disk -- sounds like more of an impact than "lowered throughput
for a short time" to me.
|
813.4 | Underwritten by Digital? | SNOC02::BRIFFA | Mark Briffa - EIS Sydney | Sun Dec 02 1990 23:51 | 26 |
| I have quite a few concerns about the document.
To be fair, it is a "preliminary" document.
It claims to be sponsored by our MSB groups, which makes
Digital underwrite the credibility of the document. There is
an implication that everything in the document is *correct*;
otherwise Digital would not have sponsored it, or perhaps,
Digital would have corrected it.
It attempts to scale (incorrectly) TP1 to TPC-B, thereby
attempting to validate all of Oracle's previous benchmarks.
This is a very dangerous precedent with harmful implications
for the TPC.
My main concern is, however, just how rapidly the
document was distributed throughout Digital in its present
format. It came in via the Internet and there is no reason why
it would not be distributed across other networks.
It would be interesting to see if an audited full disclosure
TPC-B report would be distributed just as rapidly.
Regards,
Mark
|
813.5 | See next note for arguments | IJSAPL::OLTHOF | Henny Olthof @UTO 838-2021 | Mon Dec 03 1990 08:27 | 10 |
| As I stated in reply 1 (extract from the observations made by François
Raab), the test did not include shadowing of journals; it did not even
pass the ACID test. There is therefore reasonable doubt whether Oracle will
get the test accepted by the TPC.
As for our work in the field, we have sufficient ammo to fight this
with the report François compiled. I've listed that in the next reply.
Happy hunting
Henny Olthof
|
813.6 | François Raab comments | IJSAPL::OLTHOF | Henny Olthof @UTO 838-2021 | Mon Dec 03 1990 08:29 | 88 |
| I N T E R O F F I C E M E M O R A N D U M
Date: 27-Nov-90
From: Francois Raab
Dept: DBS
DTN: 523-2878
Subject: Analysis of ORACLE's results on the VAX 6000-500 series
++++++++++++++++++++
DIGITAL CONFIDENTIAL
++++++++++++++++++++
In interpreting Oracle's recent "preliminary" results, several questions
come to mind.
Did they run TPC-B or TP1?
TPC-B is a very strict and well-defined benchmark. To claim that you have
TPC-B results you must comply with ALL the requirements. Oracle did NOT run
TPC-B.
They ran something in between their own version of TP1 and the strict TPC-B
standard. The differences are as follows:
- no use of checkpointing
- no archiver process to copy redo logs
- no shadowing of journals
- no demonstration of passing ACID tests
Although Oracle claims otherwise, the absence of checkpointing can improve
performance by up to 10%. A more serious problem is the fact that they did
not pass the ACID tests. This potentially means that their RDBMS does not
currently comply with the ACIDity requirements defined by the TPC. This, in
itself, could explain why they have not yet released audited TPC-B numbers.
Not having shadowing will not improve performance, but it definitely
improves price/performance ratio.
How does Oracle's raw performance compare to Rdb/VMS?
On the single processor, Rdb/VMS is 12% faster than Oracle. This is when
comparing Rdb/VMS TPC-B number (currently under audit) with Oracle's TP1
number.
A very important aspect of Oracle's setup is that they run their user
processes at priority 16. This is not a realistic setup for production. By
raising the priority to real-time, they gain up to 24% throughput.
How does Oracle's SMP scaling compare to Rdb/VMS?
When measuring SMP scaling, several "games" can be played. The game that
Oracle is potentially playing results in limiting the tuning of their
uni-processor configuration to have a lower base to scale from.
Here they used the same cache size on the 6510 as on the 6560. A smaller
cache size on the 6510 would have given slightly better performance (less
cache management overhead) but would have reduced their scaling. This is
also shown by the increase in I/O per transaction, from 2.5 (6510) to 3.0
(6560). In an optimal environment, I/O decreases at high throughput because
of better grouping. Here, their emphasis is on SMP scalability.
To Oracle's credit is their use of global buffering, reducing I/O per
transaction, and their limited use of VMS locking. We have seen in prior
Rdb/VMS studies that the two major factors affecting SMP scaling are the
level of CPU cache misses and the spinlock activity. Having a single global
buffer reduces CPU cache misses. Doing less I/O and less VMS locking reduces
spinlock activity. The result is improved SMP scaling.
Not relying on VMS locking has given Oracle numerous delays in delivering
full cluster support. Buffering and locking activity are the two major areas
currently under investigation by the Rdb/VMS team. In the latest versions of
Rdb/VMS, lock operations have improved from 120 (V2.8) to 70 (V3.0) to 50
(V3.1) to 25 (V4.0).
Conclusion:
The VAX 6000-500 series is a first class machine in commercial environments.
This can be seen through Oracle's "preliminary" results. Similar and better
performance (such as in cluster environments) is seen with Rdb/VMS. Fully
TPC-B compliant tests are being run and audited with Rdb/VMS. Results should
be available shortly. No "preliminary" TPC-B results will be released
externally until the benchmark audit is complete.
Digital is committed to releasing industry standard benchmark results ONLY.
As long as Oracle continues to publish results for non-standard benchmarks,
audited or not, it will be difficult to compare their performance and cost
with other vendors. This might be exactly what Oracle is trying to do...
|
813.7 | don't worry .... be happy w/Rdb | KCBBQ::DUNCAN | Gerry Duncan @KCO 452-3445 | Tue Dec 04 1990 00:44 | 39 |
| Re: prices
-----------
I believe the Oracle license prices for the database and TPO are not
correct. According to the price list I have, the prices listed are for
the 6000-460. I find it hard to believe that Oracle will charge the
same license fees for a 39 mip system (6000-460) as they would for a
72 mip system (6000-560).
Re: buffer sizes
----------------
Based on the Oracle tests I have seen, Oracle does not do well when the
application contains a heavy mix of insert, update, and delete operations (as
well as queries). As a result, I do not believe that larger buffers benefit
applications that contain such a mix. In fact, this could be one of the
reasons for their lack of scaling because buffer management was eating up
a good piece of the cpu.
Re: checkpoints
---------------
In the tests we performed, we actually got better performance by reducing the
size of the SGA (global buffer) which forced more DIO. This seemed to help
performance since the processes reading from the database didn't have to work as
hard to find free buffers for their reads. Based on the way Oracle behaved
following this change, I would tend to lean toward more checkpoints instead of
fewer.
Re: check the version
---------------------
Also, it is interesting to note that our tests used 6.0.30.3, which was
shipped to our test location in late October. We asked for the latest release
before our tests and this was what they shipped. The report in .0 references
6.0.30.5, which means some sort of special "tweak" may have been used in this
run.
Rest assured that Rdb will do very well, especially in the complex environments.
Haven't you ever wondered why Oracle doesn't publish TPC-A ?? Well, the
reason is their performance stinks ... big time.
-- gerry
|
813.8 | Was Rdb Invited too ??? | KCBBQ::DUNCAN | Gerry Duncan @KCO 452-3445 | Tue Dec 04 1990 00:59 | 10 |
| Here's the $ 64,000 question:
- Did MSB let our own Rdb people have first shot at the new
machine .... or......
did Oracle get invited first with Rdb ignored ?????
That's the question I would like to see answered.
-- gerry
|
813.10 | but, really.. now | MCDONL::SARRACCO | it blinded me with science | Wed Dec 05 1990 02:18 | 8 |
| On a serious note, I heard that ORACLE, Belmont, CA debooked
their 9000 (I don't know which model) and replaced it with
either a 6540 or 6560.
Seems to me that they're going to do more investigating of that
dreaded VMS scheduler in an SMP configuration.
Kathy
|
813.11 | Joe Oracle strikes again | MCDONL::SARRACCO | it blinded me with science | Wed Dec 05 1990 02:31 | 9 |
| I thought that I understood VMS... and to think that I had to read
about it in an Oracle paper.
Gee, Mr. Oracle, I guess I don't understand how WSEXTENT functions
anymore...
(only kidding..)
Kathy
|