| The Oracle benchmark methodology is suspect at best. Many US-based
consultants are having a field day kicking Oracle's new test results.
Among them is Omri Serlin, one of the most respected authorities
on benchmark methodology.
What you have entered is essentially the Oracle "party line". It
sounds great, but it won't wash. I would be amazed if they get their
new product out the door in August. They promised 90 days in their
18 July announcement. I think the release is closer to October.
For their benchmarks, the new PL/SQL language wasn't used. It wasn't
ready. The databases were memory-resident. TPS was used, but it's
one hell of an expensive method for obtaining row-level locking.
Gee, isn't there a database that comes with row-level locking?
Oracle has a track record of overestimating the number of users
they can support on a system. They used to say 12-15 on a MicroVAX.
That could only be achieved after weeks of fine-tuning, and it was
virtually impossible if SQL*Forms was used. Almost every Oracle
customer uses it.
The Oracle numbers are interesting. It will be more interesting
to see if they in any way resemble reality.
It is always pleasant to see internal memos transmitting the party
lines of competitors. Since all their numbers are being made public,
there is nothing "sensitive" in this memo. However, the line about
Oracle's TP1 being very similar to Debit/Credit is a complete joke.
Oracle's "audit" appears flimsy. Its test methodology is suspect
at best. The entire benchmark is being subjected to intense criticism.
I hope no one takes their numbers to heart without looking closely.
---- Michael Booth
|
| An external article for general interest:
From this week's Computerworld (pages 23, 28):
ON YOUR MARK, GET SET, FOUL!
By Charles Babcock
Oracle Corp.'s claim of shattering records for transaction processing
includes a curious contradiction: Right after Oracle's stated "world
record for performance" of 265 transactions/sec., it noted that the
results "have been audited and verified by the highly reputable and expert
firm of Codd and Date Consulting Group."
But when you turn to the auditor's report, no such result appears. As a
matter of fact, the report isn't available yet, although the brief, two-
page summary signed by Tom Sawyer of Codd and Date was labeled "Auditor's
Report" by Oracle. Sawyer signed off on the DEC VAX and Sequent benchmarks;
possibly for that reason, their results remain somewhere below the
stratosphere. But whoever audited the mainframe results that make Oracle
look better than Tandem and IBM has yet to step forward.
But wait---there's another contradiction. Oracle states "the industry-
standard TP1 benchmark" was used in achieving its results. As Omri
Serlin, Rich Finkelstein, Frederic Withington and other independent
experts have been saying, TP1 is still loosely defined and there is no
TP1 standard. Serlin is trying to establish such a standard through a
Debit/Credit Council, but Oracle refuses to join.
As if we weren't confused enough, Oracle President Lawrence J.
Ellison has started talking about Oracle's claimed TP1 benchmarks as a
"Debit/Credit benchmark, absolutely the same as Tandem's."
We thought Debit/Credit was our one clear transaction processing
benchmark, defined in an April 1985 DATAMATION article and documented
by Tandem in March 1987. There is nothing in the Oracle performance
report that indicates it included a communications front-end or did
mirrored journaling, as would be required in a Debit/Credit test. Is
Lawrence J. Ellison confused?
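For anyone who has never seen the profile: the Debit/Credit
transaction itself is tiny, which is exactly why the surrounding
requirements are the whole ballgame. Here is a rough sketch in
Python, with sqlite3 standing in for the DBMS; the schema and names
are my own illustration, not anything taken from the Oracle report:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # memory-resident; the irony is noted
    conn.executescript("""
        CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER);
        CREATE TABLE teller  (id INTEGER PRIMARY KEY, balance INTEGER);
        CREATE TABLE branch  (id INTEGER PRIMARY KEY, balance INTEGER);
        CREATE TABLE history (account_id, teller_id, branch_id, delta);
    """)

    def debit_credit(conn, account, teller, branch, delta):
        # One Debit/Credit transaction: adjust three balances and append
        # a history row, committed atomically.  A conforming test would
        # also demand mirrored journaling behind the commit and a
        # communications front-end in front of it.
        cur = conn.cursor()
        cur.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                    (delta, account))
        cur.execute("UPDATE teller SET balance = balance + ? WHERE id = ?",
                    (delta, teller))
        cur.execute("UPDATE branch SET balance = balance + ? WHERE id = ?",
                    (delta, branch))
        cur.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                    (account, teller, branch, delta))
        conn.commit()

The SQL is the easy part; the front-end and the journaling are what
separate a Debit/Credit result from a toy.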
But hold on a minute---Oracle's claimed TP1 also contains a major
contradiction. All previous implementors of TP1 have agreed that the
transaction begins with a simulated terminal network generating
transactions.
In Cullinet's test of IDMS/SQL, the transactions were generated on
the same machine that was running the benchmark. Sybase avoided this
overhead by running the transaction-generating application on a
separate VAX and feeding the simulated transactions into the benchmark
machine. Oracle criticized Sybase for that step at the time, but in its
mainframe tests, Oracle has carried the maneuver to its ultimate
illogical step: It measured transactions fired off in main memory
without any attempt to mimic a network.
"Oracle has gone to such an extreme in what they have done. It had
no on-line flavor whatsoever," Serlin says.
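To make the distinction concrete, compare two toy drivers. This is a
hedged sketch in Python; the host, port, message format and the
run_txn callable are all invented for illustration, not taken from
any of the tests described above:

    import socket, random

    def drive_over_network(host, port, n=1000):
        # A simulated terminal network: every request crosses a real
        # socket, so the benchmark machine pays the communications cost
        # that previous TP1 implementors assumed it should.
        with socket.create_connection((host, port)) as s:
            for _ in range(n):
                acct = random.randrange(1, 100000)
                s.sendall(("TXN %d 100\n" % acct).encode())
                s.recv(64)        # block for the reply, as a terminal would

    def drive_in_memory(run_txn, n=1000):
        # What the critics say Oracle measured: transactions fired off
        # in main memory, with no terminals, no network, and no on-line
        # flavor whatsoever.
        for _ in range(n):
            run_txn(random.randrange(1, 100000), 100)

Both loops run "transactions," but only the first one resembles a bank.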
Let's try to get this straight. In the Oracle version of a benchmark
that simulates banking transactions, all of the bank's customers,
tellers and branches are located inside the bank's central computer.
Not surprisingly, analysts on Wall Street found this to be an
acceptable test. "I was impressed with the quality of the benchmarks,"
Scott Smith of DLJ Securities told Computerworld before the flight on
the British Airways Concorde arranged for the announcement.
"The benchmarks were plentiful thorough and well documented," said
Roxanne Googin, and analyst at Needham & Co. in New York.
HOW NOT TO JUDGE A BENCHMARK
Yes, ladies and gentlemen, it was a thick document, but we should
read it as well as weigh it.
We should also read the full auditor's report if, indeed, it ever
becomes available. An auditor reports his exceptions as well as the
aspects of the test he can verify. Sawyer's summary letter states
simply, "These attributes of the benchmark were verified," listing
seven items.
This summary is silent on exceptions. When Sawyer audited the widely
hailed Tandem benchmark in 1987, he was able to find exceptions, and I
suspect his final report on Oracle will read differently from this summary.
Until that report becomes available, I would urge Oracle customers to
ignore the alleged TP1 benchmarks and measure Oracle with their own
applications. Any real-world results will be better than getting taken
on a flight of fancy.
|