| Aargghh...
Seems I got the numbers mixed up! The legend of the D/C graph says
'Relational', 'Codasyl' and 'Flat File', so I thought it was using RMS and not
DECintact/Hash. Thanks for making a blind man see !!
But...
Can anybody tell me what performance (roughly estimated) one could expect
using ACMS/Cobol/RMS compared to ACMS/Cobol/Rdb? Does RMS performance degrade
as the amount of data increases?
- .1: the application is a (German) production planning system, which the
SW-house has committed to rewriting for ACMS/Rdb/DECforms (NOT YET DONE). They
don't know how much data there will be; probably 1 GB and 200 users (maybe 120
at a time).
Thanks,
Andreas
|
| Various people have done various studies on RMS vs Rdb/VMS performance.
Perhaps some of them will step in. In the meantime, I'll make some
guesses ...
If you use RMS with journaling turned *off*, performance can range
from half to twice that of Rdb/VMS, depending on things like
whether Rdb can take advantage of its fancy placement, indexing,
and join techniques, and whether RMS can take advantage of its simpler
file structure and global buffers.
If you turn on RMS recovery-unit journaling, performance gets much
worse, so that it's uniformly worse than Rdb/VMS for comparable
operations.
The first important question for your customer, therefore, is whether
he needs journaling.
|
| I got this through the grapevine. At least ten forwarding headers have
been deleted.
Report on a Comparison between RMS and Rdb/VMS for an Investment Bank
---------------------------------------------------------------------
Redjep Arikan July 1989
Paul Armour (Sales)
Johannes van Vuren
Introduction
------------
(Paul to add here an overview and background sketch of the project,
highlighting the competitive issues, customer's perception of Digital,
why we were asked to bid, how we performed during the tests etc).
During March 1989 we were invited to bid for a TP system for an investment bank
in the City of London. The Bank had some experience of Digital's RMS
product but no experience of Rdb/VMS and was doubtful as to the performance of
relational database management systems in general and Rdb/VMS in particular.
The Bank was seeking a better solution to the following problem:-
They receive a daily tape from the Stock Exchange (a transaction file
containing about 80,000 records) and need to match its contents against
their internal database, which consists of a Deal file, a Client file and
a Stock file. For selected Stock Exchange records, a record is generated
and written to a report file.
The whole process is to complete within 2 hours.
However, due to the large volume of data they were dealing with during their
off-line days, their existing application was taking longer than they were
prepared to accept. As a result we were asked to propose a better system,
whether this would be another RMS based system or a new system based on
Rdb/VMS. In order to determine which would perform better, we were asked to
simulate their existing system using Rdb/VMS while they would do their own
simulation in RMS using more powerful VAX processors.
A solution based on Rdb/VMS
---------------------------
The Rdb database consisted of five relations: STOCK, CLIENT, DEAL_IN, DEAL_OUT
and REPORT. These were defined as described below under the section on Loading
of Data. The processing environment is shown below.
                          ------------
                           Stock Exch
                          ------------
                                .
                .  .  .  .  .  .  .  .  .  .  .  .
                .               .                .
          -----------           .          ------------
          STOCK (R/W)           .          CLIENT (R/W)
          -----------           .          ------------
                                .
                          ------------
                          DEAL_IN  (R)
                          ------------
                                .
                      .  .  .  .  .  .  .  .
                      .                     .
                ------------         --------------
                REPORT  (W)          DEAL_OUT  (W)
                ------------         --------------

          (R/W = Read/Write, W = Write, R = Read)
The simulation in Rdb/VMS consisted of the following steps (a COBOL sketch of
the matching loop follows the list):
1 For each Stock Exchange record read serially, a match is sought
against the brought-forward Deal table (DEAL_IN). Whether or not a
matching record is found, three new records are written to the
carry-forward Deal table (DEAL_OUT).
2 Each Stock Exchange record, read serially, is then index-matched
against the Stock table in the Rdb database. Where a match is found, the
Stock relation is updated.
3 For every four Stock Exchange records read, one record is written
to an Rdb table called REPORT.
4 For each Stock Exchange record, read serially, a match is sought
against the Client table. Where a match is found, the Client table is
updated in situ.
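To make the control flow of the four steps concrete, here is a minimal COBOL
skeleton of the matching loop. It is only a sketch under assumptions: the file
name, record layout and paragraph names are invented, and the Rdb operations
the real test issued through RDBPRE are indicated by comments rather than
spelled out.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SE-MATCH-SKETCH.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT SE-TAPE ASSIGN TO "SE_DAILY.DAT"
               ORGANIZATION IS SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  SE-TAPE.
       01  SE-RECORD.
           05  SE-DEAL-KEY        PIC X(12).
           05  SE-STOCK-CODE      PIC X(12).
           05  SE-CLIENT-CODE     PIC X(8).
           05  FILLER             PIC X(48).
       WORKING-STORAGE SECTION.
       01  END-OF-TAPE            PIC X     VALUE "N".
       01  REPORT-CYCLE           PIC 9     VALUE 0.
       PROCEDURE DIVISION.
       MAIN-LINE.
           OPEN INPUT SE-TAPE
           PERFORM PROCESS-SE-RECORD UNTIL END-OF-TAPE = "Y"
           CLOSE SE-TAPE
           STOP RUN.
       PROCESS-SE-RECORD.
           READ SE-TAPE
               AT END MOVE "Y" TO END-OF-TAPE
               NOT AT END
                   PERFORM MATCH-DEAL
                   PERFORM MATCH-STOCK
                   ADD 1 TO REPORT-CYCLE
                   IF REPORT-CYCLE = 4
                       PERFORM WRITE-REPORT
                       MOVE 0 TO REPORT-CYCLE
                   END-IF
                   PERFORM MATCH-CLIENT
           END-READ.
       MATCH-DEAL.
      *    Step 1: look up SE-DEAL-KEY in DEAL_IN; whether or not a match
      *    is found, store three rows in DEAL_OUT (RDBPRE calls omitted).
           CONTINUE.
       MATCH-STOCK.
      *    Step 2: keyed match of SE-STOCK-CODE against STOCK and update
      *    of the matching row (RDBPRE calls omitted).
           CONTINUE.
       WRITE-REPORT.
      *    Step 3: one REPORT row for every four SE records read.
           CONTINUE.
       MATCH-CLIENT.
      *    Step 4: keyed match of SE-CLIENT-CODE against CLIENT and
      *    in-place update of the matching row (RDBPRE calls omitted).
           CONTINUE.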
The Bank's own simulation of the four steps above used the following
parameters (an illustrative RMS keyed-read sketch follows the results):-
------------------------------------------------------------------------------
1 They rewrote their existing application using FDLs, with pre-allocated files
distributed over four RA90s. COBOL was used as the 3GL.
2 H/W: 8820, 4 x RA90
3 Did not use RMS Journalling.
RESULT:
-------
Measured time to process 80,000 SE records was 1 hour 45 minutes.
(On their existing system the same run took 2 hours 50 minutes on an 8600.)
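For comparison, the RMS side of such a rewrite revolves around indexed files
whose physical tuning (pre-allocation, bucket sizes, fill factors) lives in
the FDL, while the COBOL program simply does keyed reads and rewrites. A
minimal sketch of one such keyed lookup is below; the file name, record layout
and key are assumptions, not the Bank's actual definitions.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. STOCK-LOOKUP-SKETCH.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT STOCK-FILE ASSIGN TO "STOCK_MASTER.IDX"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS RANDOM
               RECORD KEY IS STK-CODE
               FILE STATUS IS STOCK-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  STOCK-FILE.
       01  STOCK-RECORD.
           05  STK-CODE           PIC X(12).
           05  STK-DESCRIPTION    PIC X(40).
           05  STK-POSITION       PIC S9(9) COMP.
       WORKING-STORAGE SECTION.
       01  STOCK-STATUS           PIC XX.
       01  WS-STOCK-CODE          PIC X(12) VALUE "ABC123".
       PROCEDURE DIVISION.
       LOOKUP-AND-UPDATE.
      *    Keyed read against the RMS indexed file; bucket sizes, key
      *    compression and pre-allocation come from the FDL used to
      *    create the file, not from the program.
           OPEN I-O STOCK-FILE
           MOVE WS-STOCK-CODE TO STK-CODE
           READ STOCK-FILE
               INVALID KEY
                   DISPLAY "No match for " WS-STOCK-CODE
               NOT INVALID KEY
                   ADD 1 TO STK-POSITION
                   REWRITE STOCK-RECORD
           END-READ
           CLOSE STOCK-FILE
           STOP RUN.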
Digital's simulation using Rdb/VMS was based on the following:-
-------------------------------------------------------------
1 Same H/W: 8820 and 3 x RA90.
2 Multi-file database, 10 storage areas.
3 Tables CLIENT and STOCK partitioned into 3 parts each, with each table on
its own disk. Hash indexing for CLIENT and STOCK, sorted indexing for DEAL_IN.
4 COBOL used as the 3GL, with the application precompiled through RDBPRE.
5 Test performed in batch mode (ie single process).
6 Snapshots and After image journalling were disabled.
RESULT:
-------
Measured time to process 80,000 SE records was 47 minutes.
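Expressed as throughput, the measured figures work out roughly as follows:

    Rdb/VMS on the 8820:     80,000 records /  47 min  =  ~1,700 records/min
    RMS rewrite on the 8820: 80,000 records / 105 min  =    ~760 records/min
    Original 8600 system:    80,000 records / 170 min  =    ~470 records/min

That is, the Rdb/VMS run achieved roughly 2.2 times the throughput of the RMS
rewrite on the same hardware, and roughly 3.6 times that of the original
system.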
Loading of data into Rdb relations
----------------------------------
The DEAL_IN, STOCK and CLIENT relations were loaded with a COBOL program from
RMS files as follows (a sketch of the pre-sort step appears after the list):
1 DEAL_IN: 200k records, sorted index defined with data pre-sorted before
loading.
Measured time to load: 17 min.
2 STOCK: 120k records, hash index defined with placement_via option and
DB_KEYS sorted.
Measured time to load: 13 min.
3 CLIENT: 30k records, hash index defined with placement_via option and
DB_KEYS sorted.
Measured time to load: 4 min.
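The pre-sort of the DEAL_IN data can be done with a plain COBOL SORT run ahead
of the load program. A minimal sketch, with invented file names and an assumed
12-character deal key at the front of a 100-byte record:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. DEAL-PRESORT-SKETCH.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT DEAL-RAW    ASSIGN TO "DEAL_RAW.DAT".
           SELECT DEAL-SORTED ASSIGN TO "DEAL_SORTED.DAT".
           SELECT SORT-WORK   ASSIGN TO "SORTWORK".
       DATA DIVISION.
       FILE SECTION.
       FD  DEAL-RAW.
       01  DEAL-RAW-REC          PIC X(100).
       FD  DEAL-SORTED.
       01  DEAL-SORTED-REC       PIC X(100).
       SD  SORT-WORK.
       01  SORT-REC.
           05  SRT-DEAL-KEY      PIC X(12).
           05  FILLER            PIC X(88).
       PROCEDURE DIVISION.
       PRESORT.
      *    Sort the raw extract on the key the DEAL_IN sorted index is
      *    built on, so the load writes index entries in ascending
      *    key order.
           SORT SORT-WORK
               ON ASCENDING KEY SRT-DEAL-KEY
               USING DEAL-RAW
               GIVING DEAL-SORTED
           STOP RUN.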
Conclusions
-----------
(Paul to add here how the bank's perception of Digital has now changed, what
potential there is now for future business in the bank etc).
These results dispelled the Bank's doubts about the performance of Rdb/VMS in
a large volume batch environment and enabled Digital to win this important
bid. But it also underlined the fact that good Rdb design skills are now
imperative if the product's functionality and performance are to carry a sale.
To be relational is not enough; we also need to be excellent physical database
designers. Towards this end, the new design tools, DECtrace and RdbExpert, are
going to play a very important role in our future competitive advantage. Had
these tools been available during this exercise, our credibility with this
customer would have been even greater.
|