
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

544.0. "TP: Rdb or RMS ?" by HAN01::DOERING (Andreas Doering, SWAS Hannover @HAO) Fri Jan 19 1990 16:21

This time I don't need help on competitors, but on one of our own products:
For a major account we're offering an ACMS/Rdb/DECforms system (VAX 6xxx).

That customer is considering whether to use RMS or Rdb to store the data. According
to one of the recent benchmarks, on a 6310 D/C, RMS does 17 TPS and
Rdb V3.0 does 8.4 TPS. As the customer is worried about performance, he told us
to document the advantages and disadvantages of using Rdb in this case.

I know Rdb V3.1 performs slightly better, so the difference is a bit
smaller now. My question:

Besides the usual arguments of consistency, integrity and security, can anybody
give me some concrete points showing Rdb as the better (or only) solution for a
commercial environment, e.g. 'RMS maximum file size no bigger than ...MB/GB', etc.?

Thanks for any help
Andreas
544.1. "real life vs. debit/credit..." by CSOA1::CARLOTTI (I hate when golf season ends :-() Fri Jan 19 1990 17:32 (14 lines)
>>>That customer is considering whether to use RMS or Rdb to store the data. According
>>>to one of the recent benchmarks, on a 6310 D/C, RMS does 17 TPS and
>>>Rdb V3.0 does 8.4 TPS. As the customer is worried about performance, he told us
>>>to document the advantages and disadvantages of using Rdb in this case.

The debit/credit logical database design is a rather simple one which does 
not simulate a normal customer application with complex data relationships. 
Rdb will probably outperform RMS in most moderately complex applications 
where multiple keys, hashed indexes and atomic transactions are required.

The big questions are...what's the application?  how many users?  how much 
data?  etc.?

Rick C
544.2. "Huh? Where is the RMS number from?" by COOKIE::BERENSON (Words are a deadly weapon) Fri Jan 19 1990 21:12 (3 lines)
Excuse me.  Where does it say RMS does 17 TPS?  Maybe if journaling were turned
off!  With journaling turned on, I would expect RMS to have trouble getting over
1 TPS.
544.3. "DECintact's Figures ?" by ODIHAM::JONES_S (I framed Roger Rabbit !) Sun Jan 21 1990 22:48 (6 lines)
    Remember, the DR/CR figures published by us are for DECintact using
    DECintact hash files and DECintact's own journaling, and not good
    old RMS and RMS journaling !
    
    Steve J
    
544.4. "Where did you find those numbers ?" by STKHLM::WAHLQVIST Mon Jan 22 1990 10:01 (10 lines)
            
    I think that you are looking at the numbers for DECintact hash files.
    Are they really going to use DECintact and hash files? That has not very
    much to do with normal RMS. Standard RMS just can't perform that well
    when RUJ/AIJ is turned on.
    
    /uw
    
    PS: If you really got any D/C numbers for RMS, then please send me a
    copy, because I have never seen any numbers before DS
544.5. "Messed the numbers, but..." by HAN05::DOERING (Andreas Doering, SWAS Hannover @HAO) Mon Jan 22 1990 10:31 (16 lines)
Aargghh...
Seems I got the numbers mixed up ! The legend of a D/C graph says
'Relational', 'Codasyl' and 'Flat File', so I thought it was using RMS and not
DECintact/Hash. Thanks for making a blind man see !!

But...
Can anybody tell me what performance (roughly estimated) one could expect
using ACMS/Cobol/RMS compared to ACMS/Cobol/Rdb ? Does RMS performance degrade
as the amount of data increases ?

Re .1: the application is a (German) production planning system, which the
SW house has committed to rewrite for ACMS/Rdb/DECforms (NOT YET DONE). They don't
know how much data yet, probably 1 GB and 200 users (maybe 120 at a time).

Thanks,  
Andreas
544.6. "Any database will do" by MAIL::DUNCANG (Gerry Duncan @KCO) Mon Jan 22 1990 13:06 (10 lines)
    Seems to me you would want a database of some kind even if there
    was a performance hit.  The advantages would be:
    
    - more options for file placement
    - better tuning options
    - faster recovery from application/system failures
    - ease of use for developers
    - simplified backup and recovery procedures 

    -- gerry
544.7. "The important question is..." by WIBBIN::NOYCE (Bill Noyce, FORTRAN/PARALLEL) Mon Jan 22 1990 14:42 (16 lines)
    Various people have done various studies on RMS vs Rdb/VMS performance.
    Perhaps some of them will step in.  In the meantime, I'll make some
    guesses ...
    
    If you use RMS with journaling turned *off*, performance can range
    from 1/2 to 2x of Rdb/VMS performance, depending on things like
    whether Rdb can take advantage of its fancy placement, indexing,
    and join techniques, and whether RMS can take advantage of its simpler
    file structure and global buffers.
    
    If you turn on RMS recovery-unit journaling, performance gets much
    worse, so that it's uniformly worse than Rdb/VMS for comparable
    operations.
    
    The first important question for your customer, therefore, is whether
    he needs journaling.
544.8. "internal use only (as always)" by SRFSUP::LANGSTON (Ask me about RALLY) Fri Jan 26 1990 18:30 (171 lines)
    I got this through the grapevine. At least ten forwarding headers have
    been deleted.
    
    
    
    
    Report on a Comparison between RMS and Rdb/VMS for an Investment Bank
---------------------------------------------------------------------
Redjep Arikan 		July 1989
Paul Armour (Sales)
Johannes van Vuren

Introduction
------------
(Paul to add here an overview and background sketch of the project, 
highlighting the competitive issues, customer's perception of Digital,
why we were asked to bid, how we performed during the tests etc).

During March 1989 we were invited to bid for a TP system for an investment bank
in the City of London. The Bank had some experience of Digital's RMS
product but no experience of Rdb/VMS and were doubtful as to the performance of
relational database management systems in general and Rdb/VMS in particular.

The Bank was seeking a better solution to the following problem:-

	They receive, daily, a tape (called a transaction file containing about
	80,000 records) from the Stock Exchange and need to match the contents 
	of this file against their internal database consisting of a Deal file, 
	a Client file and a Stock file. For selected stock exchange records,
	a record is generated and written to a report file.
	The whole process is to complete within 2 hours. 

However, due to the large volume of data they were dealing with during their 
off-line days, their existing application was taking longer than they were 
prepared to accept. As a result we were asked to propose a better system, 
whether this would be another RMS based system or a new system based on 
Rdb/VMS. In order to determine which would perform better, we were asked to 
simulate their existing system using Rdb/VMS while they would do their own 
simulation in RMS using more powerful VAX processors.

A solution based on Rdb/VMS
---------------------------
The Rdb database consisted of five relations: STOCK, CLIENT, DEAL_IN, DEAL_OUT
and REPORT. These were defined as described below under the section on Loading 
of Data. The processing environment is shown below.

			       ------------
			        Stock Exch
		               ------------	
     		              .      .      .
                            .        .        .
                           .         .         .
                          .          .          .
                         .           .           .
                        .            .            .
                -------              .              ------
		 STOCK	(R/W)  	     .	            CLIENT (R/W)
                -------              .              ------
                                  ------- 
 				  DEAL_IN (R)
                                .--------.
                               .          .
                              .            .
                             .              .
                            .                .
                           .                  .
                          .                    .
                 --------                       ----------
                 REPORT	(W)  		         DEAL_OUT (W)
                 -------                         --------

(R/W = Read/Write,  W = Write,  R = Read)

The simulation in Rdb/VMS consisted of the following steps:

	1 For each Stock Exchange record read serially, a match is sought
	  against the brought-forward Deal table (DEAL_IN). Whether or not a
 	  matching record is found, three new records are written to the 
	  carry-forward Deal table (DEAL_OUT).
	
	2 Each Stock Exchange record, read serially, is then index-matched
	  to the Stock table in the Rdb database. Where a match is found the 
	  Stock relation is updated.

	3 For every four Stock Exchange records read, one record is written 
	  to an Rdb table called REPORT.

	4 For each Stock Exchange record, read serially, a match is sought 
	  against the Client table. Where a match is found, the Client table is 
	  updated in situ.
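
A minimal COBOL sketch of this four-step pass is shown below, for illustration
only. It is written against plain RMS indexed files rather than Rdb/VMS; every
file name, record layout and key in it is hypothetical, and the Rdb/VMS
simulation instead used RDBPRE-embedded database operations against the tables
shown in the diagram above. It is not either team's actual program.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. SE-MATCH-PASS.
      * Sketch of the four-step pass, against RMS indexed files.
      * All file names, record layouts and keys are hypothetical.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT SE-FILE ASSIGN TO "SE_TAPE.DAT".
           SELECT DEAL-IN ASSIGN TO "DEAL_IN.IDX"
               ORGANIZATION IS INDEXED ACCESS MODE IS RANDOM
               RECORD KEY IS DI-KEY.
           SELECT DEAL-OUT ASSIGN TO "DEAL_OUT.DAT".
           SELECT STOCK-FILE ASSIGN TO "STOCK.IDX"
               ORGANIZATION IS INDEXED ACCESS MODE IS RANDOM
               RECORD KEY IS ST-KEY.
           SELECT CLIENT-FILE ASSIGN TO "CLIENT.IDX"
               ORGANIZATION IS INDEXED ACCESS MODE IS RANDOM
               RECORD KEY IS CL-KEY.
           SELECT REPORT-FILE ASSIGN TO "REPORT.DAT".
       DATA DIVISION.
       FILE SECTION.
       FD  SE-FILE.
       01  SE-REC.
           05  SE-DEAL-KEY    PIC X(12).
           05  SE-STOCK-KEY   PIC X(8).
           05  SE-CLIENT-KEY  PIC X(8).
           05  SE-DATA        PIC X(52).
       FD  DEAL-IN.
       01  DI-REC.
           05  DI-KEY         PIC X(12).
           05  DI-DATA        PIC X(68).
       FD  DEAL-OUT.
       01  DO-REC.
           05  DO-DEAL-KEY    PIC X(12).
           05  DO-SEQ         PIC 9(2).
           05  DO-DATA        PIC X(66).
       FD  STOCK-FILE.
       01  ST-REC.
           05  ST-KEY         PIC X(8).
           05  ST-POSITION    PIC S9(9) COMP.
           05  ST-DATA        PIC X(60).
       FD  CLIENT-FILE.
       01  CL-REC.
           05  CL-KEY         PIC X(8).
           05  CL-DATA        PIC X(72).
       FD  REPORT-FILE.
       01  RP-REC             PIC X(80).
       WORKING-STORAGE SECTION.
       01  WS-EOF             PIC X     VALUE "N".
       01  WS-EVERY-4         PIC 9     VALUE 0.
       01  WS-SEQ             PIC 9(2).
       PROCEDURE DIVISION.
       MAIN-LOOP.
           OPEN INPUT SE-FILE DEAL-IN
                I-O STOCK-FILE CLIENT-FILE
                OUTPUT DEAL-OUT REPORT-FILE
           PERFORM UNTIL WS-EOF = "Y"
               READ SE-FILE
                   AT END MOVE "Y" TO WS-EOF
                   NOT AT END PERFORM PROCESS-SE-REC
               END-READ
           END-PERFORM
           CLOSE SE-FILE DEAL-IN DEAL-OUT STOCK-FILE CLIENT-FILE
                 REPORT-FILE
           STOP RUN.
       PROCESS-SE-REC.
      *    Step 1: look up the brought-forward deal, then write three
      *    carry-forward deal records whether or not it was found.
           MOVE SE-DEAL-KEY TO DI-KEY
           READ DEAL-IN INVALID KEY CONTINUE END-READ
           PERFORM VARYING WS-SEQ FROM 1 BY 1 UNTIL WS-SEQ > 3
               MOVE SE-DEAL-KEY TO DO-DEAL-KEY
               MOVE WS-SEQ TO DO-SEQ
               MOVE SE-DATA TO DO-DATA
               WRITE DO-REC
           END-PERFORM
      *    Step 2: keyed match against STOCK, updated in place.
           MOVE SE-STOCK-KEY TO ST-KEY
           READ STOCK-FILE
               INVALID KEY CONTINUE
               NOT INVALID KEY ADD 1 TO ST-POSITION
                               REWRITE ST-REC
           END-READ
      *    Step 3: one report record for every four SE records.
           ADD 1 TO WS-EVERY-4
           IF WS-EVERY-4 = 4
               MOVE SE-REC TO RP-REC
               WRITE RP-REC
               MOVE 0 TO WS-EVERY-4
           END-IF
      *    Step 4: keyed match against CLIENT, updated in situ.
           MOVE SE-CLIENT-KEY TO CL-KEY
           READ CLIENT-FILE
               INVALID KEY CONTINUE
               NOT INVALID KEY REWRITE CL-REC
           END-READ.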


The Bank's own simulation of the four steps above used the following parameters:-
------------------------------------------------------------------------------
1 They rewrote their existing application using FDL's, pre-allocated files 
  distributed over four RA90's. COBOL was used as the 3GL.

2 H/W:	8820, 4 x RA90

3 Did not use RMS Journalling.

RESULT:
------- 
Measured time to process 80,000 SE records was 1 hour 45 minutes.
(On their existing system the run took 2 hours 50 minutes on an 8600.)


Digital's simulation using Rdb/VMS was based on the following:-
-------------------------------------------------------------
1 Same H/W: 8820 and 3 x RA90.

2 Multi-file database, 10 storage areas.

3 Tables CLIENT and STOCKS partitioned in 3 parts each and each table on its
  own disk. HASH indexing for CLIENT and STOCK, sorted indexing for DEAL_IN.

4 Cobol used for 3GL application using RDBPRE.

5 Test performed in batch mode (ie single process).

6 Snapshots and After image journalling were disabled.

RESULT: 
-------
Measured time to process 80,000 SE records was 47 minutes.



Loading of data into Rdb relations
----------------------------------
The DEAL_IN, STOCK and CLIENT relations were loaded with a Cobol program from
RMS files as follows:

1 DEAL_IN:  200k records, sorted index defined with data pre-sorted before 
	    loading (see the pre-sort sketch after this list).
	    Measured time to load: 17 min.

2 STOCK:    120k records, hash index defined with placement_via option and 
            DB_KEYS sorted.
	    Measured time to load: 13 min.

3 CLIENT:   30k records, hash index defined with placement_via option and
	    DB_KEYS sorted.
	    Measured time to load: 4 min. 
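
The pre-sorting mentioned in 1 above can be done with an ordinary COBOL SORT
before the load program runs. The sketch below is illustrative only; the file
names, record size and key field are hypothetical. Presenting the records in
index-key order lets the sorted index be filled in key order rather than by
random insertions.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. PRESORT-DEALS.
      * Sketch of the pre-sort step for the DEAL_IN load; file names,
      * record size and the key field are hypothetical.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT DEALS-RAW    ASSIGN TO "DEALS_RAW.DAT".
           SELECT DEALS-SORTED ASSIGN TO "DEALS_SORTED.DAT".
           SELECT SORT-WORK    ASSIGN TO "SORTWORK".
       DATA DIVISION.
       FILE SECTION.
       FD  DEALS-RAW.
       01  RAW-REC              PIC X(80).
       FD  DEALS-SORTED.
       01  SORTED-REC           PIC X(80).
       SD  SORT-WORK.
       01  SORT-REC.
           05  SR-DEAL-KEY      PIC X(12).
           05  FILLER           PIC X(68).
       PROCEDURE DIVISION.
       MAIN.
           SORT SORT-WORK
               ON ASCENDING KEY SR-DEAL-KEY
               USING  DEALS-RAW
               GIVING DEALS-SORTED
           STOP RUN.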


Conclusions
-----------
(Paul to add here how the bank's perception of Digital has now changed, what
potential there is now for future business in the bank etc).

These results dispelled the Bank's doubts about the performance of Rdb/VMS in
a large volume batch environment and enabled Digital to win this important
bid. But it also accentuated the fact that good Rdb design skills are now
imperative if the powerful functionality and performance of the product are to
make a sale succeed. To be relational is not enough. We also need to be 
excellent physical database designers. Towards this end, the new design tools,
DECtrace and RdbExpert are going to play a very important role in our future
competitive advantage. Had these tools been available during this exercise,
our credibility with this customer would have been even greater.

To Distribution List:

BRUCE LANGSTON@LAO,
MIMI THAI@LAO,
WARREN KARZEN@CWO,
GERRY DUNCAN@KCO,
GRAHME JENSEN@IVO,
CHIP SCHOOLER@IVO,
ALLEN KASEY@IVO,
ANDY ANDERSON@IVO