| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 267.1 | not AGAIN! | SNOC01::ANDERSONK | DESINE at ground zero | Mon Dec 12 1988 03:52 | 8 | 
|  |     Florence,
    
    I know a chap who has used both Adabas and Rdb. I will ask him for
    some info.  The only thing I recall is that Adabas uses physical
    placement on disk drives, rather than layering on the VMS file
    system, which makes backups and restores more difficult.
    
    Keith
 | 
| 267.2 | Something isn't right | COOKIE::BERENSON | VAX Rdb/VMS Veteran | Mon Dec 12 1988 17:50 | 3 | 
|  | I have real trouble believing that Adabas can beat a properly designed
Rdb/VMS database and application by that margin, if at all.  Could you
post more details about the benchmark?
 | 
| 267.3 | SAG is a mess these days | COOKIE::JANORDBY | Lloyd, YOU are no Vice President | Mon Dec 12 1988 21:22 | 32 | 
|  |     For some real FUD...
    
    Software AG of North America has recently been bought out (a hostile
    takeover) by its German-based sister (now parent) organization.
    The reasons were twofold: 1) the German management team cannot stand
    to have anything not done their way; in other words, all ADABAS
    development is done in West Germany according to what THEY perceive
    the market needs to be, with little regard for American users or
    SAG marketing. 2) The North American company was going to lose
    money in the third fiscal quarter of last year, something that had
    never happened before.
    
    A new SAG customer can expect slow service on complex problems,
    because they have to be shipped to Germany for resolution (local
    support has little experience and no access to source code). Second,
    changes to the product come only very painfully for North American
    customers. One famous quote from a SAG founder: "The PC is a toy,
    it will never catch on."  Unless a need is also felt in Europe, a
    resolution is unlikely.
    
    One of the SAG support centers (the backroom support) has been closed
    down. Although the function will resurface in the Denver support
    office, the ramp up will take some time. VMS based products usually
    take a back seat at SAG anyway.
    
    Send mail with other questions:
    Jamey
    COOKIE::JANORDBY
    
    I, too, find it hard to believe a three- or four-fold performance
    benefit from ADABAS, except on very specialized functions tailored
    to an inverted list, OR against the old version of Rdb.
 | 
| 267.4 | info on the benchmark | HGOVC::SHIRLEYCHAN |  | Tue Dec 13 1988 09:01 | 20 | 
|  |     The benchmark in question is a batch update process.
    The update process reads transaction records from an RMS file (8844
    records) and validates each input record against 10 tables of codes;
    the number of records in those tables ranges from around 100 to
    3000. If an input record is valid, it is stored into a table of
    about 366,000 records. In some cases, 1 to 2 records also have to
    be inserted into another table. All rejected or processed records
    are written to update reports, and the transaction image is written
    to an RMS file.
    Indexes are built on all code tables, and all validations are
    searches through an index. It takes 3 hrs to complete the batch
    (with deferred snapshots, 200 buffers each of 6 pages).
    For Adabas we used the same algorithms, the same transaction data,
    and the same db design (i.e. the same number of tables and the same
    indexes). It takes 45 min. to complete the batch.
    Adabas does much less I/O than Rdb, about 1/6 as much. Is that the
    main reason?
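    The flow described above can be sketched roughly as follows. This
    is a minimal illustration only: the field names, the validation
    rule, and the in-memory stand-ins for the tables and report files
    are assumptions, not the actual application code.

```python
# Sketch of the batch update described above (hypothetical names).
# Read transactions, validate each against the small code tables,
# store the valid ones, and log every record to the update report.

def run_batch(transactions, code_tables, main_table, reject_log):
    """Validate each transaction against the code tables; store valid ones."""
    stored = rejected = 0
    for rec in transactions:
        # A record is valid only if every coded field appears in its table.
        valid = all(rec[field] in code_tables[field] for field in code_tables)
        if valid:
            main_table.append(rec)       # insert into the ~366,000-row table
            stored += 1
        else:
            rejected += 1
        reject_log.append((rec, valid))  # update report / transaction image
        # The real application commits once per transaction here (see .7).
    return stored, rejected
```

    Against 8844 input records, it is this per-record validate/store/
    commit loop that dominates the elapsed time.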
    
    Shirley
    
 | 
| 267.5 | More detail on DB design would help | WIBBIN::NOYCE | Bill Noyce, FORTRAN/PARALLEL | Tue Dec 13 1988 16:53 | 19 | 
|  |     8844 records / 3 hours = about 1.2 seconds per record.  Not great.
    
    Are you committing after every record inserted?  That could lead
    to performance this bad.  For a batch application like this, you
    should only be committing after every few hundred stores.
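    A sketch of that batching pattern (`db` here is a hypothetical
    handle with `execute`/`commit` methods, not Rdb's actual call
    interface):

```python
# Commit once per batch of stores instead of once per record.
# `db` is any handle exposing execute() and commit() (an assumption).

def load_batched(db, records, commit_every=300):
    """Insert all records, committing after every `commit_every` stores."""
    pending = 0
    for rec in records:
        db.execute("INSERT ...", rec)   # placeholder insert statement
        pending += 1
        if pending >= commit_every:
            db.commit()                 # one commit amortized over many stores
            pending = 0
    if pending:
        db.commit()                     # flush the final partial batch
```

    The tradeoff is robustness: a crash can lose up to `commit_every`
    uncommitted stores, which then have to be re-run.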
    
    Is Adabas committing after every record inserted, too?  In other
    words, what happens to Adabas if the system crashes during this
    application?  Is the database automatically rolled back to the most
    recently committed record?  Or something else?
    
    We're not (just) trying to criticize the benchmark, but also trying
    to understand what customers (think they) need.  Here it looks like
    there might be a performance/robustness tradeoff.
    
    Is the customer interested in retrieving this data after it's loaded?
    If not (after all, he's not measuring it!) then you shouldn't have
    any index on the relation being loaded.  If he does need indexes
    on this data, please tell us about it (including placement, etc).
 | 
| 267.6 | Could be that validation etc | SNOC01::ANDERSONK | My DEBIT/CREDIT performance is lousy | Wed Dec 14 1988 03:56 | 13 | 
|  |     Were you loading into an area with hash or sorted indexes on it?
    If so, which type?  If HASH, are you allowed to use PLACEMENT as
    described in the RDB conference?  Ian Smith made a mistake in
    calculating page and buffer sizes, and the hash load took over 10
    hours.  He found the mistake, corrected it, and the load took 52
    minutes!  That was 100,000 account records (as per TP1), but no
    extra validation etc. was being done.
    
    Was it permitted to multistream the load?  That helps if the data
    is partitioned in some way.
    
    I'll post some comments on ADABAS I have received from a friend,
    when I get back to the office (where my notes are).
 | 
| 267.7 | We achieve similar results to Adabas | HGOVC::SHIRLEYCHAN |  | Thu Dec 15 1988 08:39 | 22 | 
|  |     I am back to answer some of the questions.
    
    The update commits once after each transaction, because there would
    be other on-line users updating the database and doing queries at
    the same time. It also has to generate a serial number for each
    record in order to make the primary key unique.
    
    All indexes are b-tree indexes. The validation code tables are very
    small, ranging from 100 to 4000 records.
    
    We tried converting some b-tree indexes to hash indexes (without
    clustering), but that didn't help.
    
    Adabas also commits once after each transaction.
    
    Finally, we increased the number of buffers to 250, and we achieved
    performance similar to Adabas: the batch completed in 53 minutes,
    using less CPU time than Adabas. The point is that increasing the
    number of buffers greatly decreases the amount of disk I/O.
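    For scale, the figures quoted in this thread work out as follows
    (plain arithmetic on the numbers already posted, no new
    measurements):

```python
# Per-record cost before and after tuning, using only figures from
# this thread: 8844 records, 3 hours before, 53 minutes after.
records = 8844
before_s = 3 * 3600                     # 10800 seconds
after_s = 53 * 60                       # 3180 seconds

per_record_before = before_s / records  # ~1.22 s per record
per_record_after = after_s / records    # ~0.36 s per record
speedup = before_s / after_s            # ~3.4x overall
```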
    
    Thanks
    Shirley 
 |