
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

1216.0. "Rdb/VMS limits" by IJSAPL::OLTHOF (Henny Olthof @UTO 838-2021) Sun Dec 20 1992 22:01

I just got involved in a project that requires an Rdb/VMS database of about 44
Gigabytes (data including indexes). Some people in Digital are concerned about
this database size because they have heard about the '50 Gigabyte limit'.

If I remember correctly, the 50 GB limit is derived from the fact that we can
back up a database of that size online on 3 parallel drives in 8 hours (night
shift). Is that true?

Please provide me (as extra assurance) with the logical and physical
limits in Rdb/VMS (e.g. #tables, #rows/table, #fields/table, maximum size of
storage areas, etcetera). With these figures I hope to prove that the database
size is comfortably within Rdb/VMS's current limitations. Note 1124 asks the same
question but was never answered.

Thanks for your help,
Henny Olthof
1216.1. by NOVA::DIMINO Mon Dec 21 1992 22:29 (35 lines)
>> I just got involved in a project that requires an Rdb/VMS database of about 44
>> Gigabytes (data including indexes). Some people in Digital are concerned about
>> this database size because they have heard about the '50 Gigabyte limit'.

>> If I remember correctly, the 50 GB limit is derived from the fact that we can
>> back up a database of that size online on 3 parallel drives in 8 hours (night
>> shift). Is that true?

Actually it's all nonsense. The limits published in the SPD just
reflect the size we were willing to support at the time that version
was released. BACKUP places no limit on database size. RMU/BACKUP scales
linearly with system capacity, so if you are willing to pay more,
and Digital offers the hardware capacity, you can back up more in the same
time, or the same amount in less time. If tapes are your problem, just add more
tapes; if disks are your problem, spread the DB over more disks; and if CPU
is the problem, substitute a faster one. You have to stop when
you can't expand the system any further; that's probably about
30 Gbytes/hour for a VAX and 80+ Gbytes/hour for an AXP system with a
single RMU/BACKUP command. If you are willing to run more
than one backup at a time, even that may not be a limit.
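
To put those rates against the 44 GB requirement in .0, here is a minimal
back-of-the-envelope sketch (Python, nothing Rdb-specific). The 30 and 80
GB/hour figures are the rough single-RMU/BACKUP ceilings quoted above, and
the 8-hour window is the night shift mentioned in .0; treat all of them as
illustrative assumptions, not measured numbers.

    # Back-of-the-envelope backup-window check using the figures quoted in
    # this topic; the rates are rough ceilings, not measurements.
    DB_SIZE_GB = 44           # database size from .0 (data + indexes)
    WINDOW_HOURS = 8          # overnight backup window from .0
    RATE_GB_PER_HR = {
        "VAX": 30,            # rough single RMU/BACKUP ceiling per .1
        "AXP": 80,            # rough single RMU/BACKUP ceiling per .1
    }

    def backup_hours(size_gb, rate_gb_per_hr):
        """Hours needed at a given sustained backup throughput."""
        return size_gb / rate_gb_per_hr

    for label, rate in RATE_GB_PER_HR.items():
        hours = backup_hours(DB_SIZE_GB, rate)
        verdict = "fits" if hours <= WINDOW_HOURS else "exceeds"
        print(f"{label}: {hours:.1f} h for {DB_SIZE_GB} GB "
              f"({verdict} the {WINDOW_HOURS} h window)")

At either rate the 44 GB database backs up in well under the 8-hour window,
which is the point .1 is making: the constraint is hardware capacity, not Rdb.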


>> Please provide me (as extra assurance) with the logical and physical
>> limits in Rdb/VMS (e.g. #tables, #rows/table, #fields/table, maximum size of
>> storage areas, etcetera). With these figures I hope to prove that the database
>> size is comfortably within Rdb/VMS's current limitations. Note 1124 asks the same
>> question but was never answered.

Rdb does not have programmed limits. All the practical limits are
generous for almost all applications. What did you need to know
that isn't in the SPD?

>> Thanks for your help,
>> Henny Olthof

1216.2. "Database size is a function of many things" by NOVA::BERENSON (Database Architecture, Standards, and Strategy) Mon Dec 28 1992 17:30 (38 lines)
Rdb technically can support databases much larger than 50GB.  The real
question is how "gracefully" it can do so.  The answer to that question
is highly dependent on the details of the application environment.  For
example, a database with lots of read-only storage areas can
realistically be very, very large (100s of GB) today.  A database with
lots of allowable downtime can also be very, very large.  Beyond that, the
limits depend on things like how long it takes to build an
index on the largest table in the database, and whether the customer can
afford to take that table off-line for that period of time.
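
As an illustration of that last point, here is a minimal sketch of the kind
of feasibility check involved. The index build rate is a placeholder you
would have to measure on your own configuration; it is not a figure from
this note.

    # Hypothetical feasibility check: can the largest table's index be
    # rebuilt inside the allowable downtime window?  build_rate_gb_per_hr
    # must be measured on the target system; it is illustrative only.
    def index_rebuild_fits(table_size_gb, build_rate_gb_per_hr, window_hours):
        """True if an index build over table_size_gb fits in window_hours."""
        return (table_size_gb / build_rate_gb_per_hr) <= window_hours

    # Example: a 10 GB table, an assumed 2 GB/hour build rate, a weekend window.
    print(index_rebuild_fits(10, 2, 48))   # -> True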

50GB is a current GUIDELINE for what Rdb V4.1 and V4.2 can support.  For
environments with extremely high availability requirements and a very dynamic
database structure (i.e., daily/weekly DDL changes), it may be more reasonable
to limit a single database to the 15-25GB range (*).  For a very static
database structure, or with substantial down-time windows (i.e., a weekend a
month), it's reasonable to have 100+ GB.

The strategy is to dramatically increase the "gracefully" supported number
over the next couple of years.  The first major increase is about a year
away.

Hal

(*) There are techniques available for handling larger databases with
extremely high availability using current products.  For example, if the
application uses RTR and its shadow server capability, then it is possible
to have an index definition strategy in which you fail the primary,
causing the secondary to take over processing.  An index is then defined
on the primary copy of the database.  The primary is brought back on-line
and allowed to catch up with the transactions it has missed.  The secondary is
then failed and the same index defined.  When index definition is
complete, the secondary is returned to service and the backlog of transactions
is applied to its copy of the database.  One key drawback is that during
the index definition period there is no shadow server available to take
over if the system processing transactions fails!  So, you'd still want
to use this technique during the lowest-volume,
least-painful-in-case-of-failure periods.
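
The sequencing above is easy to get wrong, so here is a minimal orchestration
sketch of the same rolling procedure. The fail_over, define_index and
resynchronize helpers are hypothetical placeholders for whatever RTR and Rdb
operations your site actually uses; they are not real APIs.

    # Hypothetical outline of the rolling index-definition procedure
    # described above.  The three callables are site-specific placeholders.
    def rolling_index_definition(primary, secondary, index_ddl,
                                 fail_over, define_index, resynchronize):
        # 1. Fail the primary so the RTR shadow (secondary) takes over.
        fail_over(from_node=primary, to_node=secondary)
        # 2. Define the index on the now-idle primary copy.
        define_index(primary, index_ddl)
        # 3. Bring the primary back and let it catch up on missed transactions.
        resynchronize(primary)
        # 4. Repeat with the roles reversed.  NOTE: while either copy is
        #    off-line there is no shadow server to take over on failure,
        #    so schedule this in a low-volume period.
        fail_over(from_node=secondary, to_node=primary)
        define_index(secondary, index_ddl)
        resynchronize(secondary)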

1216.3. "Thanks" by IJSAPL::OLTHOF (Henny Olthof @UTO 838-2021) Wed Dec 30 1992 19:57 (2 lines)
    Thanks for two very helpful answers,
    Henny