T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1892.1 | And any tools / scripts out there? | USPS::FPRUSS | Frank Pruss, 202-232-7347 | Thu Feb 20 1997 17:02 | 4 |
| And do we have any tools to automate creation and sizing of 1000s of LSM
virtual units, plexes and DRDs?
FJP
|
1892.2 | And what about SAP? | USPS::FPRUSS | Frank Pruss, 202-232-7347 | Thu Feb 20 1997 17:38 | 3 |
| Just talked to a benchmark center guy & he sez SAP uses 9000 TABLES.
Hmmm...
|
1892.3 | some comments | ALFAM7::GOSEJACOB | | Fri Feb 21 1997 04:43 | 42 |
| re .0
>He is going through a worksheet and some Implementation guides and is
>coming up with about 1 Tb and over 1000 unique tables.
I have no idea what the calculations in that sizing guide look like.
One of the questions to ask is how the sizes add up to 1 TB; e.g. is
this something like: you'll have these 1000 tables and they'll all be
around 1 GB if you have 100 000 customers. So what I'm driving at: the
calculations may become inaccurate the bigger the application gets.
Keep in mind: this is just wild speculation on my side.
>Now, from what I hear, each Oracle table will get at least one DRD (the
>maximum size of DRD that Oracle use being 14 Gb).
Where did you get that from? I mean the table/drd mapping. Two comments
here:
1) Yes, there are limits within Oracle on the maximum size of a
datafile. I can't remember the exact formula, but the max datafile
size depends on the Oracle block size, and the absolute maximum of
14 GB sounds about right.
2) Oracle tables are allocated in so-called extents. One table can have
several extents, up to a limit. All extents of one table must be
allocated in one tablespace (this limitation exists in Oracle7). A
tablespace can hold any number of tables. A tablespace comprises 1
or more datafiles of any kind (raw, UFS, AdvFS) and size.
So there may be a recommendation to put each table in a separate
tablespace for performance reasons, but there is no strict rule to
that, and it's also a pretty unusual approach with any larger number
of tables.
At the other extreme, you may decide to throw all
your DRDs into one single tablespace and allocate extents for all
1000 tables there.
So if you have to build a 1 TB database you only need 74 drds (74 *
14GB > 1 TB). You may want to use more for better control over which
table goes onto which disk.
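The arithmetic above is easy to check. A quick sketch (the 14 GB per-DRD ceiling and the 1 TB target are taken from the discussion above; everything else is just arithmetic):

```python
import math

TB_GB = 1024       # 1 TB expressed in GB
MAX_DRD_GB = 14    # max DRD size, per the Oracle datafile limit discussed above

# Minimum number of DRDs needed to cover the whole database
drds_needed = math.ceil(TB_GB / MAX_DRD_GB)
print(drds_needed)                          # 74
print(drds_needed * MAX_DRD_GB)             # 1036 GB, i.e. > 1 TB
```

As noted above, you would likely use more than this minimum so you can control which table lands on which disk.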
Martin
|
1892.4 | perl it! | LEXSS1::GINGER | Ron Ginger | Fri Feb 21 1997 09:31 | 16 |
| As to tools for building large configurations, I have written a couple
of Perl scripts that do the HSZ config and the LSM config. I create a
table of the basic units, and these scripts issue all the needed
commands.
I did this as part of a disaster recovery plan. At a disaster site it
is unlikely you will have an identical hardware config to your home
site, and sitting at a disaster site dragging around a mouse on a GUI
did not strike me as an efficient use of the allowed time. So all we
need to do is look over our hardware, rewrite a couple of control
tables and run the scripts. I can rebuild our 16 HSZs and all LSM
structures in 5-10 minutes.
Sorry, but the scripts are not generic, they have many assumptions
built in that match our site, so they would be of little use to anyone
else. But you should be able to do the same thing for any site.
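The table-driven approach described above can be sketched roughly as follows. This is a hedged illustration, not the actual scripts: the control-table fields, unit names, and emitted command strings are all made-up placeholders, not real HSZ CLI or LSM syntax (a real site would substitute its own commands and conventions):

```python
# Sketch of a table-driven config generator, in the spirit of the Perl
# scripts described above. The emitted "commands" are placeholders only.

# Control table: one entry per storage unit. At a disaster site you would
# rewrite this table to match whatever hardware is actually on the floor.
UNITS = [
    {"name": "d100", "disks": ["dsk10", "dsk11"], "size_gb": 14},
    {"name": "d101", "disks": ["dsk12", "dsk13"], "size_gb": 14},
]

def gen_commands(units):
    """Expand the control table into a flat list of config commands."""
    cmds = []
    for u in units:
        disks = " ".join(u["disks"])
        cmds.append(f"# unit {u['name']}: {u['size_gb']} GB on {disks}")
        cmds.append(f"make-unit {u['name']} {disks}")                 # placeholder
        cmds.append(f"make-volume vol_{u['name']} {u['size_gb']}g")   # placeholder
    return cmds

for cmd in gen_commands(UNITS):
    print(cmd)
```

The point is the shape, not the syntax: all site knowledge lives in the control table, so regenerating a full config after a hardware change is an edit-and-rerun, not a GUI session.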
|