Title:                   DBSTARS Conference
Moderator:               BROKE::BASTINE
Created:                 Wed Feb 02 1994
Last Modified:           Thu Jun 05 1997
Last Successful Update:  Fri Jun 06 1997
Number of topics:        791
Total number of notes:   1521
777.0. "Creating Multifile target dbs" by BROKE::BITHER () Wed Apr 30 1997 11:55
<<< NOMAHS::DISK$NOMAHS1:[NOTES$LIBRARY]DEC_DATA_DISTRIBUTOR.NOTE;1 >>>
-< The Replication Option for Rdb >-
================================================================================
Note 284.0 use ddal$shared_write when target is multifile? 1 reply
BROKE::BITHER 36 lines 29-APR-1997 13:12
--------------------------------------------------------------------------------
Hi,
I wanted clarification on how to perform replication update transfers
to multifile target databases where the target tables need to be in
different storage areas. I've come up with the following steps. Could
someone please comment on whether this will work and, if not, whether
there is an alternative?
Thanks, Diane
--------------------------------
Replication Option v7.
1. Create the multifile database and the storage map definitions.
2. Create the target tables.
3. Add the column DDAL$DBKEY to each target table.
4. Create the transfer, specifying "TO EXISTING" and the target filename.
5. Define the shared-write logical:
   def ddal$shared_write yes/no OR
   def ddal$shared_write_transfername yes/no
6. Create a prologue file to add indexes and triggers (if desired) after
   the initial data load (a sketch follows).
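As I understand it, the prologue file would just be a file of SQL statements.
One that adds an index and a trigger might look something like this (the
EMPLOYEES and JOB_HISTORY tables and the index and trigger names are made up
for illustration):

    -- Index on the target copy of the (made-up) EMPLOYEES table.
    CREATE INDEX EMP_LAST_NAME_IDX ON EMPLOYEES (LAST_NAME);

    -- A trigger that should fire against the target copy.
    CREATE TRIGGER EMP_DELETE_CASCADE
        BEFORE DELETE ON EMPLOYEES
        (DELETE FROM JOB_HISTORY JH
            WHERE JH.EMPLOYEE_ID = EMPLOYEES.EMPLOYEE_ID)
        FOR EACH ROW;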
(This is a separate question, but if they add indexes and triggers with
a prologue file and later need to reinitialize a transfer, will the
reinitialization remove the indexes and triggers before the data transfer
and then re-add them by executing the prologue file? Or are they left
untouched? I was thinking that if the indexes were not dropped, the data
transfer would be significantly slower, so I was hoping they are dropped
in the case of a reinitialization. Otherwise the performance benefit of
creating the indexes after the transfer (via the prologue) would only
occur once, when the replication option is run the very first time.)
Thanks, Diane
================================================================================
Note 284.1 use ddal$shared_write when target is multifile? 1 of 1
BROKE::PROTEAU "Jean-Claude Proteau" 70 lines 30-APR-1997 07:41
-< Hope this helps >-
--------------------------------------------------------------------------------
Hi, Diane,
>I wanted clarification on how to perform replication update transfers
>to multifile target databases where the target tables need to be in
>different storage areas. I've come up with the following. Could
>someone please comment on whether this will work and if not, is there
>an alternative?
>
>Thanks, Diane
>--------------------------------
>Replication Option v7.
>
>1. Create multifile database and storage map definitions
>2. Create target tables.
>3. Add column DDAL$DBKEY to each target table.
>4. Create transfer and make it "TO EXISTING" filename.
>5. define shared write logical:
> def ddal$shared_write yes/no OR
> def ddal$shared_write_transfername yes/no
>6. create prologue file to add indexes and triggers (if desired) after
> initial data load.
I suggest executing the initial transfer in these steps.
Step 1: Create a single file target database, create the transfer "TO EXISTING",
define the environment variable that transfers only the data definitions
but not the data, then execute the transfer. What you end up with is a
single file target database with the necessary ROR tables, and with the
customer tables defined, including the extra DDAL$DBKEY column. Shared
write is unnecessary, unless you want other transfers or local applications
to also insert data into those tables.
Step 2: Export/import the target database, in the process creating storage areas
and mapping tables to the new storage areas (see the sketch after these
steps). I believe I tried a simple case of this, but I'm not 100% sure
there isn't something else that will come up to bite you. Still, this
seems like the best way to go.
Step 3: Create triggers and any indexes needed for placement (PLACEMENT VIA
INDEX). Indexes on non-hashed tables might be better added after the data
are transferred, to save on execution time.
Step 4: Delete the no-data environment variable, reinitialize the transfer, and
redo the transfer. Because you reinitialized it, this will do another
initial transfer as opposed to a replication update. That means the data
will be transferred this time. Only now it will end up where you want it.
I suggest this be prototyped by the customer using a small database to prove
that the concept works. I didn't get anybody to try this for me during field
test.
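To make steps 2 and 3 concrete, the export/import pass might look something
like this in interactive SQL. All the table, index, area, and map names are
invented, and the exact IMPORT options should be checked against the SQL
Reference Manual for your Rdb version:

    -- Unload the single-file target database into an interchange file.
    EXPORT DATABASE FILENAME TARGET_DB INTO TARGET_DB.RBR;

    -- Rebuild it as a multifile database, creating the new storage areas.
    IMPORT DATABASE FROM TARGET_DB.RBR FILENAME TARGET_DB
        CREATE STORAGE AREA EMP_AREA FILENAME EMP_AREA
        CREATE STORAGE AREA SALES_AREA FILENAME SALES_AREA;

    -- Attach to the rebuilt database (the IMPORT may leave you attached
    -- already, in which case this is unnecessary).
    ATTACH 'FILENAME TARGET_DB';

    -- The tables are still empty (definitions only), so storage maps can
    -- now be created to place each table in its intended area.
    CREATE STORAGE MAP EMP_MAP FOR EMPLOYEES STORE IN EMP_AREA;

    -- For hashed placement (step 3), create the hashed index first and
    -- then name it in the storage map's PLACEMENT VIA INDEX clause.
    CREATE UNIQUE INDEX SALES_ID_IDX ON SALES (SALES_ID)
        TYPE IS HASHED STORE IN SALES_AREA;
    CREATE STORAGE MAP SALES_MAP FOR SALES
        STORE IN SALES_AREA
        PLACEMENT VIA INDEX SALES_ID_IDX;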
>(This is a separate question, but, if they add indexes and triggers with
>a prologue file and need to reinitialize a transfer, will the reinitialization
>remove the indexes and triggers before the data transfer and then readd them
>by executing the prologue file? Or are they left untouched?
ROR does not remove any indexes or triggers that the customer has added.
The customer would have to do that in the prologue file.
>I was thinking
>that if the indexes were not dropped, the data transfer would be
>significantly slower so was hoping that they are dropped in the case
>of a reinitialization. Otherwise the performance enhancement from
>creating the indexes after the transfer (prologue) would only occur once -
>when the replication option is run the very first time.)
I think I answered this above. Hashed indexes with PLACEMENT VIA are the
problematic ones. You need those indexes in place in order to store the data
where you want it. If the indexes don't affect where the data are stored,
you can drop the indexes yourself in a prologue and add them back in an
epilogue.
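For instance, keeping the made-up names from the earlier sketches, the
prologue could drop a non-placement index and an epilogue could rebuild it
after the load. (On the very first run the index does not exist yet, so the
DROP would need to be handled accordingly.)

    -- Prologue file: runs before the transfer, so the load does not
    -- have to maintain the index row by row.
    DROP INDEX EMP_LAST_NAME_IDX;

    -- Epilogue file: runs after the transfer, rebuilding the index in
    -- a single pass over the loaded data.
    CREATE INDEX EMP_LAST_NAME_IDX ON EMPLOYEES (LAST_NAME);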