| Peter,
Re: 249.0:
> A customer using V6.0 does 30 transfers a night. The last one does a
> purge. One night the last didn't run, so no purge. When next run it
> fell over because the .RUJ grew too big. They moved the .RUJ to a disk
> with more space, but it still failed. RDB$SYSTEM is now around 3.3
> million blocks!
>
> How can they get around this? Does it need the checkpointing feature,
> and will the restrictions on checkpointing prevent it from working? Are
> these restrictions lifted in V7?
By "get around this" I take it you were asking how they can extricate themselves
from the immediate problem of the .RUJ file growing too large. From
your reply, below, it appears they have found their own solution: a big disk.
Perhaps what we need to do in the Replication Option is to alter the manner
in which the purging is done. Right now it is done in a single transaction.
Maybe we need a customer-definable limit on the number of rows to be deleted
in a single transaction. I've never looked into doing something like this.
It would need support from Rdb to be able to do something such as
DELETE FROM RDB$CHANGES WHERE ... LIMIT TO 1000.
It's the LIMIT TO I've never tried with a DELETE. If that works, we could
change Replication Option behavior during purging. I'll put this on our list
of potential product enhancements for some future version.
Re: 249.1:
> They cleared the problem by finding an empty disk for the .RUJ and
> deleting all the rows of rdb$changes.
I worry when you say "by deleting all the rows of rdb$changes". Was the
deletion done by the copy process or did the customer do it using interactive SQL?
We never delete ALL the rows from RDB$CHANGES. Even if all of them have been
processed, we always leave the last transaction in place. We do that so that
we can perform a sanity check the next time the transfer is reexecuted. We
expect to find that last transaction in the RDB$CHANGES table, even though
we skip over it. If that transaction is not there, it is assumed that the
night operator removed the source database and mistakenly replaced it with
a different copy (an earlier backup copy, for example).
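(For illustration only, and with a hypothetical column name since I am not
quoting our actual schema here, the check amounts to something like

    SELECT COUNT(*) FROM RDB$CHANGES
        WHERE RDB$TRANSACTION_ID = :last_processed_tsn;

where a count of zero makes us assume the source database was swapped for a
different copy.)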
- Claude
|
| >It would need support from Rdb to be able to do something such as
> DELETE FROM RDB$CHANGES WHERE ... LIMIT TO 1000.
>It's the LIMIT TO I've never tried with a DELETE. If that works, we could
>change Replication Option behavior during purging. I'll put this on our list
>of potential product enhancements for some future version.
LIMIT TO does not seem to work with DELETE. However, it should be
possible to achieve the effect using a compound statement with a FOR
loop.
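Something along these lines, untested and with the compound-statement
syntax from memory, keeping your WHERE clause as a placeholder:

    purge_block:
    BEGIN
        DECLARE :deleted INTEGER DEFAULT 0;
        FOR :chg AS EACH ROW OF CURSOR cur FOR
            SELECT * FROM RDB$CHANGES WHERE ...
        DO
            DELETE FROM RDB$CHANGES WHERE CURRENT OF cur;
            SET :deleted = :deleted + 1;
            IF :deleted >= 1000 THEN
                LEAVE purge_block;
            END IF;
        END FOR;
    END;

The caller would commit and re-execute the block until no rows remain,
which keeps each purge transaction (and hence the .RUJ) bounded.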
>I worry when you say "by deleting all the rows of rdb$changes". Was the
>deletion done by the copy process or did the customer do it using interactive SQL?
Interactive SQL. I didn't recommend it, nor did they ask me before
they did it. I would have suggested a dummy transfer with a purge.
Peter
|