T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---
3027.1 | | STAR::TSPEER | | Mon May 05 1997 08:57 | 45 |
|
> Enable BI-journaling on their Databases before starting their weekly/monthly
> update. If the job or system crashes before the job is finished they want to
> "Roll back" the files to the contents before the job started. Today they have to
> restore the files from backup before restarting the job.
>
> Q1. Will this work and could anyone estimate performance degradation/overhead?
>
This does sound like a BI-oriented scenario, but a few
cautions: Enabling BI will incur roughly the same performance
overhead that enabling AI will; that is, at least an extra IO for each
data file IO, plus extra overhead in opening and closing the
journaled data files. There may be space considerations as
well, since the BI journal(s) can grow quite rapidly depending
on the application (both AI and BI journal whole RMS buckets, not
individual records). All of this constant overhead pays for what one
expects to be the rare occurrence of a job or system failure. Also,
BI is targeted at application scenarios which may require a partial rollback
to some intermediate point between the beginning of the job
and the point of failure. From your description it sounds as if
your customer will always require a rollback to square one,
which BI certainly can handle but which may not be its most
efficient use. Are intermediate points clearly defined in the application?
If not, and given the performance penalty, they may be better off
with the backup strategy currently in place. (Some other
strategies to consider: 1) AI: a rollforward to a predefined
time/point can be the functional equivalent of a BI rollback to the
same point; but the performance overhead of AI is equivalent,
as I mentioned. 2) Application-managed checkpoints, so that
the reprocessing does not need to start at the beginning).
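Strategy 2, application-managed checkpoints, can be sketched in a few lines. The following Python fragment is only an illustration of the pattern (the file name, interval, and the process callback are all hypothetical; a real batch job here would of course be RMS-based):

```python
import os

CHECKPOINT = "job.ckpt"      # hypothetical checkpoint file name
CKPT_INTERVAL = 1000         # records processed between checkpoints

def load_checkpoint():
    """Return how many records were already processed, or 0 on a fresh start."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return int(f.read().strip())
    return 0

def save_checkpoint(n):
    # Write to a temporary file and rename it so the checkpoint
    # is never left half-written by a crash.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(n))
    os.replace(tmp, CHECKPOINT)

def run_job(records, process):
    done = load_checkpoint()
    for i, rec in enumerate(records):
        if i < done:
            continue                 # already processed before the failure
        process(rec)
        if (i + 1) % CKPT_INTERVAL == 0:
            save_checkpoint(i + 1)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)        # clean finish: next run starts from scratch
```

On a failure the checkpoint file survives, so a rerun resumes after the last committed batch instead of going back to square one.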
> If this is possible(as it seems to be) they would like to talk to someone who's
> doing this today before buying the product. Unfortunately I don't know anyone
> using RMS journaling to do this.
>
> Q2. Any hints about references or contacts?
Users of BI are few and far between (AI and RU are the most
popular journaling flavors). I'll see if I can locate
some customer names and send them to you offline.
Tom Speer
|
3027.2 | Consider converting as a way to provide/use backups. | EPS::VANDENHEUVEL | Hein | Mon May 05 1997 11:02 | 49 |
| > If the job or system crashes before the job is finished
That is exactly what BI can protect against for 'normal' crashes.
BI needs basic file integrity to be in place in order to undo the changes.
In that sense they'll still need to perform the backup.
Are these job / system crashes so frequent they need to worry about this?
Should they not address that?
> Q1. Will this work and could anyone estimate performance
> degradation/overhead?
Tom has the right answer there, of course.
> Q2. Any hints about references or contacts?
No, but some other hints...
o First and foremost, be sure the files are reasonably tuned to run those
batch jobs against. It has happened often enough that a batch job can
be improved by a factor of 3 or even 10 (from 6 hours down to 2 hours or
even half an hour). Perhaps include a pre-SORT in a specific order?
o Instead of making a backup, that is, using the backup time, consider
CONVERTing the file into a shape optimally tuned for the most
significant batch jobs. This may include dropping a (few) alternate key(s),
bumping the bucket size to 63, adding or removing compression, placing it
on a faster disk (a stripe set with write-back caching enabled?) or even,
if they can afford it, creating a big RAMdisk and converting to that.
Next run the job and then convert back to a fresh version.
While you run the job, you can back up the original file.
The end result will be a freshly converted file!
(check out my little backup tool in topic 5 to streamline converting)
o If jobs really run for hours, perhaps they should be re-startable
and print out, or otherwise stash away, an occasional checkpoint.
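The convert-run-convert cycle above can be sketched as a workflow. This Python fragment is only an illustration of the ordering (the file copies stand in for the CONVERT operations, and all names are hypothetical); the point is that the backup of the original can proceed in parallel with the batch job, and the job's output becomes the new, freshly built master:

```python
import shutil
import threading

def run_with_fresh_copy(master, work, backup, job):
    # 1. Build a fast working copy (stands in for CONVERTing to a tuned shape).
    shutil.copy(master, work)
    # 2. Back up the original in parallel while the batch job runs on the copy.
    bkp = threading.Thread(target=shutil.copy, args=(master, backup))
    bkp.start()
    job(work)                  # the batch job updates only the working copy
    bkp.join()
    # 3. "Convert back": the freshly written file replaces the old master.
    shutil.move(work, master)
```

If the job crashes mid-run, the untouched master and the parallel backup are both still available, which is the same guarantee this CONVERT-based scheme gives without a BI rollback.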
hth,
Hein.
|
3027.3 | Tools = INTERNAL USE ONLY ??? | OSLAGE::AGE_P | Aage Ronning, Oslo, Norway, (DTN 872-8464) | Tue May 06 1997 05:47 | 23 |
| Hi, thank you for the informative answers and useful hints!
First of all, they also plan to use the product for AI journaling.
>> Are these job / system crashes so frequent they need to worry about this?
>> Should they not address that?
It's not system crashes, but application errors like data overrun in fields etc.
They tell me it varies from 2 times a week to every second month.
.-1 You mention the backup tool, but note 5 says:
>>These tools are DIGITAL INTERNAL USE ONLY.
>>Over time, the RMS Engineering group intends to package all tools.
If they buy the product, could the tool be used by the customer?
The customer is very satisfied with the answers so far, but would like to
discuss use and experience with the product with a customer or possibly a DEC
"RMS-consultant". Any suggestions appreciated.
Åge
|
3027.4 | | STAR::EWOODS | | Tue May 06 1997 12:47 | 1 |
| Hein -- Norway, no less!
|
3027.5 | My tools in 5.* are public as far as I am concerned | EPS::VANDENHEUVEL | Hein | Thu May 08 1997 01:35 | 36 |
|
> First of all, they also plan to use the product for AI journaling.
Makes more sense.
> It's not system crashes, but application errors like data overrun in fields etc.
> They tell me it varies from 2 times a week to every second month.
They should really look for RU journaling then.
This will allow them to 'bundle' for example 20 - 200 updates
together with an update to a 'how far did I get' file and
commit the whole lot. If the application fails in a future lot,
none of those changes will remain visible, but at least a known
amount of past work will have been committed.
Next they should really, really look at proper error handling
such that a single record failure is trapped (establishing
handlers as needed) and logged to an exception file and no
longer blows up a whole run.
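The RU-style pattern above, bundled updates committed together with a 'how far did I get' marker plus per-record exception trapping, looks roughly like this. This is a hedged Python sketch only (the BATCH size, the progress dictionary, and the exception list are hypothetical stand-ins; with real RU journaling the batch and the marker would be committed inside one recovery unit):

```python
BATCH = 50   # updates bundled per commit (hypothetical recovery-unit size)

def run_batches(records, apply_update, progress, exceptions):
    done = progress.get("done", 0)
    pending = 0
    for i, rec in enumerate(records):
        if i < done:
            continue                       # committed in an earlier run
        try:
            apply_update(rec)
            pending += 1
        except ValueError as err:          # e.g. data overrun in a field
            # Trap the bad record instead of blowing up the whole run.
            exceptions.append((i, rec, str(err)))
        if pending >= BATCH or i == len(records) - 1:
            # Commit point: the updates and the marker become durable together.
            progress["done"] = i + 1
            pending = 0
    return progress, exceptions
```

A rerun after a failure picks up at progress["done"], so only the uncommitted tail is redone, and the exception file can be inspected and reprocessed separately.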
>.-1 You mention the backup tool, but note 5 says:
>
>>>These tools are DIGITAL INTERNAL USE ONLY.
>>>Over time, the RMS Engineering group intends to package all tools.
Yeah well, you are right, I indeed wrote that. But that was then,
and this is now. Feel free to use any tool I wrote anywhere.
I stuck mine on the FREEWARE CD anyway, so go for it!
Of course I cannot speak for the other submitters but suspect
they feel much the same about it. VMS needs all the support
it can get!
Cheers,
Hein.
|