NetWorker Recover limitation analysis
Steven D. Yee
Last Modified: 5/22/97
PROBLEM:
Recover has problems browsing large save sets
HISTORICAL:
This problem was first detected during system test, using a one-million-file
file tree, during both recover and Save Set Consolidation testing.
See GNATS PR 566 for further information.
SYMPTOMS:
1. During recover's marking/browsing phase (add), the error:
recover: Not enough space
would appear and the operation would abort.
2. Save Set Consolidation (SSC) uses recover to determine what to
consolidate. You might see messages like:
savegrp: found volume name 'Unknown' -- aborting.
savegrp: group consolidate.group aborted.
savegrp: killing pid xxxx
nsrd: savegroup alert: consolidate.group aborted
in /nsr/logs/daemon.log, after which the consolidation group will show
as "not finished".
NOTE: for SSC, you might not see any error message in the GUI
messages frame other than the abort.
REASON:
While browsing the indices, the recover process builds internal
structures and eventually exhausts its data-segment (datasize) limit.
This is a client-side limitation, and does not require changes on remote
servers.
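You can confirm the client's current limit from the shell before running
recover. A minimal check, assuming a csh-style shell (under sh/ksh the
rough equivalent is ulimit -d):

    # csh/tcsh: show the current per-process data-segment limit
    limit datasize
    # or list all per-process limits
    limit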
WORKAROUND:
There is a local (per-shell) fix and a global (system-wide) fix:
1. Local:
Before starting the recover process, run the limit command and set the
datasize to unlimited, then run recover as normal.
This change affects only the shell from which recover is spawned and its
children; all other open windows/shells keep the original default
datasize. An example is sketched below.
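For example, in a csh-style shell (the sh/ksh equivalent is
ulimit -d unlimited; the recover invocation is illustrative):

    # raise the data-segment limit for this shell and its children only
    limit datasize unlimited
    # verify the change
    limit datasize
    # then run recover as normal
    recover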
2. Global:
Increase the dfldsiz parameter (the default data size) in the kernel
configuration file, but do not make it larger than maxdsiz (the maximum
data size).
Rebuild the kernel and reboot the machine.
Every process started thereafter will have the larger default datasize.
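For illustration only, the change might look like the following on a
client whose kernel configuration file takes dfldsiz/maxdsiz entries (as
on Digital UNIX); the file location, rebuild command, and values here are
assumptions, so check your OS documentation:

    # In the kernel configuration file (e.g. /sys/conf/HOSTNAME):
    dfldsiz         536870912       # default data size: 512 MB (example value)
    maxdsiz         1073741824      # maximum data size: 1 GB (example value)
    # Then rebuild the kernel (e.g. doconfig -c HOSTNAME on Digital UNIX)
    # and reboot.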
POSSIBLE PROBLEMS:
If you see the following message:
Unable to obtain requested swap space
or if swapon -s shows 100% utilization while recover is running, you
probably need to add more physical swap to the machine.
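To keep an eye on swap while recover runs, a simple loop such as the
following can be left in another window (a sketch in csh, matching the
examples above; the output format of swapon -s varies by platform):

    # report swap utilization once a minute
    while (1)
        swapon -s
        sleep 60
    end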
METRICS:
Browsing the one-million-file filesystem appeared to take ~28000 pages of
virtual memory to do the add, on a system where 47064 pages == 367 MB of
swap (which implies 8 KB pages, putting those ~28000 pages at roughly
219 MB), and it marked 1001002 files (including directories).