
Conference decwet::networker

Title:NetWorker
Notice:kits - 12-14, problem reporting - 41.*, basics 1-100
Moderator:DECWET::RANDALL.com::lenox
Created:Thu Oct 10 1996
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:750
Total number of notes:3361

51.0. "UNIX install/setup tips" by DECWET::ONO (The Wrong Stuff) Mon Oct 21 1996 14:50

T.R   Title                                        User         Personal Name                           Date                   Lines
51.1                                               DECWET::ONO  Software doesn't break-it comes broken  Fri Dec 06 1996 16:21  4
51.2                                               DECWET::ONO  Software doesn't break-it comes broken  Mon Dec 09 1996 08:48  7
51.3  Recover Memory Limit, possible work around.  DECWET::SDY  Support Novice...seeking enlightenment  Fri May 30 1997 14:50  87
                     NetWorker Recover limitation analysis
                                 Steven D. Yee
                            Last Modified: 5/22/97

   
   
   PROBLEM:
   
      Recover runs out of memory while browsing (marking) very large savesets.

   HISTORICAL:
      
      This problem was first detected during system test using a file tree of
      one million files, during both recover and Save Set Consolidation
      testing. See GNATS PR 566 for further information.

   SYMPTOMS:

      1. During recover's marking/browsing phase (the add command), the error:

            recover: Not enough space

         would appear and the operation would abort.

      2. Save Set Consolidation (SSC) uses recover to determine what to
         consolidate. You might see messages like:

            savegrp: found volume name 'Unknown' -- aborting.
            savegrp: group consolidate.group aborted.
            savegrp: killing pid xxxx
            nsrd: savegroup alert: consolidate.group aborted

         in the /nsr/logs/daemon.log, after which the consolidation group will
         show as "not finished".

         NOTE: for SSC, you might not see any error messages in the GUI
         messages frame other than an abort.

   REASON:
      
      While browsing the indexes, the recover process builds internal
      structures in memory and eventually runs out of space. This is a
      client-side limitation; no changes are required on remote servers.

   WORKAROUND:

      There is a local change and a global change:

      1. Local:

         Before starting the recover process, run the limit command and set
         the datasize to unlimited. Then run recover as normal.

         This change affects only the shell from which recover is spawned and
         its children; all other open windows/shells keep the original
         default datasize.
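         In csh/tcsh (where limit is a builtin) the local workaround looks
         like the sketch below; the sh/ksh equivalent via ulimit -d is an
         assumption about the local shell, not taken from this note:

```shell
# csh/tcsh: raise the data-segment limit for this shell and its children only
#   limit datasize unlimited
#   recover
# sh/ksh equivalent (assumption: -d is the data-segment limit on this system):
ulimit -d unlimited 2>/dev/null || true
ulimit -d    # confirm the new limit before starting recover
```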

      2. Global:
   
         Change the dfldsiz parameter (the default data size) in the kernel
         configuration file. Don't make it larger than maxdsiz (the maximum
         data size).

         Rebuild the kernel and reboot the machine.

         Now every process that starts will have the larger default datasize.
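         A sketch of the procedure on Digital UNIX follows; the config file
         path, hostname, and size values are illustrative assumptions, not
         taken from this note:

```
# Illustrative only -- paths, hostname, and sizes are assumptions.
# 1. Edit the kernel configuration file (e.g. /sys/conf/MYHOST) and set
#    dfldsiz, keeping it no larger than maxdsiz:
#       dfldsiz = 0x10000000    # 256 MB default data segment (example value)
#       maxdsiz = 0x40000000    # 1 GB maximum data segment (example value)
# 2. Rebuild and install the kernel, then reboot:
#       doconfig -c MYHOST
#       cp /sys/MYHOST/vmunix /vmunix
#       reboot
```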


   POSSIBLE PROBLEMS:
      
      If you see the following message:

         Unable to obtain requested swap space

      or if swapon -s shows 100% utilization while recover is running, you
      probably need to add more physical swap to the machine.


   METRICS:

      Browsing the one-million-file filesystem appeared to take ~28000 pages
      of virtual memory to do the add (on a system where 47064 pages == 367Mb
      of swap), and it marked 1001002 files (including directories).
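      The figures above imply an 8 KB page size (47064 pages * 8192 bytes is
      roughly 367 MB, consistent with Alpha's page size), so the add itself
      consumed roughly:

```shell
# Assuming 8 KB pages, inferred from "47064 pages == 367Mb of swap" above
echo $(( 28000 * 8192 / 1048576 ))   # prints 218 (MB of virtual memory)
```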