
Conference decwet::networker

Title:NetWorker
Notice:kits - 12-14, problem reporting - 41.*, basics 1-100
Moderator:DECWET::RANDALL.com::lenox
Created:Thu Oct 10 1996
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:750
Total number of notes:3361

377.0. "3.2 NSR and 4gb filesystem to save" by KERNEL::NICHOLSONA (one suite too many can cause truth decay) Mon Feb 03 1997 11:10

    
    
    I have a customer who has a very large database file; to be precise,
    over 4GB. This is Digital UNIX version 3.2C and NSR 3.2.
    
    He is using the compressasm directive on the client. He finds that
    once the save gets to about 600MB, the throughput to the server
    drops to 40KB a second. The client's disks can then be seen doing a
    lot of accesses, but nothing goes over to the server, and the
    saveset does not appear to complete.
    
    If he does not use compression it works fine, other than creating
    two savesets.
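
    (For context, the compressasm directive on the client normally lives
    in a .nsr directive file at the top of the path being saved; a
    minimal sketch, with /dbdir standing in as a placeholder for the
    customer's real path:)

        << /dbdir >>
        +compressasm: *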
    
    On the Legato support page, searching through the technical
    bulletins, I can see it talks about there being a maximum filesize
    of 4GB for a save, and an addendum describes how to break up a
    filesystem into two equal parts ready for the save.
    
    "For NetWare versions of NetWorker prior to 4.0, refer to `Backing up
    Large SaveSets' in Appendix B of the Legato NetWorker Administrator's 
    Guide, NetWare Version, shipped with your software, for instructions on
    how to handle save sets larger than four gigabytes."
    
    Also, I find that among the patches for 3.2 there is a patch
    mentioned (nsrv32a-003) for not being able to save indexes larger
    than 4GB, and I was wondering if this is the same problem.
    
    My questions are:
    
    1) How do I get a filesystem larger than 4GB backed up?
    2) Is it possible to back up a filesystem larger than 4GB in
       compression mode?
    3) Is the patch relevant to this problem?
    4) Is the addendum relevant, and if so, where can I find it?
    
    Many thanks
    Avril
    CSC Unix Group UK

377.1. by DECWET::ONO (Software doesn't break-it comes broken) Mon Feb 03 1997 11:51
>    1) How do I get a filesystem larger than 4GB backed up?

  This should work fine; however, see the reply to 2).

>    2) Is it possible to back up a filesystem larger than 4GB in
>       compression mode?

   It should be possible; however, there may be a compressasm problem.

>    3) Is the patch relevant to this problem?

  No.  The index patch does not relate to this at all.

>    4) Is the addendum relevant, and if so, where can I find it?
    
  The Addenda for the various versions can be found on our ftp
  site - ftp://www.zso.dec.com/pub/networker/documentation

Regards,

Wes
377.2. by DECWET::FARLEE (Insufficient Virtual um...er....) Mon Feb 03 1997 13:58
As you noticed, large savesets get broken up into multiple "chunks"
when they cross the 2GB boundary.  These chunks are handled by NetWorker
and should be pretty much invisible to you, other than the size of each
saveset and the funny-looking <1>, <2>... continuation saveset names.
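
A rough way to picture the naming (a conceptual sketch only, not
NetWorker's actual code; the 2GB limit and /dbdir path are just
illustrative):

    # Conceptual sketch: split a stream larger than a fixed limit into
    # numbered continuation savesets, named like NetWorker's <1>, <2>...
    CHUNK_LIMIT = 2 * 1024**3  # the 2GB boundary described above

    def continuation_names(name, total_bytes, limit=CHUNK_LIMIT):
        piece, remaining = 0, total_bytes
        while remaining > 0:
            size = min(remaining, limit)
            yield (name if piece == 0 else "%s<%d>" % (name, piece)), size
            remaining -= size
            piece += 1

    # A 4.5GB saveset comes out as /dbdir, /dbdir<1>, /dbdir<2>
    for n, sz in continuation_names("/dbdir", 4_500_000_000):
        print(n, sz)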

I have never seen this behavior cause problems for being able to restore
files or filesystems.

Compressasm is always a trade-off between client CPU time and network
bandwidth.  In the case of a database, the files are often filled with
zeroes for any non-allocated space.  ALL of these zeroes will be
compressed out (except for a very minor overhead), so what you're
probably seeing is that the disk/filesystem on the client is frantically
reading zeroes from the disk and sending them to compressasm, which
sends only a very small trickle of data on to the server, having
compressed out the bulk of the zero-fill data.  Thus, the server sees a
very slow/small datastream and it looks as if things are moving very
slowly.  I'd bet, however, that bits are being read from disk at the
same rate in both cases.
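
If you want to see the effect in isolation, here is a minimal sketch
(Python; assuming only that compressasm, like any deflate-style
compressor, collapses long runs of zeroes):

    import zlib

    # 64MB of zeroes, standing in for unallocated database pages
    zeros = b"\x00" * (64 * 1024 * 1024)
    squeezed = zlib.compress(zeros)

    # Prints roughly "67108864 -> 65283": over 1000:1, so the server
    # sees only a trickle while the client reads the disk at full speed.
    print(len(zeros), "->", len(squeezed))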

Kevin