
Conference turris::digital_unix

Title:DIGITAL UNIX (FORMERLY KNOWN AS DEC OSF/1)
Notice:Welcome to the Digital UNIX Conference
Moderator:SMURF::DENHAM
Created:Thu Mar 16 1995
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:10068
Total number of notes:35879

8484.0. "tar file size limitations" by TMCUKA::ROWELL (Paul Rowell @BBP) Thu Jan 16 1997 12:16

T.R      Title                               User             Personal Name                              Date                  Lines
8484.1   -                                   VAXCPU::michaud  Jeff Michaud - ObjectBroker                Thu Jan 16 1997 12:43     9
8484.2   4GB+                                TMCUKA::ROWELL   Paul Rowell @BBP                           Thu Jan 16 1997 13:08     2
8484.3   -                                   VAXCPU::michaud  Jeff Michaud - ObjectBroker                Thu Jan 16 1997 14:19     5
8484.4   40GB+                               TMCUKA::ROWELL   Paul Rowell @BBP                           Mon Jan 20 1997 10:42     3
8484.5   -                                   VAXCPU::michaud  Jeff Michaud - ObjectBroker                Mon Jan 20 1997 10:56    10
8484.6   -                                   SSDEVO::ROLLOW   Dr. File System's Home for Wayward Inodes. Mon Jan 20 1997 11:45    15
8484.7   ?                                   TMCUKA::ROWELL   Paul Rowell @BBP                           Mon Jan 20 1997 13:15    12
8484.8   -                                   NABETH::alan     Dr. File System's Home for Wayward Inodes. Mon Jan 20 1997 13:30    10
8484.9   Release Notes                       TMCUKA::ROWELL   Paul Rowell @BBP                           Mon Jan 20 1997 13:42     6
8484.10  -                                   VAXCPU::michaud  Jeff Michaud - ObjectBroker                Mon Jan 20 1997 14:39    14
8484.11  No errors yet apart from the docs!  TMCUKA::ROWELL   Paul Rowell @BBP                           Mon Jan 20 1997 15:13     8
8484.12  -                                   NABETH::alan     Dr. File System's Home for Wayward Inodes. Mon Jan 20 1997 15:49     4
8484.13  28GB and going strong...            TMCUKA::ROWELL   Paul Rowell @BBP                           Mon Jan 20 1997 17:42     4
8484.14  dd is ok for at least 35GB          TMCUKA::ROWELL   Paul Rowell @BBP                           Tue Jan 21 1997 16:58     4
8484.15  -                                   NABETH::alan     Dr. File System's Home for Wayward Inodes. Tue Jan 21 1997 17:28     6
8484.16  doc fix                             SMURF::KHALL     -                                          Wed Jan 22 1997 14:14    16
8484.17. "answers to AdvFS questions in .10" by DECWET::DADDAMIO (Design Twice, Code Once) Mon Feb 03 1997 14:53
    To answer some of the questions posed in .10:
    
    >Out of curiosity, which type of filesystem is the file being
    >written to?  advfs?  Does advfs support files that big?
    
    Yes, AdvFS will support very large files (theoretical limit 16TB).
    Personally I know of a test that was done on a 500+GB file.
    
    >Doesn't advfs, unless the file is stripped, not support having
    >a single file straddle multiple disk partitions?
    
    There are situations where you could have a file span multiple volumes
    without it being striped. I believe one of these is when you run out
    of space on the volume on which the file resides and there is room on
    another volume. AdvFS handles these situations.
    
    						Jan
8484.18. "Any word if we are fixing this limitation?" by DECWET::MARIER Tue Mar 04 1997 19:03
I'm currently working on a design spec for an enhancement
to AdvFS, and I need to know if I will be able to use tar
to do part of the work.  I need a tape format (other than
vdump/vrestore) which can handle up to 10TB.
8484.19. by VAXCPU::michaud (Jeff Michaud - ObjectBroker) Tue Mar 04 1997 22:08
> I'm currently working on a design spec for an enhancement
> to AdvFS, and I need to know if I will be able to use tar
> to do part of the work.  I need a tape format (other than
> vdump/vrestore) which can handle up to 10TB.

	See .6!

	The quick answer is that if you modify the format of the "tar"
	file, then it's no longer really a "tar" file, and can't be
	read by anyone else's "tar" program.  In which case, why use tar?

	Of course there could be a POSIX working group lurking, or
	some other standards body, thinking about a "tar-next-generation"
	standard .....
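The format constraint behind this objection is easy to see with Python's stdlib tarfile module: in a POSIX ustar header the member size lives in a fixed 12-byte field at offset 124, stored as ASCII octal digits. A minimal sketch (field offsets per the ustar layout; the demo file name is arbitrary):

```python
import io
import tarfile

# Build a one-member tar archive in memory using the stdlib.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    data = b"hello"
    info = tarfile.TarInfo(name="demo.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

header = buf.getvalue()[:512]        # first 512-byte header block
size_field = header[124:136]         # the 12-byte "size" field, octal ASCII
print(size_field)                    # octal digits for a 5-byte file
print(int(size_field.rstrip(b"\x00 "), 8))   # decodes back to 5
```

Because every conforming tar reader decodes those 12 bytes as octal, widening or re-encoding the field is exactly the kind of change that makes the result no longer a tar file.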
8484.20. "clarification about size limits in tar" by RUSURE::KATZ Tue Mar 11 1997 11:24
Just wanted to be sure that you are mentioning 10TB as
a file size, not an archive size. There is no limit to
archive size. The tar/cpio file-size limit is 8GB - 1
(a recent bug fix lets them handle >4GB; a patch is in
the works). The pax format should be extended at some
point (check with Andre Belloti for standards feedback),
and I heard of a System V cpio format extension a few
years back (not sure what its size limit went to).
Letting that go unchallenged lets pax/tar fall behind
in the "tar wars".
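For concreteness, the 8GB - 1 figure falls straight out of the ustar size field: 12 bytes, of which 11 are usable octal digits (the last is a terminator). A quick check:

```python
# Largest value representable in 11 octal digits: eleven 7s.
octal_max = 0o77777777777          # 8**11 - 1
print(octal_max)                   # 8589934591
print(octal_max == 8 * 2**30 - 1)  # True: exactly 8GB - 1
```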
8484.21. "tested vs. theoretical limits" by DECWET::KEGEL (Andy Kegel, DECwest) Wed Mar 12 1997 14:33
When determining the limits of file systems (I don't know about
the archiving tools like dd or tar), the development teams first determine
the theoretical limit, then select a limit to test.  For example, visit
http://www.zk3.dec.com/filesystem/papers/fs-limits-v40.html
for the Platinum (V4.0) limits tested.  The theoretical limits are almost
always higher than the tested limits (NFS V2 is the obvious exception).
The tested limit becomes the officially supported limit.  It is quite
possible (even likely) that higher values will work just fine, but
no guarantees.  The code is written to handle very large values, but
one never knows until one tries, and trying is expen$ive when you're
talking terabytes.

The POSIX-defined commands are often limited by header definitions, etc.,
as noted in earlier comments.  One must work through standards bodies to get
these changed.  A couple of years ago, I proposed defining a new tar 
header that used hex instead of octal for the field contents.  That would 
have bought a few more bits of address - and base-36 would have bought 
even more - without changing the header size.  No one was interested.  I
doubt there will be any action in the standards bodies until other big
vendors start losing business as they move into real 64-bit systems ;-)
Dump is system-specific, and not defined by standards, so it can keep up.
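The arithmetic behind the hex and base-36 proposals: keeping the same 11 usable characters in the size field but raising the digit base buys address bits without touching the header layout. A sketch of the numbers only (not any actual header format):

```python
import math

DIGITS = 11  # usable characters in the 12-byte ustar size field
for base, name in ((8, "octal"), (16, "hex"), (36, "base-36")):
    max_size = base**DIGITS - 1
    print(f"{name:8s} max {max_size} bytes ({DIGITS * math.log2(base):.1f} bits)")
# Octal gives 33 bits (the familiar 8GB - 1); hex stretches the same field
# to 44 bits (16TB - 1); base-36 reaches roughly 57 bits.
```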

By the way, in early 1996, someone called me to report a performance
drop-off in UFS.  They were writing digitized video to files and read
performance suffered after 32Gby.  It turns out that there was a read-ahead
bug that failed beyond 32Gby (a 32-bit computation wrapped and started 
prereading the file from the beginning).  The bug has long since been
fixed, but 40Gby files are here to stay.

	-andy
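The note doesn't show the actual computation that wrapped, but as a purely hypothetical illustration of how a 32-bit wrap can land at 32GB rather than 4GB: an unsigned 32-bit count of 8-byte units overflows at exactly 32GB, sending a read-ahead offset back to the start of the file:

```python
# Hypothetical sketch: an unsigned 32-bit count of 8-byte units.
MASK32 = 0xFFFFFFFF
UNIT = 8                                   # assumed 8-byte units

wrap_offset = (MASK32 + 1) * UNIT          # first byte offset that wraps
print(wrap_offset == 32 * 2**30)           # True: exactly 32GB

# Just past the wrap, the truncated count restarts from zero, so
# read-ahead would fetch from near the beginning of the file:
byte_offset = 32 * 2**30 + 4096
print(((byte_offset // UNIT) & MASK32) * UNIT)   # 4096
```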