T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
8484.1 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Thu Jan 16 1997 12:43 | 9 |
8484.2 | 4GB+ | TMCUKA::ROWELL | Paul Rowell @BBP | Thu Jan 16 1997 13:08 | 2 |
8484.3 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Thu Jan 16 1997 14:19 | 5 |
8484.4 | 40GB+ | TMCUKA::ROWELL | Paul Rowell @BBP | Mon Jan 20 1997 10:42 | 3 |
8484.5 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Mon Jan 20 1997 10:56 | 10 |
8484.6 | | SSDEVO::ROLLOW | Dr. File System's Home for Wayward Inodes. | Mon Jan 20 1997 11:45 | 15 |
8484.7 | ? | TMCUKA::ROWELL | Paul Rowell @BBP | Mon Jan 20 1997 13:15 | 12 |
8484.8 | | NABETH::alan | Dr. File System's Home for Wayward Inodes. | Mon Jan 20 1997 13:30 | 10 |
8484.9 | Release Notes | TMCUKA::ROWELL | Paul Rowell @BBP | Mon Jan 20 1997 13:42 | 6 |
8484.10 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Mon Jan 20 1997 14:39 | 14 |
8484.11 | No errors yet apart from the docs! | TMCUKA::ROWELL | Paul Rowell @BBP | Mon Jan 20 1997 15:13 | 8 |
8484.12 | | NABETH::alan | Dr. File System's Home for Wayward Inodes. | Mon Jan 20 1997 15:49 | 4 |
8484.13 | 28GB and going strong... | TMCUKA::ROWELL | Paul Rowell @BBP | Mon Jan 20 1997 17:42 | 4 |
8484.14 | dd is ok for at least 35GB | TMCUKA::ROWELL | Paul Rowell @BBP | Tue Jan 21 1997 16:58 | 4 |
8484.15 | | NABETH::alan | Dr. File System's Home for Wayward Inodes. | Tue Jan 21 1997 17:28 | 6 |
8484.16 | doc fix | SMURF::KHALL | | Wed Jan 22 1997 14:14 | 16 |
8484.17 | answers to AdvFS questions in .10 | DECWET::DADDAMIO | Design Twice, Code Once | Mon Feb 03 1997 14:53 | 17 |
| To answer some of the questions posed in .10:
>Out of curiosity, which type of filesystem is the file being
>written to? advfs? Does advfs support files that big?
Yes, AdvFS will support very large files (theoretical limit 16TB).
Personally I know of a test that was done on a 500+GB file.
>Doesn't advfs, unless the file is striped, not support having
>a single file straddle multiple disk partitions?
There are situations where you could have a file span multiple volumes
without it being striped. I believe one of these is when you start to run
out of space on the volume on which the file resides and there is room on
another volume. AdvFS handles these situations.
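If you want a quick sanity check of large-file support on your own
fileset, something along these lines should do. The mount point below
is a placeholder, not a real path; off_t is 64 bits on Digital UNIX,
so the 10GB offset survives the arithmetic. This is only a sketch,
not an AdvFS test program:

    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Quick sanity check (illustrative only): seek well past the
     * 4GB/8GB marks in a scratch file and write one byte, creating
     * a sparse file whose logical size needs more than 32 bits.
     * The path is a placeholder for your own AdvFS fileset.
     */
    int main(void)
    {
        const char *path = "/bigfs/bigfile.test";       /* hypothetical mount */
        off_t offset = (off_t)10 * 1024 * 1024 * 1024;  /* 10GB into the file */
        int fd = open(path, O_CREAT | O_WRONLY, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1 ||
            write(fd, "x", 1) != 1) {
            perror("large-file write");
            close(fd);
            return 1;
        }
        printf("wrote 1 byte at offset %lld\n", (long long)offset);
        close(fd);
        return 0;
    }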
Jan
|
8484.18 | Any word if we are fixing this limitation? | DECWET::MARIER | | Tue Mar 04 1997 19:03 | 4 |
| I'm currently working on a design spec for an enhancement
to AdvFS, and I need to know if I will be able to use tar
to do part of the work. I need a tape format (other than
vdump/vrestore) which can handle up to 10TB.
|
8484.19 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Tue Mar 04 1997 22:08 | 14 |
| > I'm currently working on a design spec for an enhancement
> to AdvFS, and I need to know if I will be able to use tar
> to do part of the work. I need a tape format (other than
> vdump/vrestore) which can handle up to 10TB.
See .6!
The quick answer is that if you modify the format of the "tar"
file, then it's no longer really a "tar" file, and can't be
read by anyone else's "tar" program. In which case, why use tar?
Of course there could be a POSIX working group lurking, or
some other standards body, thinking about a "tar-next-generation"
standard .....
|
8484.20 | clarification about size limits in tar | RUSURE::KATZ | | Tue Mar 11 1997 11:24 | 10 |
| Just wanted to be sure that you are mentioning 10TB as
a filesize, not an archive size. There is no limit to
archive size. The tar/cpio file size limit is 8GB - 1
(a recent bug fix lets them handle >4GB, patch in the
works). The pax format should be extended at some
point (check with Andre Belloti for standards feedback)
and I heard of a System V cpio format extension a few years
back (not sure what its size limit went to). Letting
that go unchallenged lets pax/tar fall behind in the
"tar wars".
|
8484.21 | tested vs. theoretical limits | DECWET::KEGEL | Andy Kegel, DECwest | Wed Mar 12 1997 14:33 | 31 |
| When determining the limits of file systems (I don't know about
the archiving tools like dd or tar), the development teams first determine
the theoretical limit, then select a limit to test. For example, visit
http://www.zk3.dec.com/filesystem/papers/fs-limits-v40.html
for the Platinum (V4.0) limits tested. The theoretical limits are almost
always higher than the tested limits (NFS V2 is the obvious exception).
The tested limit becomes the officially supported limit. It is quite
possible (even likely) that higher values will work just fine, but
no guarantees. The code is written to handle very large values, but
one never knows until one tries, and trying is expen$ive when you're
talking terabytes.
The POSIX-defined commands are often limited by header definitions, etc.,
as noted in earlier comments. One must work thru standards bodies to get
these changed. A couple of years ago, I proposed defining a new tar
header that used hex instead of octal for the field contents. That would
have bought a few more bits of address - and base-36 would have bought
even more - without changing the header size. No one was interested. I
doubt there will be any action in the standards bodies until other big
vendors start losing business as they move into real 64-bit systems ;-)
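To put numbers on that: the same 11-character field interpreted in
octal, hex, and base-36 tops out at roughly 8GB, 16TB, and 131PB
respectively. A throwaway program to check the arithmetic
(illustrative only, not part of any tar implementation):

    #include <stdio.h>

    /*
     * Back-of-the-envelope check of the header-field idea above:
     * the ustar size field holds 11 digits, so the largest
     * encodable file size depends only on the digit base.
     */
    int main(void)
    {
        int bases[] = { 8, 16, 36 };   /* octal (today), hex, base-36 */
        int i, d;

        for (i = 0; i < 3; i++) {
            unsigned long long max = 1;

            for (d = 0; d < 11; d++)
                max *= bases[i];
            printf("base %2d: %llu bytes (about %.1f GB)\n",
                   bases[i], max - 1, (max - 1) / 1e9);
        }
        return 0;
    }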
Dump is system-specific, and not defined by standards, so it can keep up.
By the way, in early 1996, someone called me to report a performance
drop-off in UFS. They were writing digitized video to files and read
performance suffered after 32Gby. It turns out that there was a read-ahead
bug that failed beyond 32Gby (a 32-bit computation wrapped and started
prereading the file from the beginning). The bug has long since been
fixed, but 40Gby files are here to stay.
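The wrap itself is the classic 32-bit multiply mistake: the
block-number-times-block-size product gets computed in 32 bits before
being widened. The snippet below is made up to show the effect (names
and numbers are illustrative, not the actual UFS read-ahead code):

    #include <stdio.h>

    /*
     * Made-up illustration of the wrap described above: the
     * block-number * block-size product is computed in 32 bits,
     * so just past 32GB it wraps to a tiny offset and the
     * prefetch effectively restarts at the front of the file.
     */
    int main(void)
    {
        unsigned int blksize  = 8192;                 /* 8KB blocks (assumed) */
        unsigned int next_blk = 4u * 1024 * 1024 + 1; /* first block past 32GB */

        unsigned long long wrong = next_blk * blksize;  /* 32-bit multiply wraps */
        unsigned long long right = (unsigned long long)next_blk * blksize;

        printf("wrapped read-ahead offset: %llu\n", wrong);   /* 8192 */
        printf("correct read-ahead offset: %llu\n", right);   /* 34359746560 */
        return 0;
    }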
-andy
|