T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
325.1 | set dir/ver=2 [*...] | BARAKA::LASTOVICA | Norm Lastovica | Wed Oct 01 1986 13:05 | 1 |
|
|
325.2 | RE: 325.1 | SNICKR::SSMITH | | Wed Oct 01 1986 14:29 | 9 |
| RE: .1
If I'm not mistaken, that will do about the same thing as enabling
quotas. These are project and chip-design disks, and the people
really can't have their space limited. BUT, like the saying goes, "out
of sight, out of mind." When they don't have to worry about it,
they don't think about it. What I'm looking for is something that
will let me FLAG problem areas and allow me to keep on top of the
situation.
|
325.3 | AI to the rescue? | NACHO::CONLIFFE | Boston in 89!! | Wed Oct 01 1986 15:25 | 10 |
| Bob Tycast has an embryonic "expert system" called EXPURGE which
will scan through directory trees and report odd and forgotten files that
would be candidates for deletion/purging/whatever. It produces a command
file with its "recommendations" which can be edited and will then
delete/purge the culprits. I can't remember where the thing is,
but I suggest you look in the EXPERT conference on NOVA. (KP7 will
do the trick).
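Until you track it down, a crude hand-rolled cut at the same idea (the
cutoff date and output file name below are just examples) would be
something like:
    $ DIR/SIZE/DATE/BEFORE=1-JUL-1986/OUTPUT=CANDIDATES.LIS [*...]
and then whittle CANDIDATES.LIS down into a DELETE command file by hand.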
Nigel
|
325.4 | multiple version finder | COMET::ROBERTS | Dwayne Roberts | Wed Oct 01 1986 18:02 | 3 |
|
DIR [*...]*.*;-4
|
325.5 | SPM or tools | IE0002::KPDEV | | Wed Oct 01 1986 19:12 | 3 |
| My understanding was that SPM had a section which would report
disk space allocation/usage. Might also check the toolshed, as there
are a number of fragmentation reporters there...
|
325.6 | /SELECT=SIZE=MIN=xxx useful, too | JON::MORONEY | %SYSTEM-S-BUGCHECK, internal consistency failure | Wed Oct 01 1986 21:42 | 7 |
|
$ DIR/SIZE/SELECT=SIZE=MIN=xxx [*...] will show all files larger than xxx.
Adding *.*;-4 will show only large files with 4 versions above them, too.
Adding *.*;-4,*.*;-5,*.*;-6,*.*;-7 should get (nearly) all of them.
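(A guess at rolling it all into one number -- /GRAND_TOTAL suppresses the
per-file lines, and the 100-block floor is only an example:)
$ DIR/SIZE/GRAND_TOTAL/SELECT=SIZE=MIN=100 [*...]*.*;-4,*.*;-5,*.*;-6,*.*;-7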
-Mike
|
325.7 | Use AI!!!! | BRDWLK::VEALEK | Stealth_net proponent | Thu Oct 02 1986 19:07 | 20 |
| Take a look at EXPUNGE, an AI tool that aids the VMS user in the
task of ridding their accounts of files no longer needed but which
are taking up valuable system resources -- i.e., disk blocks.
I tried it on my own directory structure and it "suggested" 6290
blocks of files that I might want to delete.
It's fun, it's a stab at AI, and it doesn't hurt anything on the
disk, it just suggests.....
Are you interested????
The notes file is on NOVA::EXPERT under the topic EXPUNGE. The
EXPUNGE system can be found on either:
YIPPIE""::AI$SYSTEM[OPS5.EXPUNGE]*.* OR
BACH""::WRKD$:[TYCAST.EXPUNGE]*.*
Maybe you can help the creator with more rules for his system!!!!
Ken Veale
|
325.8 | In the Toolshed | REGENT::MINOW | Martin Minow -- DECtalk Engineering | Fri Oct 03 1986 09:25 | 5 |
| Try CLEANOUT.COM, which purges a directory tree, then deletes
files that match specific criteria.
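(Not the real thing, but the general shape -- a tree-wide purge and then
deletes against whatever criteria you pick; the file types here are only
examples -- is roughly:)
    $ PURGE/LOG [directory...]
    $ DELETE/LOG [directory...]*.OBJ;*,[directory...]*.LIS;*,[directory...]*.MAP;*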
Martin.
|
325.9 | DIR, FIND n | TLE::AMARTIN | Alan H. Martin | Sun Oct 05 1986 17:49 | 21 |
| Does anyone have a hack which approximates the Tops-20 command:
@DIR filespec,
@@FIND (FILES WITH MORE THAN) n (GENERATIONS)
It lists all but the n *highest* generations of a file. Thus, the
default FIND value of 1 lists all the files which the equivalent of
a DCL PURGE command would delete. Combined with totals of the file
sizes, you can figure out exactly how much space is tied up in old
generations with just one command:
@DIR filespec,
@@FIND
@@NO FILE-LINES
@@SIZE
The best I could come up with would be a PURGE/CONFIRM[/LOG?] which took
its query value from a file filled with "N"'s, but I couldn't figure out
how to do that. And it still wouldn't tell you directly how much space was
tied up in such files.
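The closest rough cut I can see is to spell the relative versions out and
let DIRECTORY do the totalling -- in a command file, something like this
(the depth of 8 is a guess, not computed):
    $ spec = ""
    $ v = 0
    $ build:
    $   v = v - 1
    $   spec = spec + ",[*...]*.*;''v'"
    $   IF v .GT. -8 THEN GOTO build
    $ spec = spec - ","              ! drop the leading comma
    $ DIRECTORY/SIZE/GRAND_TOTAL 'spec'
It still isn't DCL doing the counting per file, though.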
/AHM/THX
|
325.10 | Oh Well | VAXUUM::DYER | The Weird Turn Pro | Tue Oct 07 1986 04:01 | 3 |
| It seemed to me that DIR file/EXCLUDE=(;0,;-1,;-2,...) should work, but
it didn't.
<_Jym_>
|
325.11 | Thanks anyhow | TLE::AMARTIN | Alan H. Martin | Thu Oct 09 1986 13:23 | 6 |
| Unless the "..." in "/EXCLUDE=(;0,;-1,;-2,...)" is literal DCL notation,
I don't really consider it a solution. DCL can count better
than I can, so I shouldn't have to supply it with that much information.
Thanks for trying, anyhow.
/AHM/THX
|
325.12 | Buy more disks | CASEE::COWAN | Ken Cowan | Sun Oct 12 1986 10:14 | 20 |
|
I've seen a tool that uses ANAL/DISK/QUOTA to produce a report for
each user on how much space they are taking up. The CLT cluster
triggers it when the free space gets below a certain threshold.
We found it remarkably successful because people use peer pressure
to get others to clean up the disk when it is getting full.
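The trigger itself can be as simple as something like this (the disk name,
threshold, and procedure name are all made up here):
    $ free = F$GETDVI("DISK$PROJECT", "FREEBLOCKS")
    $ IF free .LT. 100000 THEN SUBMIT DISK_REPORT.COM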
BTW, I also firmly believe in 'buy more disks'. Given the way
we pay for capital equipment, it is cheaper to buy more than to
have very expensive employees spend their time optimizing use
of the resources. The standard figures I've seen in DEC are
that an employee costs 3-4 times his salary. A $30,000 employee
costs $45 per hour [$30,000 * 3 / 50 weeks / 40 hours]. Figure
10 people spending .5 hour a week cleaning up files, and it
costs $11,250 per year [$45 * 10 people * 25 hours/year]. Does it
sound expensive? What if people get hassled daily about cleaning
up space because of a shortage? Then it comes to $56K per year.
KC
|
325.13 | Use DISKQUOTA for accounting, not enforcement | MDVAX3::COAR | A wretched hive of bugs and flamers. | Tue Nov 24 1987 13:24 | 6 |
| Disk quotas make a marvelous accounting tool, and do not NEED to
be used for enforcement. Give everybody a quota of a billion blocks,
and they'll probably never run out... but a DISKQ> SHOW [*,*] will
reveal where all the space has gone.
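The whole interactive sequence is roughly (disk name made up):
    $ RUN SYS$SYSTEM:DISKQUOTA
    DISKQ> USE DISK$PROJECT
    DISKQ> SHOW [*,*]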
#ken :-)}
|