
Conference 7.286::digital

Title: The Digital way of working
Moderator: QUARK::LIONEL
Created: Fri Feb 14 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 5321
Total number of notes: 139771

1144.0. "System Management Policies" by PACKER::STIMSON (Thomas) Tue Jul 17 1990 09:25

Some system managers believe that it is a waste of disk space to keep more than 
one copy, possibly 2, or at most 5 revisions of a file in a user account. This
may be true for general users for whom the last revision of a text file is
frequently the only useful one. The situation is much different for software
developers, however. The set comprising the latest revision of each source
module may not even work. Even if it does, there may be sets of (much) older
revisions that together provide different - but desired - functionality, or
which are needed to support an earlier release of the software.

Developers can avoid keeping multiple source code revisions by using CMS. 
However, not having been successful at getting the VAX SET stuff on my system,
I have had to make do with a homegrown pseudo-MAKE utility, which at least
allows me to keep track of what module revs do what, but which keeps all the
revs, not just the deltas.
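
(For anyone who does manage to get CMS installed, a minimal DCL sketch of the
usage being advocated here - the library, element name and remark strings are
invented for illustration:)

    $ CREATE/DIRECTORY [.CMSLIB]
    $ CMS CREATE LIBRARY [.CMSLIB]              ! one-time setup
    $ CMS SET LIBRARY [.CMSLIB]
    $ CMS CREATE ELEMENT PARSER.FOR "initial version"
    $ CMS RESERVE PARSER.FOR "rework the tokenizer"
    $ ! ... edit PARSER.FOR ...
    $ CMS REPLACE PARSER.FOR "reworked the tokenizer"
    $ PURGE PARSER.FOR      ! older generations now live in the library as deltas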

Anyway, I was appalled to discover recently that the system manager had
wiped out most of my source code by doing a PURGE/KEEP=2 [...] on my directory
- at a time when there were 450,000 free blocks on my user disk. (My source
code had occupied less than 5,000 blocks. I kept few revisions of anything
else.)

Other information:	
	- No method of time-based archiving is in use on my system.

	- Access to the deleted files was (S:RWE,O:RWED,,).

How many DECcies feel that it is proper for a VAX system manager to PURGE a
user's account without his or her advice or consent ? 

1144.1. "Must have good lines of communication!" by A1VAX::BARTH (Special K) Tue Jul 17 1990 09:35 (14 lines)
If the system mgr hasn't made a policy of doing it in the past, and s/he
hasn't informed the user community that it _is_ a policy, then it's
bad news.

Once again, it all boils down to communication.  

It's amazing how easily one can conclude (perhaps rightfully) that the
sys mgr is on a power trip (or incompetent or malicious or...) when s/he
doesn't keep those bothersome end-users informed.

I trust the author of .0 has "clarified" the situation for the system
manager?

K.
1144.2. by WMOIS::FULTI Tue Jul 17 1990 10:04 (18 lines)
I'm sorry, I do not agree that any sys. mgr. has the right to just delete
away files, no matter what the reason. If the issue is disk usage then they
can set up quotas for disk space and have the users justify why they need more.
But, to just take it upon themselves to decide which files are or are not
to be kept in my honest opinion is just their way of telling the users that
THEY are in charge and THEY are the GOD of the system.

Even with communication, I don't agree that they would be justified in doing it.
All that the purging of files is going to accomplish in the long run is to
cause the user to be more creative with file names.

i.e. file.dat;1 could be renamed file_dat.v1 and
     file.dat;2   "   "     "    file_dat.v2


this would ensure only 1 version of a file.

- George
1144.3. "customer satisfaction" by ODIXIE::CARNELL (DTN 385-2901 David Carnell @ALF) Tue Jul 17 1990 10:13 (9 lines)
    
    No manager would be so presumptuous as to enter the desk cabinet drawer of
    any employee and arbitrarily just throw out papers.  Why is ANY manager
    allowed to arbitrarily destroy electronic documents?  What if the
    source code for VMS had been destroyed in such a manner 10 years ago?
    Do system managers "own" their systems, able to take any action at
    will, or are they charged with being the guardian of a company asset,
    "supporting" their customers, the end users on that system?
    
1144.4. "protect yourself if you can" by CVG::THOMPSON (Aut vincere aut mori) Tue Jul 17 1990 10:29 (15 lines)
	I've generally been lucky enough to work only on systems with good
	system managers (especially those years I WAS the system manager).
	Still I've always kept my own backups. For years I backed up my
	account to tape regularly. Nowadays I back it up to the local disk
	on my clustered workstation. Incrementals nightly and the whole account
	over the weekend. If I suspected a system manager might be like the
	one in .0 I'd probably do a noontime incremental as well.
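
	(For what it's worth, a rough DCL sketch of that sort of do-it-yourself
	regimen - the disk and directory names are made up, and the incremental
	step relies on BACKUP's /RECORD and /SINCE=BACKUP qualifiers:)

	$ ! Weekend: full backup of the account to a save set on the local disk.
	$ BACKUP/RECORD USER$DISK:[SMITH...]*.*;* -
	        LOCAL$DISK:[SAVESETS]SMITH_FULL.BCK/SAVE_SET
	$ ! Nightly: only what changed since the last backup done with /RECORD.
	$ BACKUP/RECORD/SINCE=BACKUP USER$DISK:[SMITH...]*.*;* -
	        LOCAL$DISK:[SAVESETS]SMITH_INCR.BCK/SAVE_SET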

	System managers should never delete someone else's files without
	warning. They probably should compress people's notebooks, though.
	I've seen 10,000-block notebooks that no one knew about. But that's
	different, because information isn't lost. Deleting files, even old
	versions, does cause information loss.

				Alfred
1144.5. "in the field" by ATLACT::GIBSON_D Tue Jul 17 1990 12:04 (6 lines)
    re .0
    
    Well, you're not alone.  The system manager for this system does a
    basic purge, so there is only one version of the file.  Were we told?
    No.  I found out in time to avoid any serious effects and just renamed
    the versions I wanted to keep.  No waves, mon, no waves.
1144.6. "You guys are tough!" by RBW::WICKERT (MAA USIS Consultant) Tue Jul 17 1990 12:09 (26 lines)
    
    I'd hate to be a system manager for most of this conference's audience!
    You guys are tough!
    
    While I may not agree with what this system manager did (unless the
    disk was full and then everything changes!) I can understand his desire
    to properly manage the company's assets. With the current lack of
    capital to upgrade most systems it becomes necessary to police the
    current usage. 
    
    I have managed systems where the users didn't know about the PURGE
    command and we routinely had to purge disks because of it. We only did
    it when space ran low but we still did it. And we do it now on our OA
    systems. You guys will probably say educate the users and I agree, in
    principle, with that. However, with 2000 constantly changing users spread
    across 4 states it's a little tough. 
    
    It's common practice (or at least has been with every programmer I've
    worked with) to manually save "good" baselevels of modules. This is
    either done by placing them into a subdirectory, renaming them to
    .SAVE or using CMS. It's always easier to use hindsight in this type of
    thing but I'd still suggest using something similar to protect
    yourself, even from yourself. We all make mistakes and trying to
    control-y out of that purge command is a little futile...
    
    -Ray
1144.7. "Make the effort to implement what you really need" by SVBEV::VECRUMBA (Do the right thing!) Tue Jul 17 1990 12:12 (19 lines)
    re .0

    I have a couple of other users on my workstation. Also, my workstation
    is my only regular VMS account. What I normally do is to back up only the
    latest versions of files (with the consent of other users), and, if space
    gets low, I ask if it's O.K. to purge down to the last couple of versions.

    If you want to protect something, remove write and delete access from the
    file. And inform your system manager, who should not be using BYPASS
    privilege to purge files.
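
    (A small sketch of that, for what it's worth - the file name is just an
    example, and of course BYPASS privilege would still defeat it:)

        $ ! Take delete (and write) access away so a routine PURGE that
        $ ! respects protection will leave these files alone.
        $ SET FILE/PROTECTION=(SYSTEM:RE,OWNER:RWE,GROUP,WORLD) IMPORTANT_MODULE.FOR;*
        $ DIRECTORY/PROTECTION IMPORTANT_MODULE.FOR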

    The real answer is to have your system/cluster manager install VAXSET.
    Have your manager make a formal request to your systems group. It's not
    worth expending energy on what's done -- just move forward to make sure
    this problem won't recur.


    /Peters
1144.8. "has your account been pacified today?" by ISLNDS::HAMER Tue Jul 17 1990 14:30 (9 lines)
    Change the word "user" to "customer" and see if the perspective
    on the relationship between system manager and users changes. I
    believe a user **is** the system manager's customer.
    
    Managing a resource for Digital by destroying files used to do the
    company's business reminds me of a quotation from Tacitus: They
    make a wilderness and call it peace.

    John H.
1144.9. "Never PURGE someone else's files without at least /KEEP=2!!!" by COVERT::COVERT (John R. Covert) Tue Jul 17 1990 16:51 (5 lines)
Purging down to one version of all files can cause valuable data to be
lost in the case of editors which start to create the new version of the
file before they are completely finished processing the old version.

/john
1144.10. by BUNYIP::QUODLING (Da doo run run, da doo run run) Tue Jul 17 1990 17:28 (5 lines)
   Of course, if you are running a Unix (tm) variant operating system, this
   doesn't become a problem... :-)
   
   q
   
1144.12. "I'm only responsible for my own workstation & project now..." by LOWELL::WAYLAY::GORDON (and my imaginary friend Wally...) Tue Jul 17 1990 19:44 (31 lines)
	I used to manage a 20+ node MiVc with a fairly substantial disk farm.
Certain disks (the production build tree scratch space disk, for example)
were never backed up, and when they filled, I emptied them.  In theory,
everything on them really lived somewhere else (in CMS, for all the source
code.)

	Users had 20k blocks each on the "user" disk for private files
(mail, notebooks, etc.)  If you ran out of "personal" space, tough.
The "project" disks were group readable and we made it understood that 
there was implicit permission to browse them as needed for project purposes.

	Every Sunday morning, a little batch job ran that located, by
owner, all files with more than 5 versions on all the disks.  Each person
got a mail message with a list of the files they owned with more than 5
versions.  I got a list of all such files.
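
	(Not the actual job, obviously, but a rough sketch of how such a sweep
can be written in DCL - the disk name and the five-version threshold are
assumptions, and mailing the per-owner lists is left out:)

	$ threshold = 5
	$ previous = ""
	$ count = 0
	$ loop:
	$   file = F$SEARCH("USER$DISK:[000000...]*.*;*")
	$   IF file .EQS. "" THEN GOTO done
	$   name = F$EXTRACT(0, F$LOCATE(";", file), file)  ! spec without version
	$   IF name .EQS. previous
	$   THEN
	$       count = count + 1
	$   ELSE
	$       previous = name
	$       count = 1
	$   ENDIF
	$   IF count .EQ. threshold + 1 THEN WRITE SYS$OUTPUT "More than ''threshold' versions: ''name'"
	$   GOTO loop
	$ done:
	$ EXIT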

	I never had to purge anyone's user account.  I did, on occasion,
have to selectively delete from project accounts when we were strapped
for space.  In all cases, I knew I'd have to be the one answering for it
if I slipped.

	Were I in the basenoter's shoes, I would complain, in writing, to
the system manager, detailing that the lack of required software on the
system (CMS) was the reason for the extra versions, and I would copy the
system manager's manager and my own manager.  It's never a good idea to
piss off a system manager, but some people shouldn't be one in the first
place.

				Good Luck.

					--Doug
1144.13. "Hit the road jack, don't you come back no more" by EAGLE1::BRUNNER (Moonbase Alpha) Tue Jul 17 1990 20:00 (14 lines)
As a system manager, you have no business deleting, purging, or reading my
files. If space is a problem, set up quotas or educate the user. I will not
tolerate any environment where someone else touches my private files. A
system manager as described in .0 would be canned *very* fast where I work
without a tear of remorse.


*** Rathole ***

>                                                          What if the
>   source code for VMS had been destroyed in such a manner 10 years ago? 
    
Oh, how I wish someone had. Then today we'd have millions of lines of C
instead of MACRO after they rewrote it...   :-) :-)
1144.14. "Strengstens verboten!!" by COUNT0::WELSH (Tom Welsh, UK ITACT CASE Consultant) Wed Jul 18 1990 05:58 (60 lines)
	The system manager's action was inexcusable. It should be
	grounds for both

	(1) Preliminary disciplinary action against the system manager
	    on the grounds that he has wilfully destroyed corporate
	    assets and harmed another employee's work,

	    AND

	(2) Remedial discussions with the manager(s) in charge of that
	    system manager, to ensure that in future they manage in the
	    interests of their customers - the users.

	A system manager is NOT a tin god, nor is he the "owner" of the
	systems he runs. He is there to facilitate the work of the users.
	This MUST be made ABSOLUTELY CLEAR to everyone, without any
	reservations or qualifications.

	re .6 (Ray Wickert):

>>>    While I may not agree with what this system manager did (unless the
>>>    disk was full and then everything changes!) I can understand his desire
>>>    to properly manage the company's assets. With the current lack of
>>>    capital to upgrade most systems it becomes necessary to police the
>>>    current usage.

	Fair enough, but this must be done by consent only. As someone
	else observed, it's fine to set quotas and insist they be observed.
	However, it is not acceptable, for instance, to "trim" the users'
	files until they fit the assigned quotas. If a user has 86,000
	blocks and you give him a quota of 50,000, there is no alternative
	but to discuss the issue and cooperate to find acceptable ways of
	reducing the space used. 

>>>    I have managed systems where the users didn't know about the PURGE
>>>    command and we routinely had to purge disks because of it. We only did
>>>    it when space ran low but we still did it. And we do it now on our OA
>>>    systems. You guys will probably say educate the users and I agree, in
>>>    principle, with that. However, with 2000 constantly changing users spread
>>>    across 4 states it's a little tough.

	I sympathize with your situation. But I still have to insist that
	NOTHING justifies deleting a single file belonging to a user without
	his or her permission. Maybe one person should not be in charge
	of 2000 users across 4 states? Maybe Digital employees should have
	an easily-attained minimum level of computer literacy (say, like
	an intelligent 12 year old) so that they do know about the PURGE
	command?

	Apart from anything else, automatic PURGE/KEEP=1 commands reduce
	the VMS file system to the level of RSTS ten years ago (I wouldn't
	be surprised if RSTS has multiple file versions by now) or Unix(tm).

	As several people have remarked, anyone developing software, even
	in a group of one, would be well advised to start using CMS, if only
	for the space saving benefits. That would allow you to keep all
	versions back to the beginning (CMS saves only the changes).

	/Tom

1144.15. by ESCROW::KILGORE (Wild Bill) Wed Jul 18 1990 09:34 (33 lines)
    
    I don't know, people, I find it hard to turn this into a
    black-and-white situation. Perhaps it's my DECsystem20 upbringing, but
    I've always had a certain disdain for multiple generations. And VMS
    contributes to this viewpoint:
    
    o  The SET DIR /VERSION_LIMIT allows for situations where you can lose
       potentially valuable files without warning. If people were seriously
       counting on multiple generations, you would think they'd have added
       a message in this case,
    
    		$ SET DIR disk:[dir] /VERSION_LIMIT=3
    		$ COPY A.A []
    		disk:[dir]A.A;6 copied to disk:[dir]A.A;7 (2 blocks)
                disk:[dir]A.A;4 deleted due to generation limit
    
    o  The default situation (32,767 gens) can result in an
       unconscionable amount of wasted disk space. Again, if multiple
       generations was a serious feature, a simple message could keep
       people aware of the situation,
    
    		$ COPY A.A []
    		disk:[dir]A.A;6 copied to disk:[dir]A.A;7 (2 blocks)
                3 generations exist for disk:[dir]A.A
    
    So, I can understand the mindset that would allow a system manager to
    nonchalantly purge files, and I would be hard pressed to rage violently
    against this type of behavior -- at least the first time.
    
    And I maintain that if you're counting on multiple generations to keep
    a history of development work, or to support different baselevels, then
    you pretty much get what you paid for. Insist on CMS, or rename files.
    
1144.16. "ex" by PACKER::JOHN Wed Jul 18 1990 09:42 (13 lines)
    Maybe you all have it wrong!!!! Maybe .0 is not telling the whole
    story!!!
    
    Maybe such a person had been asked repeatedly to cut down his disk
    space!!  Maybe such a person had 4-7 copies of every file on his disk.
    Maybe he had, say, 200,000-400,000 blocks of disk space on the cluster
    and never reduced it as requested!
    
    Before KILLING the system manager you should all get the whole story.
    
    JUST maybe!!
    
    Ted
1144.17. "Still no excuse..." by KL10::WADDINGTON (Wadda ya mean, WE?) Wed Jul 18 1990 09:56 (19 lines)
re .16

>    Maybe you all have it wrong!!!! Maybe .0 is not telling the whole
>    story!!!
    
>    Maybe such a person had been asked repeatedly to cut down his disk
>    space!!  Maybe such a person had 4-7 copies of every file on his disk.
>    Maybe he had, say, 200,000-400,000 blocks of disk space on the cluster
>    and never reduced it as requested!
    
>    Before KILLING the system manager you should all get the whole story.
    
>    JUST maybe!!
  

This is still no excuse for deleting files.  It's a potential discipline problem
that should be discussed with the user and the user's management.  Deleting a
couple of hundred thousand blocks of data could potentially cost this company
millions of dollars.
1144.18. "What is the REAL story here?" by SCAACT::AINSLEY (Less than 150 kts. is TOO slow) Wed Jul 18 1990 10:21 (21 lines)
As .16 said, maybe we don't know the entire story.

Let me tell you a story with the names/locations omitted to protect the guilty.

I know of an internal site that was using CMS and was constantly running out
of disk space, bringing an entire development effort to a halt at least once a
day.

The system manager was prohibited from setting up either disk quotas or version
limits on directories.

The solution?  When the disks filled up, a wildcard purge was done.

What was causing the disk to fill up?  A combination of huge development
executables (>10K blocks each) and insufficient resources.  The system was
supporting twice as many developers as it was configured for.

More hardware was ordered, but with budget constraints and long lead times,
this situation went on for months before more disk space was acquired.

Bob
1144.19. by ROYALT::GONDA (DECelite: Pursuit of Knowledge, Wisdom, and Happiness.) Wed Jul 18 1990 10:31 (6 lines)
>   Of course, if you are running a Unix (tm) variant operating system, this
>   doesn't become a problem... :-)
    
    ...and as long as you are not using GNUemacs as your editor!  Otherwise
    GNUemacs artificially keeps your previous version around too, in a
    filename~ file.
1144.20. "why not use alternate storage media?" by IMBIBE::CHURCHE (Nothing endures but change) Wed Jul 18 1990 10:34 (8 lines)
    
    I have the misfortune to be managing an internal system, and I
    often run into this disk space problem (because we only have one!).
    I just backup to tape whatever files I plan to delete.  So, if I
    delete something valuable, we can always get it back online.
    Saves me a lot of headaches.
    
    jc
1144.21. by BUNYIP::QUODLING (Da doo run run, da doo run run) Wed Jul 18 1990 10:55 (21 lines)
re            <<< Note 1144.13 by EAGLE1::BRUNNER "Moonbase Alpha" >>>
              -< Hit the road jack, don't you come back no more >-

>>                                                          What if the
>>   source code for VMS had been destroyed in such a manner 10 years ago? 
>    
>Oh how, I wished someone had. Then today we'd have millions of lines of C
>instead of MACRO after they rewrote it...   :-) :-)

Double Rathole...
   C? Gag me with a compiler... You are wrong on two counts anyway. More of
   VMS is written in Bliss than in Macro, and I doubt if you could convince
   many of the VMS Devos to switch to C...
   
   I just dug back through some old mail... And back in '86, CW Hobbs
   determined that VMS was then 1.4 million lines of MACRO, 2.8 million lines
   of BLISS, and about 0.75 million lines of others... It would have grown
   significantly since then...
   
   q
   
1144.22. by HANNAH::MESSENGER (Bob Messenger) Wed Jul 18 1990 11:53 (44 lines)
Re: .0

>Developers can avoid keeping multiple source code revisions by using CMS. 
>However, not having been successful at getting the VAX SET stuff on my system,
>I have had to make do with a homegrown psuedo-MAKE utility, which at least
>allows me to keep track of what module revs do what, but which keeps all the
>revs, not just the deltas.

If you don't use CMS then at least create a subdirectory for each version
of your program that you want to keep.  It's just plain reckless to rely
on multiple versions of a file as a replacement for CMS.  I can see keeping
multiple versions of a file for a few days while you're working on it, but
as soon as it's working to the point where you'd ordinarily check it in to
CMS, clone the working version into a subdirectory and do a PURGE.  From time
to time you can free up space by deleting some of the intermediate cloned
versions.
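
(A minimal DCL sketch of that habit - the directory and file names here are
invented:)

    $ CREATE/DIRECTORY [.BASELEVEL_03]       ! one subdirectory per saved baselevel
    $ COPY *.FOR,*.MAR,*.OPT [.BASELEVEL_03]
    $ PURGE/KEEP=2/LOG                       ! then trim the working directory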

Now if the system manager did a purge during the "few days" that you had
multiple versions of your files, then you have good reason to be upset.

>Anyway, I was appalled to discover recently that the system manager had
>wiped out most of my source code by doing a PURGE/KEEP=2 [...] on my directory
>- at a time when there were 450,000 free blocks on my user disk. (My source
>code had occupied less than 5,000 blocks. I kept few revisions of anything
>else.)

On my system I've often seen notices such as "there will be a backup of
DISK$FOO tonight, followed by a PURGE/KEEP=2".  I think this is reasonable,
because (a) they back up the files before purging, giving me a chance to
recover if something valuable got purged, and (b) they gave notice that
the disk would be purged.  Backups are done on a regular schedule, so everyone
knows what to expect.

>	- No method of time-based archiving is in use on my system.

In that case it is YOUR responsibility (or your project leader's) to ensure
that important project data is backed up.

I think you and the other people on your project (if any) should sit down with
your system manager and agree on a set of ground rules, like who is responsible
for backups and under what circumstances can files be purged.  Otherwise you'll
probably have many future incidents like this one.

				-- Bob
1144.23. by LESLIE::LESLIE (For Sale to highest bidder) Wed Jul 18 1990 17:30 (18 lines)
    
    
            <<< Note 1144.6 by RBW::WICKERT "MAA USIS Consultant" >>>
>    While I may not agree with what this system manager did (unless the
>    disk was full and then everything changes!) I can understand his desire
>    to properly manage the company's assets. With the current lack of
>    capital to upgrade most systems it becomes necessary to police the
>    current usage. 
    
    No. Definitively wrong. It is up to the users to police their usage.
    At most I'd expect the System Manager to act as an information conduit
    with a message such as "the disks are getting full, please type HELP
    PURGE if you have no idea what this command means". I have a job to do,
    so do you. Your job as a system manager is to manage systems. Mine is
    to manage my account.
    
          
    /andy/
1144.24. "Information missed by the originator of the note" by CRONIC::ANSTINE Wed Jul 18 1990 17:40 (32 lines)
    In response to .0 I must refer you to .16 - "Maybe .0 is not telling
    the whole story".
    The customer failed to mention that he was sent repeated mail messages
    to clean up his account because the device he was on only had 6% free
    space.
    The customer failed to mention that his account was being backed up to
    tape and that he again received mail asking him to purge his account.
    The customer failed to mention that he has never requested, through his
    manager or software services, that CMS be installed.
    I have to agree that the whole story was not told, with certain pieces
    still being left out.  I agree that a "System Manager" does not have
    the right to do a wholesale purge of the customer disk, unless
    requested to do so, in writing, by the system owner.  Had the customer
    consulted the policy and procedures in this facility he would have
    realized that software services only monitors the system disk and, since
    we do not allow customer accounts on it, has full authority over it.
    The customer would also have found that only the cost center manager,
    or their designated person, has the authority for establishing disk
    quotas on customer disk.  Since his tenure in this facility, he should
    also have known that operations coverage is provided 24 hrs./day - 7
    days/week; which accounts for 6 incrementals and one full backup of his
    account a week.
    Normally I would not take the time to respond to this type of 
    misinformation; however, this customer has insulted the integrity of
    my group.  Finally, had the customer taken the time to look into this
    matter, he might have realized that it was not the "System Manager"
    that did the purging of the account.
    
    
    tom anstine
    SCMT Software Manager
    
1144.25. by LESLIE::LESLIE (Looking for a job in New England) Wed Jul 18 1990 18:11 (5 lines)
    Tom
    	thanks for taking the time and trouble to set the record straight.
    
    
    /andy/
1144.26. "Don't Shoot - I'm Only the Sys Mugger" by MORO::THORNBURG_DO (DTN 535-4569 Irvine CA) Wed Jul 18 1990 19:33 (29 lines)
    All this flaming, whether pro or con lynching system managers, is
    egregious in the light of a single fact:
    
    THERE AIN'T NO STINKING SYSTEM MANAGEMENT POLICY IN THIS COMPANY.
    
    Yes, there are mutterings in the Orangebook, and local versions of how
    it ought to be done. But there is not a single, uniform, corporate-wide
    concise set of directions, instructions, guidelines, checklists, or
    what-have-yous on how to actually run one of these silly machines we
    sell. 
    
    I just spent a fruitless 2 hours perusing 5 notes files looking for
    just such a set of guidelines (in this case, for a "Data Center
    Operations Audit" that a customer is just DYING to give us MONEY to
    perform for them); I found multiple requests for anything of this sort,
    but no replies. 
    
    I am a system manager in an ACT - I have it easy. No production,
    PERIOD. You want a production machine, call DIS - they have the staff
    and the guidelines to run 'em. My systems' charter is to show our
    products and 3rd party products to customers. Once a given demo is
    over, NUKE 'EM BUCKO! Because the next demo is rolling in the door as
    the last one rolls out. Before this, I ran systems for DEC's Computer
    Service Business (did you know we used to sell timesharing services?)
    The total lack of guidance on "How to Run a Computer" cost us a ton of
    $$$ and ultimately blew us out of that business. And every one of the
    30-odd CSB centers developed their own distinct, unique, non-standard
    cookbook on "how it should be done". And all 30 got thrown away 5 years
    ago. What a stinking waste!
1144.27. by JUMBLY::DAY (No Good Deed Goes Unpunished) Thu Jul 19 1990 09:08 (33 lines)
    Systems Management is at least as much an art as it is a science.
    How you play things depends very much on the type and scale of
    system/systems you are dealing with.
    
    There are, however, some invariant rules :
    
    . The primary function of a System Manager is to provide the best
      service possible to ALL users. This may conflict with the
      needs of SOME users ..
    
    . No user has as his/her/its primary concern the most effective
      usage of resources
    
    . Some users are going to be computer illiterate - and may well
      take pride in being so.
    
    . A system manager should be able to take such steps as he thinks
      necessary when he thinks them necessary. IFF such steps are
      known, communicated and agreed.
    
    You can have a perfectly laid back approach if you have the necessary
    resources. You can also be Attila Mk II if you haven't - personally
    I went for the Attila approach in the distant past when I did the
    job. I never received any complaints though - because the ground
    rules hit every user straight between the eyes every time they
    logged in. 
    
    It also pays the SM to have a shrewd idea of the work being carried
    out by the users. Not always possible admittedly - but it helps
    to avoid disasters like those noted earlier.
    
    Mike Day
    
1144.28. by ALOSWS::KOZAKIEWICZ (Shoes for industry) Thu Jul 19 1990 10:50 (8 lines)
    What amazes me is how incredibly easy it is to provoke the lynch mob
    reaction in this conference.  It's gotten to the point where I
    completely blow off notes like .0 because a) in all probability, nothing
    resembling an objective truth is being told and b) the ensuing uproar
    is oh-so-predictable.
    
    Al
    
1144.29. "Oh boy another policy." by SMEGOL::COHEN Thu Jul 19 1990 11:24 (11 lines)
There is not really a standard way to manage a system.  On some systems, there
are not any disk quota or resource limits (of course, the "users" are expected
to act in a MATURE manner with this freedom).  On other systems, file version
limits of 1 or 2 versions are perfectly acceptable.  More than half the job
of a system manager, imho, is communicating these conditions and setting the
expectations correctly for the users on his system.   A system manager shouldn't
have the right to delete someone's work without permission, but a user doesn't
have the right to monopolize a group resource.  There are no hard and fast rules
for this.
			Bob  
1144.30. "peer pressure" by MARVIN::COCKBURN (Craig Cockburn) Thu Jul 19 1990 11:30 (11 lines)
FWIW, on Marvin we have reasonable disk quotas

However, we also have a batch job which goes off every hour or so
to let people on each disk know when their disk is over 90% full. 
This batch job generates an opcom message which flashes on the terminal
of everyone on that disk. Believe me, these frequent messages are
sufficiently irritating that not only do I keep my quota down, but
if I ever find out who on my disk keeps setting them off, then I'll
ask them to tidy up too, just so that I can work in peace!
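
(For the curious, a rough DCL sketch of how such a watcher might look - the
procedure name DISK_WATCH.COM, the disk name and the 90% threshold are all
illustrative, and REPLY/ALL needs OPER privilege:)

    $ disk = "USER$DISK:"
    $ free = F$GETDVI(disk, "FREEBLOCKS")
    $ total = F$GETDVI(disk, "MAXBLOCK")
    $ used_pct = 100 - ((free * 100) / total)
    $ IF used_pct .GE. 90 THEN REPLY/ALL/BELL "''disk' is ''used_pct'% full - please clean up"
    $ SUBMIT/AFTER="+1:00:00" DISK_WATCH.COM   ! run again in an hour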

	Craig.
1144.31. "consistency to this conference" by SDSVAX::SWEENEY (Patrick Sweeney in New York) Thu Jul 19 1990 13:33 (16 lines)
    re: .28
    
    The DIGITAL conference has a long and honorable history as a place
    where those abused can find supporters and a ready chorus of
    disapproval for the abusers.   Consider it a "battered employee support
    group".  You don't want to have the kangaroo court (no offense
    inteneded to the authors of replies to this note) jumping on you, do
    you?
    
    "banning NOTES"
    ENET addresses on business cards
    company cars
    the open door policy
    salary freeze, thaw, freeze, thaw...
    
    the list goes on forever
1144.32. "More like the mob that attacked the Bastille... or the Boston Tea Party!" by COUNT0::WELSH (Tom Welsh, UK ITACT CASE Consultant) Thu Jul 19 1990 14:54 (53 lines)
	re .28:

	One reason the lynch mob heads for the system manager's throat
	is that a lot of them may have suffered at his hands. (Not Tom
	himself, you understand, but the local system manager).

	In my group, we have the top technical experts in the company
	for the UK (about 10% of the business worldwide), in the fields
	of TP, networks, ULTRIX and CASE.. plus some others.

	From casual conversation with these fairly mature, experienced,
	laid-back consultants, I have heard the following:

	(1) One database consultant (internals level in VMS and Rdb)
	    ran out of space in his ALL-IN-1 account. He tried everything,
	    then rang the system manager to ask for help. The system
	    manager found a way to free up some space - he deleted
	    my colleague's NOTES notebook!

	(2) Another consultant, one of the leading data dictionary
	    experts in the field, was doing some work on the tools
	    cluster, field testing a new version of Rdb with CDD/Plus.
	    Suddenly, of the ten or so versions of some of her database
	    files, only the latest was left. Yes, an undeclared PURGE,
	    and it left her starting all over again - minus several days
	    which she couldn't afford (and which the company could have
	    sold at about $1000 a day).

	(3) One of my colleagues discovered the other day (the hard way as
	    usual) that at least some of the accounts on our local system
	    were set to expire after one year. After getting his account
	    back, he asked what had happened - the response was that this
	    practice was designed to ensure that no unused accounts were
	    left on the system if people moved or quit!

	    Meanwhile all his files had been deleted, and it proved too
	    difficult after several tries to get some of them back. So
	    he lost them - permanently.

	This is just the most recent batch of episodes. Basically, some
	of us feel about our friendly system managers the way a zebra
	feels about a lion - natural enemies. It's not surprising, because
	they're not goaled on keeping us happy - just their own managers.

	So please don't be surprised about the reaction. And I still
	insist - it's not OK to delete someone's files without their
	permission. Rather than do that, you get the user and his
	manager into a meeting and get the user's commitment
	to take action by freeing up the required space. It's all part
	of respecting other people's humanity, not treating them like
	objects.

	/Tom
1144.33. "Seems simple to me." by CONFG5::BERMAN Thu Jul 19 1990 23:15 (15 lines)
    Without going into details of proper System Management I hope that .0
    and others have learned a valuable lesson from this.  Suppose that instead
    of a System Manager deleting most of your work it had been destroyed by
    a hardware malfunction?  Or some disgruntled, malicious employee who
    got redeployed one too many times?       
    
    I think it is irresponsible and unprofessional to not archive your
    work.  If you lose one day's work that is unfortunate, but forgivable.
    If you lose weeks of work it is *your* performance review and *your*
    reputation that is going to suffer.  If you are not satisfied with the
    archiving being done by the system management folks, then do it
    yourself.  There are many ways to lose data, and nasty system managers
    only account for one way.
    
    /joel
1144.34. "The fault is not in our users but our software" by WORDY::JONG (Steve Jong/T and N Pubs) Fri Jul 20 1990 20:20 (12 lines)
    Several people have suggested ways to live with finite disk space,
    useful suggestions all.  (Though to the person who suggested a 20,000
    block limit on accounts, may I send you a copy of my manual, which in
    PostScript form is 20,407 blocks?  It's PostScript's fault; the
    document has bit-mapped graphics, but it's less than 200 pages long.)
    
    But some of the problem lies with our software.  Unlimited version
    numbers is unreasonable.  Notebook and mail files that can grow without
    limit are unreasonable.  I know people within the company who are
    casual users, and they are not aware of things like these.  It's the
    old user syndrome again: if users "don't understand" the software, the
    fault is not in the users, but in the software.
1144.35. by PSW::WINALSKI (There's no hesion like COHESION) Sat Jul 21 1990 00:35 (5 lines)
RE: .26

The LAST thing this company needs is more policies.  

--PSW
1144.36. "Methinks it was my note referenced..." by LOWELL::WAYLAY::GORDON (and my imaginary friend Wally...) Mon Jul 23 1990 19:01 (29 lines)
Re: .34 

>   ...                    (Though to the person who suggested a 20,000
>    block limit on accounts, may I send you a copy of my manual, which in
>    PostScript form is 20,407 blocks?  It's PostScript's fault; the
>    document has bit-mapped graphics, but it's less than 200 pages long.)

	I think you meant my note, and you probably misunderstood.  On our
cluster, you would get 20k blocks for your *personal* stuff - mail, notebook,
personal files, etc.  The 20k limit for personal space was inflexible.

	You would also have one or more project directories with a nominal
quota of 100k blocks.  Quota on the project directories was to ensure cleanup
at least once in a while and for accounting purposes.  But realize, even at
1.2 million blocks per RA82 (and certainly no capital to buy RA90/RA92s) it
doesn't take many users at 100K each, to fill the drive. And your manual is
much more the exception than the rule.

	In addition, users with workstations (twenty or so out of the
40 people in the CC) had disks on their local nodes. (1 RD54, 2 RD53's each.)
The catch there was that they were responsible for backing up anything on
the local disks.  Not many people bothered.

	I've met people with 5k disk quotas for *everything* and no amount
of wailing and gnashing of teeth seems to be able to get them any more.
"Reasonable" looks a lot different depending on where you sit.

						--Doug
    
1144.37. "Why is disk space *always* a problem?" by SLIPUP::DMCLURE (The Harvard Hacker) Tue Jul 24 1990 11:04 (17 lines)
	How much more of this sort of pain and agony must we all endure
    before we simply come to the conclusion that lack of available disk
    space is perhaps one of *the* biggest obstacles to worker productivity
    in this company?

	Why not place a higher priority on providing adequate disk space
    to employees?  Why must we constantly skimp on disk space while we
    splurge on other sorts of hardware (including massive tape libraries
    which are necessary to backup all of the things that don't fit on
    disk in the first place)?

	Trying to get anything useful accomplished on the rationed disk
    space that most of us are allotted around here is a little like issuing
    a compact car to a construction worker who needs to haul loads of lumber.
    Sure it's possible, but is it practical?

				    -davo
1144.38. by PSW::WINALSKI (Careful with that VAX, Eugene) Sat Jul 28 1990 20:06 (10 lines)
RE: .37

It is a fundamental law of system management that disk space usage grows to
occupy 110% of the space available.  No matter how much room you make available
to people, they will find a way to use all of it.  "Get more disk space" is not
usually the answer, and it is never a cure.  The key is to make sure that there
is enough disk space for reasonable and legitimate use, and then to see to it
that users use the available space reasonably and for legitimate purposes.

--PSW
1144.39. "It's the system manager's job to do backups - without fail" by COUNT0::WELSH (Tom Welsh, freelance CASE Consultant) Wed Aug 01 1990 06:39 (41 lines)
	re .33:

>>>    Suppose that instead
>>>    of a System Manager deleting most of your work it had been destroyed by
>>>    a hardware malfunction?  Or some disgruntled, malicious employee who
>>>    got redeployed one too many times? 

	It is one of the responsibilities of a system manager to ensure
	that no user ever loses files due to a hardware malfunction or
	malicious damage. A competent system manager has a plan to provide
	complete backup coverage for all files, going back at least one
	year. (There are exceptions - this task is much more onerous and
	costly than is generally appreciated, and some files or even whole
	disks may be left un-backed-up by arrangement with the users concerned).

>>>    I think it is irresponsible and unprofessional to not archive your
>>>    work.

	This is true if you are working on a standalone system for which you
	are responsible. For instance, I backup COUNT0's two RA70s onto four
	TK70s. Two generations for each of system and user disk, going back
	two weeks. Anything before that - goodbye! I appreciate the risks,
	but I own them and am ready to live with them.

	If, on the other hand, you work on a system which someone else is
	authorized and paid to manage for you (and you are paying the
	invisible price of being an unprivileged user who can't install
	software, set up accounts, change SYSGEN or UAF parameters, etc) -
	then it's irresponsible and unprofessional OF THE SYSTEM MANAGER
	to not archive your work.

>>>	If you are not satisfied with the
>>>     archiving being done by the system management folks, then do it
>>>     yourself.

	No, no, no. If someone else is doing a poor job, the answer is NOT
	to do it yourself. The answer is to have them do a better job.

	This assumes that the job they are not doing well is one that is
	part of their agreed responsibilities. As stated above, I believe
	in this case it is.
1144.40. "How many "casual users" drive cars without knowing how?" by COUNT0::WELSH (Tom Welsh, freelance CASE Consultant) Wed Aug 01 1990 06:57 (50 lines)
	re .34:

>>>    But some of the problem lies with our software.  Unlimited version
>>>    numbers is unreasonable.  Notebook and mail files that can grow without
>>>    limit are unreasonable.  I know people within the company who are
>>>    casual users, and they are not aware of things like these.  It's the
>>>    old user syndrome again: if users "don't understand" the software, the
>>>    fault is not in the users, but in the software.

	First up, version numbers are not "unlimited". 32767 is a large
	number, but it can be reached.

	Secondly, would you prefer to have hard limits on the sizes of
	"notebooks and mail files"? How would that help? The user wouldn't
	be stopped from working because he had run out of quota or disk
	space, but because he had hit the limit for his mail file. Seems
	like a distinction without a difference, except that it would be
	MUCH harder to manage, and give the user a lot less flexibility.
	Oh, and it would mean a whole new barrage of commands and features
	to manage this space-limiting capability.

	In general, I do agree that software should be user-friendly, and
	as far as possible intuitive. However, in the real world, you never
	master any new skill or tool without significant learning, which takes
	time and effort.

	In Digital, I think it's disgraceful for anyone at all to say they're
	"a casual user" and they "don't understand". At my local garage, for
	instance, there are very few employees who "don't understand" cars.
	Not many employees of the library "don't know much" about books.
	This company earns its revenue by selling computers and software
	(and some services round the edge), and that money pays all our
	salaries. We'd better know how to use them and what they can and
	can't do.

	(Please note, I'm not asking for everyone to be a UNIX guru or to
	have done a VMS device driver course. Just enough to be able to
	use the stuff without coming to grief. Like the receptionist at
	the garage can DRIVE a car.)

	Lastly, even if the "casual users" don't understand that disk space
	is limited, they have our highly-paid and heavily funded IS
	department to help them with all that, don't they? In their usual
	sympathetic, friendly way, these people always make sure that
	each user understands how to use the features of the system that
	they need. After all, they understand that these lowly "users"
	are really the people who do the company's business and satisfy
	our customers, so that we can all get paid.

	/Tom
1144.41. "Digital's information explosion has reached our desks" by COUNT0::WELSH (Tom Welsh, freelance CASE Consultant) Wed Aug 01 1990 07:50 (95 lines)
	re .34:

>>>	may I send you a copy of my manual, which in
>>>     PostScript form is 20,407 blocks?  It's PostScript's fault; the
>>>     document has bit-mapped graphics, but it's less than 200 pages long.)

	Sure, as long as you send it to my ALL-IN-1 account. I am
	informed by a usually reliable source that mail files are stored by
	ALL-IN-1 in a common area and are not charged to the user's disk
	space quota. This is so that messages can be shared by multiple people.
	(Of course, we know that Notes is a better way to do this, and most
	of those messages being shared by lots of users are agendas for
	meetings that happened five years ago).

	I have seriously considered mailing any large files I don't have
	room for to myself care of ALL-IN-1.

	It seems a bit hard to say "it's PostScript's fault". Yet again, you
	seem to be indulging in wishful thinking and wanting to get something
	for nothing. You want to use less space, use plain text and leave out
	the graphics. It'll take well under 1000 blocks. Use RUNOFF and line
	graphics if that's adequate. But if you REALLY NEED bitmapped graphics,
	there is a price and you must pay it. Is it worth it?

	However, I do think you've touched on a serious problem which we
	aren't handling well - mainly because we haven't even recognized
	it yet. I'll expand on this further on.

	re .36:

>>>	I've met people with 5k disk quotas for *everything* and no amount
>>>	of wailing and gnashing of teeth seems to be able to get them any more.
>>>	"Reasonable" looks a lot different depending on where you sit.

	Sometimes this happens because it wasn't the user, but the user's
	management, who negotiated the space required. Can you imagine a
	district manager sitting in his office, with his pile of mail
	all neatly printed out by his secretary, saying "Oh yes, a
	specialist needs 5,000 blocks and a consultant needs 10,000 blocks".
	Then IS can say "well, that's what your function said you needed".

	The problem that I diagnose here is that the manager doesn't really
	understand what his people do and what resources they reasonably
	need to (a) meet requirements (b) exceed requirements (which is
	what they'd love to do).

	re .37:

>>>	How much more of this sort of pain and agony must we all endure
>>>     before we simply come to the conclusion that lack of available disk
>>>     space is perhaps one of *the* biggest obstacles to worker productivity
>>>     in this company?

	As I said before, I don't think it's simply the lack of space.
	It's the way it's distributed. It ought to be distributed
	according to real need. But it's done mechanically, often by
	people who don't understand the work that's being done and its
	requirements.

	Going back to Steve's PostScript file - I have the same problem,
	and it's been getting worse for a year or two. Even with two RA70s,
	the system disk is full (95%) with products, and the user disk is
	full with pagefile, bookreader, products like DOCUMENT and TRELLIS
	that can be offloaded from the system disk, mail, AVN files, and
	(last but not least) lots of big PS files. Now, as Paul observes,
	with some of this stuff you do use all the available space. The
	trouble is, when you hear of a good manual, presentation or
	report, you tend to FTSV it across to look it over, and once you
	have it you tend to keep it. In just the same way, once you
	print it out (to look it over) you tend to keep the paper (to
	save the time and paper waste of printing it out again when you
	need it). I've run short of disk space and cupboard space.

	What I'd love (and what would make me loads more efficient)
	would be a JIT or "demand paging" method of getting these
	documents. I'd need something like AVN, a file with a set of
	pointers to all the documents, and a means of rapidly getting them
	across the net and printing (or otherwise processing) them. But it
	has to be reliable - I might need a document at one day's notice,
	or overnight.

	The CDA technology, with LiveLinks, promises a lot of this. But
	I haven't noticed a lot of people getting it to work. One major
	problem (non-technical) is the continuous change in Digital. You
	reach out to get that good file from node CASE::, and you find
	it's not there. Weeks later, you find that guy moved to a different
	job and he's now on node BASKET:: with a different public directory.
	If you can't rely on the network, you'll be tempted to get and
	hoard the files just to be sure you have them when you need them.

	Another issue with PS files is cataloguing them and knowing just
	what's in them. We badly need a fast, easy previewer - better still,
	an overviewer.

	/Tom
1144.42. "Test it and see..." by SLIPUP::DMCLURE Wed Aug 01 1990 11:39 (15 lines)
re: .41,

>	Sure, as long as you send it to my ALL-IN-1 account. I am
>	informed by a usually reliable source that mail files are stored by
>	ALL-IN-1 in a common area and are not charged to the user's disk
>	space quota. This is so that messages can be shared by multiple people.

	I would be interested to know whether or not this is true.  If it
    is true, then that is certainly a good incentive to obtain an ALL-IN-1
    account.  ULTRIX mail accounts function this way as well, but VAXmail
    accounts definitely do not (to test this, I just mailed myself a file
    from another system and watched my disk quota shrink in size on the
    receiving end *before* I even took a look at the mail message).

				    -davo
1144.43. "Yes, but the quotas get you." by NEWVAX::MZARUDZKI (I am my own VAX) Wed Aug 01 1990 16:06 (7 lines)
     Yes, ALL-IN-1 mail files can be stored in a common area.
    
     However, some ALL-IN-1 systems limit the number of memos you can
    have---it is called quota. :^(
    
    Nice try,
    Mike Z.
1144.44. "I agree, let's not do other's jobs for them." by CONFG5::BERMAN Wed Aug 01 1990 22:13 (37 lines)
    re .39
    
    What you say is true, but not totally helpful.  Competent backup,
    especially with today's backup products (tape drives, software) is slow,
    expensive and intrusive.  Secondly there is little protection against
    tampering.  Many backup tapes are re-used.  Few are archived off-site. 
    Even less often are tapes read to ensure the restore will work.  I
    wonder how many sites can recover a file that existed in its proper
    state only on one day 11 months ago?  I'm sure there are some, but I
    don't think there are many.  
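
    (On the point about tapes rarely being read back: BACKUP itself can help
    a little here; a small sketch, with an invented tape drive and save-set
    name:)

        $ ! /VERIFY makes BACKUP re-read and compare what it has just written.
        $ BACKUP/RECORD/VERIFY USER$DISK:[PROJECT...]*.*;* MUA0:PROJECT.BCK/REWIND
        $ ! An occasional listing of an old save set at least proves the tape
        $ ! is still readable.
        $ BACKUP/LIST MUA0:PROJECT.BCK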
   
    My statement about it being irresponsible and unprofessional to not
    archive your work was poorly worded.  I meant to say that it is
    irresponsible and unprofessional to not ensure that your work is being
    properly archived.  Given that few sites do competent archival this
    means figuring out how to get the systems people to change, or doing it
    yourself.
                                                                 
    My comment about doing it yourself is based on my personal reality. 
    Stow has very competent systems folk so I have no current complaints,
    but I have experienced various levels of frustration in trying to get
    procedures changed in previous positions.  I have run the gamut from
    personally doing my own backups to (when I managed a large group)
    firing the systems group and hiring my own people.  During the current
    climate it is difficult to cause changes that will incur more cost this
    quarter, even if they can be shown to save money over time.  I have
    only small amounts of time to expend on getting others to do their job
    properly, and the time I do spend usually gets me in trouble somehow. 
    Not to digress, but I recently got flack about using express-mail at
    $10/package for next day delivery, rather than demand shuttle @
    $20/package for usually next day delivery.  Because the $20 is `funny
    money'.  Tell me how to fix the cross-charging system to either track
    real costs, or to flag when the service is internally supplied?  Or
    tell me how to get everyone trained to be able to print Postscript
    files, assuming they can absorb a 20,000 block file?
   
    joel
1144.45. "I hope DIS doesn't read this..." by SCAACT::AINSLEY (Less than 150 kts. is TOO slow) Thu Aug 02 1990 10:16 (8 lines)
re: .43

Yes, there is a limit on the number of documents most people can save, but you
can use the function that converts a document to a VMS file.  You do lose things
like the rulers and other stuff, but it does allow one to have almost unlimited
storage.

Bob
1144.46. "Come on now! Get serious" by RBW::WICKERT (MAA USIS Consultant) Fri Aug 03 1990 10:59 (21 lines)
    
    re ALL-IN-1 being used as a place to "hide" large files...
    
    Come on now! Let's use a little common sense and get back to that
    what's good for the company discussion. IS groups have just as tough
    a time justifying equipment in this day and age as anyone else, and
    sucking up disk space because you can't justify it yourself is
    ludicrous! If you told me it's because the file is critical and
    therefore needs a more "protected" environment (i.e. regular backups and
    such) then that's something else. Roll the damn thing off to tape.
    
    You're just creating more workload for a group that is as shorthanded as
    any other. It adds additional overhead to the ALL-IN-1 database, which
    is about as touchy a product as I've ever seen.
    
    Sometimes it's games like this that cause system managers to start
    being adversaries vs partners.
    
    -Ray
    
    
1144.47. "I'm sorry Mr. Customer, but the computer won't let me create your proposal..." by SCAACT::AINSLEY (Less than 150 kts. is TOO slow) Fri Aug 03 1990 12:52 (8 lines)
re: .46

I'm not advocating that as any kind of permanent solution to a problem.  But
when you consider that getting a document quota increase, in some areas, 
requires a cost center managers signature who may be 500 miles away, it's nice
to know that there is a temporary workaround.

Bob
1144.48. "Technical note -- yes, it IS PostScript's fault!" by WORDY::JONG (Steve Jong/T and N Pubs) Wed Aug 08 1990 15:22 (25 lines)
    Anent .41 (Tom Welsh): Referring to my "20,000-block PostScript file,"
    you commented:
    
    > It seems a bit hard to say "it's PostScript's fault". Yet again, you
    > seem to be indulging in wishful thinking and wanting to get something
    > for nothing. You want to use less space, use plain text and leave out
    > the graphics. It'll take well under 1000 blocks. Use RUNOFF and line
    > graphics if that's adequate. But if you REALLY NEED bitmapped graphics,
    > there is a price and you must pay it. Is it worth it?
    
    Actually, I am very interested in including graphics in my
    documentation.  And yes, it *is* PostScript's fault!  The to-print
    version of my document is 22,040 blocks in PostScript.  The Interleaf
    print file from which the PostScript file was derived is less than
    4,000 blocks.  Maybe Interleaf doesn't convert to PostScript
    efficiently.  Then again, neither does the Apple Macintosh, which can
    produce PostScript files of similar bulkiness.
    
    I understand Adobe has released Version 2 of PostScript, which promises
    to generate smaller graphics files.
    
    At any rate, in the real world we sometimes generate very large files.
    At this very moment, my home disk on my cluster is all but full.
    If I could be doing work, I'd be doing it, but I'm frozen out, so I'm
    replying to you, Tom 8^(
1144.49. "PS inefficient? I didn't know that!" by COUNT0::WELSH (Tom Welsh, freelance CASE Consultant) Thu Aug 09 1990 05:20 (17 lines)
	 re .48:

	I see, Steve. You were criticizing PostScript because you feel
	it takes a lot more space than it needs to - not, as I assumed,
	because you didn't understand how much space it takes to store
	bitmapped graphics.

	It sounds as though you know a lot more about the subject than I
	do, so I sit corrected. Thanks for the feedback.

	However, I think my general point is still valid: that the new
	capabilities of displaying and printing graphics and high quality
	text have an associated price in terms of disk space and processing
	power - and that price is quite high. This should lead us to look
	for ways to streamline the use we all make of such material.

	/Tom	
1144.50. "Yes, actually" by WORDY::JONG (Steve Jong/T and N Pubs) Thu Aug 16 1990 18:08 (7 lines)
    No problem, Tom!
    
    I suspect we (we technical communicators) would like to make more and
    more use of graphics, bit-mapped and otherwise, and color too.  You're
    right -- the storage costs will be high.  But I am not deterred from
    wanting to use them!  I just hope someone finds a way to streamline
    storage and transmission.