
Conference decwet::hsm-4-unix

Title:HSM for UNIX Platforms
Notice:Kit Info in note 2.1 -- Public Info Pointer in 3.1
Moderator:DECWET::TRESSEL
Created:Fri Jul 08 1994
Last Modified:Wed Jun 04 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:238
Total number of notes:998

228.0. "meaning of messages from qjmd -s" by FURKA::"[email protected]" (Wil Bergman) Fri Jan 17 1997 00:56

T.R    Title                      User                           Personal Name  Date                   Lines
228.1                             DECWET::TRESSEL                Pat Tressel    Fri Jan 17 1997 12:53     40
228.2  Answers to your questions  NETRIX::"[email protected]"  Wil Bergman    Wed Jan 22 1997 23:33    203
228.3                             DECWET::TRESSEL                Pat Tressel    Thu Jan 23 1997 13:40    201
228.4  Answer on 228.3            NETRIX::"[email protected]"  Wil Bergman    Mon Jan 27 1997 00:44     83
Pat,

	The customer supplied me with the info you requested.

>Does their LOG_MAIL line look like this?
>
>  # fgrep LOG_MAIL /usr/efs/efscfg
>  LOG_MAIL /usr/bin/mail -r qr root
>

Within the file /usr/efs/efscfg, the line "LOG_MAIL /usr/bin/mail -r qr root"
has been defined.

The only thing I ever got was on 10 January 1997:


Date: Fri, 10 Jan 1997 13:12:55 +0100

An attempt to migrate a file of size 0 from file system 101 failed.
This may be due to:
 * A lack of room in the jukebox.
 * There may not be any platters assigned to this EFS and none left 
   in the free pool.
 * The platters assigned to this EFS are not owned by the qr (owner 5).
 * All platters may be in a non-INTERNAL state.
 * Otherwise qualified platters may all have Guest IDs.
Please check the status of the jukebox platters, add more if necessary, 
and/or consult with Digital Customer Support.  jmd reports 'Platter not
found.'

At that time the system had been rebooted.

>
>They *do* have a rather large number of "invalid state"s though:
>
>> 101  12238  JE_INVSTATE
>
>Has the system been up a long time?  These counters are not cleared
>except
>

No, the system has only been up since January 10th, 1997:

17:03  up 14 days,  3:54,  1 user,  load average: 0.00, 0.00, 0.00

>Is this number increasing? or does it stay the same?  

The number is currently still increasing every minute.
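
For reference, a loop like the one below is enough to watch that counter from
the shell (only a sketch -- it assumes "qjmd -s" prints the per-filesystem
counters, including the JE_INVSTATE column, the way they appear in the output
quoted earlier in this topic):

    # Take a timestamped JE_INVSTATE reading once a minute; stop with Ctrl-C.
    while true
    do
        date
        qjmd -s | grep JE_INVSTATE
        sleep 60
    done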


>So please ask if they're doing backup, and if not, encourage them to do
>so.
>If they are using NetWorker for backup, then there's a simple trick to
>avoid
>unshelving files during backup, which I can tell you about if it's
>wanted.
>

Yes, we're doing backups with NetWorker, and I am interested in the simple trick.

>
>Second (and this one is for this case specifically):  Are they getting
>any
>errors to commands, or errors in their jlog or qrlog or agerlog?  (Look
>for
>the word ERROR in these logs.)  That is, are they seeing any *symptoms*
>to go
>along with the high "invalid state" count?
>- 
>

I have executed a 'grep ERROR' on jlog, qrlog and agerlog, but no errors are
reported so far.
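
(For completeness, that check boils down to something like the loop below; it
assumes the three logs live under /usr/efs -- substitute the real log directory
if they are kept elsewhere:)

    cd /usr/efs                    # assumed log location
    for f in jlog qrlog agerlog
    do
        echo "==== $f ===="
        grep ERROR $f
    done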

We are, however, also running AgerBulkMode for 10 hours every Sunday. Does this
have anything to do with this situation? Could the reboot have caused it? I do
not know whether this situation already existed before the reboot.



Kind Regards, Wil

[Posted by WWW Notes gateway]
228.5                             DECWET::TRESSEL                Pat Tressel    Mon Jan 27 1997 19:45     73
Wil --

> Within the file /usr/efs/efscfg, the line "LOG_MAIL /usr/bin/mail -r qr root"
> has been defined.

That's fine.  Turns out HSM doesn't send mail *every* time it sees a platter
in an inappropriate state for some command:  If the command is given by a
user, so that HSM can write an error message to their screen, it doesn't
bother with e-mail.

> An attempt to migrate a file of size 0 from file system 101 failed.

That is almost surely unrelated to the "invalid state" count.  HSM doesn't
migrate empty files.  (Because they have no contents, it wouldn't save any
space on magnetic to migrate them.  We now wish that we'd had HSM at
least record the empty file's directory information -- path, protections,
dates, etc. -- on optical, so it would be available in case the filesystem
had to be rebuilt from the platters.)  When HSM is asked to migrate an
empty file, it returns an error.  The rest of the message is just the standard
message for any migration failure -- none of it applies in this case.
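
(If it helps to confirm that such empty files exist on the EFS, a quick check
is the find command below -- /efs1 is only a placeholder for the real EFS mount
point:)

    # List zero-length regular files; these are the ones HSM declines to migrate.
    find /efs1 -type f -size 0 -print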

> The number is currently still increasing every minute.

> I have executed a 'grep ERROR' on jlog, qrlog and agerlog, but no errors are
> reported so far.

It's good that there are no errors.  We will probably need to change the
"log level" to see messages that are not errors.  To do that, change the
value on the LOG_LEVEL lines in efscfg and jmcfg to 6, then reconfigure
the qr and jmd.  (You can do this either by editing efscfg and jmcfg with
an ordinary text editor, then doing qqr -r and qjmd -r, or by using
edqr and edjm, which will reconfigure the qr and jmd when they exit.  If
edqr and edjm complain that the new log level value is not an allowed value,
make sure to delete the old value before typing the new one.)
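
In command form, the text-editor route looks roughly like this (a sketch only;
the jmcfg path is assumed to be alongside efscfg -- use edqr and edjm instead
if you prefer):

    vi /usr/efs/efscfg     # change the LOG_LEVEL value to 6
    vi /usr/efs/jmcfg      # change the LOG_LEVEL value to 6 (path assumed)
    qqr -r                 # reconfigure the qr so it rereads efscfg
    qjmd -r                # reconfigure the jmd so it rereads jmcfg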

With the log levels set to 6, HSM will record *everything*, so the logs will
get big.  So don't leave the log levels set to 6 for very long.  After setting
them to 6, wait for the JE_INVSTATE count to go up a few times, then set the
log levels back to what they were before (4 in efscfg and 2 in jmcfg).  Then
clip out the sections of the logs for the time they were set to level 6
(look for the word "reconfigured" to see when the level was changed), and
send those pieces to me.
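
One way to clip those sections out, assuming the word "reconfigured" shows up
in each log at both level changes and that the logs are under /usr/efs (both
are assumptions -- adjust as needed):

    # Print everything from the first "reconfigured" line through the next one.
    sed -n '/reconfigured/,/reconfigured/p' /usr/efs/jlog  > jlog.level6
    sed -n '/reconfigured/,/reconfigured/p' /usr/efs/qrlog > qrlog.level6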

Could you check if the active ager is running while the JE_INVSTATE count
is going up?  For instance, you could do "ps auxww | grep ager".  Also please
do qqr at a time when the count is going up.  Repeat qqr several times,
over a period long enough that the JE_INVSTATE count goes up.  (It would be
nice to catch a request in the queue, but if one doesn't show up there, we'll
see it in efscfg -- it's just a little more obscurely worded there...)
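
A small loop for that, if it helps (again only a sketch -- it snapshots the
ager processes and the qr queue every half minute; interrupt it once the
JE_INVSTATE count has gone up):

    while true
    do
        date
        ps auxww | grep ager | grep -v grep    # is the active ager running?
        qqr                                    # show the qr request queue
        sleep 30
    done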

Thanks!

> Yes, we're doing backups with NetWorker, and I am interested in the simple
> trick.

This was invented by one of our other customers, who has *very* large HSM
filesystems:  Do a single full backup (this could even be done when the
filesystem is first created, and is empty).  After that do only day-to-day
incremental backups.  Since the files created or changed each day are likely
to still be on magnetic, the backup will not need to read them from optical.

This would mean there might be current data even on very old tapes.  That
would require setting the index retention period to something large.  But
there's a way around that, which is to get the NetWorker Saveset Consolidation
feature (which is separately licensed -- it's only available for DEC NetWorker
servers).  Then Saveset Consolidation can construct a full backup out of all
the old incrementals and write it onto another tape (so this needs a tape
jukebox with at least two drives).  This would allow older incremental tapes
to be retired.  And as long as Saveset Consolidation was run more frequently
than the index retention period they use now, they'd be able to browse current
files, and still not have the indices get too large.

-- Pat
228.6  Reply on 228.4             NETRIX::"[email protected]"  Wil Bergman    Tue Jan 28 1997 05:29   1209
Pat, 
	The customer has executed your suggestions, and here are the results:

=======================Begin of Output to jlog ================================

 Debug2 (  833):  Received function 13 'getmount'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 13 (getmount)
  Debug (  833):  01/28/97 11:26:56 +++++ Executing function 13
(getmount)+++++
History (  833):  jgetdev(13,0x00ff009a)
 Debug2 (  833):  jmgetdev:  /mo1
  Debug (  833):  jgetdev rp 0x14005aec0  size 5  free 1  '/mo1'
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  REPLY3:  0x140057270
 Debug2 (  833):  Calling zsendreply()  reply 0x40057270 msg 0x40023ed8 size
89
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140055210
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message does NOT need reply
 Debug2 (  833):  REPLY2:  0x0
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 50 'usedvspace'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 50 (usedvspace)
   Note (  833):  check_access(0,0,0,0,00)
  Debug (  833):  01/28/97 11:26:56 +++++ Executing function 50
(usedvspace)+++++
History (  833):  jusedvspace(0x00ff009a,24576)
  Debug (  833):  01/28/97 11:26:56 jmusedvspace: returning 24576 bytes (3
blocks) to box 0, drive 1, for media ff009a
  Debug (  833):  01/28/97 11:26:56 jmusedvspace: usedvspace = 3
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140055210
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message does NOT need reply
 Debug2 (  833):  REPLY2:  0x0
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 50 'usedvspace'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140057000
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message needs reply
 Debug2 (  833):  REPLY2:  0x40057000
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 14 'statplat'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 14 (statplat)
  Debug (  833):  01/28/97 11:26:56 +++++ Executing function 14
(statplat)+++++
 Debug2 (  833):  jstatplat(0x00ff009a)
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  REPLY3:  0x140057000
 Debug2 (  833):  Calling zsendreply()  reply 0x40057000 msg 0x40023ed8 size
324
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 50 (usedvspace)
   Note (  833):  check_access(0,0,0,0,00)
  Debug (  833):  01/28/97 11:26:56 +++++ Executing function 50
(usedvspace)+++++
History (  833):  jusedvspace(0x00ff009a,24576)
  Debug (  833):  01/28/97 11:26:56 jmusedvspace: returning 24576 bytes (3
blocks) to box 0, drive 1, for media ff009a
  Debug (  833):  01/28/97 11:26:56 jmusedvspace: usedvspace = 6
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
  Debug (  833):  Client 6 went away
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140057270
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message needs reply
 Debug2 (  833):  REPLY2:  0x40057270
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 14 'statplat'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 14 (statplat)
  Debug (  833):  01/28/97 11:26:57 +++++ Executing function 14
(statplat)+++++
 Debug2 (  833):  jstatplat(0x00ff009a)
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  REPLY3:  0x140057270
 Debug2 (  833):  Calling zsendreply()  reply 0x40057270 msg 0x40023ed8 size
324
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
  Debug (  833):  Client 5 went away
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140055210
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message needs reply
 Debug2 (  833):  REPLY2:  0x40055210
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 2 'umount'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  Trying to select function 'umount'
 Debug2 (  833):  select_resources(flags 0xa0b)
 Debug2 (  833):  JSELPLAT
 Debug2 (  833):  select_platter(00ff009a,1,2)
 Debug2 (  833):  Platter ff009a selected
 Debug2 (  833):  JSELBOX
 Debug2 (  833):  select_box(0,6)
 Debug2 (  833):  Jukebox 0 selected
 Debug2 (  833):  JSELDRIVE
 Debug2 (  833):  Selecting platter's drive 1
 Debug2 (  833):  select_drive(0,1,1)
 Debug2 (  833):  Drive 1 box 0 selected
  Debug (  833):  01/28/97 11:26:57 findid():  Unable to find id 5 in box 0
  Debug (  833):  01/28/97 11:26:57 our_drive(box 0, drive 1, id 5)
  Debug (  833):  01/28/97 11:26:57 our_drive(): Box 0, drive 1 has no
specific references by id 5
  Debug (  833):  01/28/97 11:26:57 our_drive():  Box 1.434143e+00, drive 1
has 1 unallocated references
 Debug2 (  833):  Selection OK
  Debug (  833):  scheduler:  function is 2 (umount)
   Note (  833):  check_access(0,0,0,0,00)
  Debug (  833):  01/28/97 11:26:57 ******** Forking function code 2 ********
  Debug (29844):  01/28/97 11:26:57 mediatype = 1
  Debug (29844):  01/28/97 11:26:57 plattersw[0].mediatype = 1
  Debug (29844):  01/28/97 11:26:57 plattersw[1].mediatype = 8
  Debug (29844):  01/28/97 11:26:57 +++++ Executing function 2 (umount) +++++
History (29844):  jumount(0x00ff009a,"(null)",0x2,0)
  Debug (29844):  01/28/97 11:26:57 ufs_info
 Debug2 (29844):  sendback()
 Debug2 (29844):  Sending message back to main jmd
 Debug2 (29844):  jcom_sendback:  d 0x11ffffc50  stype 1  sp 0x140059500
spsize 240
  Debug (29844):  jcom_sendback:  trying zcreate_con
 Debug2 (29844):  jcom_sendback:  Sending 324 byte message
 Debug2 (29844):  sendback()
 Debug2 (29844):  Sending message back to main jmd
 Debug2 (29844):  jcom_sendback:  d 0x11ffffc50  stype 3  sp 0x14005b140
spsize 296
 Debug2 (29844):  jcom_sendback:  Sending 380 byte message
 Debug2 (29844):  sendback()
 Debug2 (29844):  Sending message back to main jmd
 Debug2 (29844):  jcom_sendback:  d 0x11ffffd70  stype 0  sp 0x0  spsize 0
 Debug2 (29844):  jcom_sendback:  Sending 84 byte message
  Debug (29844):  01/28/97 11:26:57 -------- Exiting process 29844 --------
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
   Note (  833):  Child 29844 died status 0x0
 Debug2 (  833):  sendback()
 Debug2 (  833):  sendback() in main jmd
 Debug2 (  833):  jcom_alloc:  d 0x11ffffd80  dsize 64  p 0x0  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  sendback:  jcom_alloc() returned OK
 Debug2 (  833):  sendback:  calling handle_request
 Debug2 (  833):  Received function 84 'killed'
  Debug (  833):  Scheduled request timeout 30 at 0x32edd471 stamp 0x32edd48f
 Debug2 (  833):  sendback:  handle_request returned OK
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140057180
 Debug2 (  833):  mess:  0x140057300  size 324
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x140057314  dsize 64  p 0x140057354  psize
240
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_alloc:  malloc returned 240 bytes for p
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message does NOT need reply
 Debug2 (  833):  REPLY2:  0x0
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 80 'upplatter'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x1400571b0
 Debug2 (  833):  mess:  0x140056a00  size 380
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x140056a14  dsize 64  p 0x140056a54  psize
296
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_alloc:  malloc returned 296 bytes for p
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message does NOT need reply
 Debug2 (  833):  REPLY2:  0x0
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 81 'updrive'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140057360
 Debug2 (  833):  mess:  0x14005af80  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af94  dsize 64  p 0x14005afd4  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message does NOT need reply
 Debug2 (  833):  REPLY2:  0x0
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 83 'complete'
 Debug2 (  833):  jcom_receive()
  Debug (  833):  Client 6 went away
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 80 (upplatter)
  Debug (  833):  01/28/97 11:26:57 +++++ Executing function 80
(upplatter)+++++
  Debug (  833):  jupplatter()
  Debug (  833): 
jmupplatter(0xff009a,0xff009b,0xffffffffffffffff,-1,29844,0x8)
 Debug2 (  833):  copyjrec:  0xff009a to 0xff009a
 Debug2 (  833):  copyjside:  0xff009a to 0xff009a
  Debug (  833):  01/28/97 11:26:57 copyjside: vf(55229) += 6
 Debug2 (  833):  copyjside:  0xff009b to 0xff009b
  Debug (  833):  checksignal(0x200)
  Debug (  833):  checksignal action number 9
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 81 (updrive)
  Debug (  833):  01/28/97 11:26:57 +++++ Executing function 81 (updrive)
+++++
  Debug (  833):  jupdrive()
  Debug (  833):  jmupdrive(0,1,29844,0x2)
  Debug (  833):  checksignal(0x400)
  Debug (  833):  checksignal action number 10
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 83 (complete)
  Debug (  833):  01/28/97 11:26:57 +++++ Executing function 83
(complete)+++++
  Debug (  833):  jcomplete(pid 29844)
 Debug2 (  833):  complete_request:  function 2
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  platter 00ff009a 00ff009b unselected
  Debug (  833):  01/28/97 11:26:57 findid():  Unable to find id 5 in box 0
  Debug (  833):  01/28/97 11:26:57 giveup_drive(box 0, drive 1, id 5)
  Debug (  833):  01/28/97 11:26:57 giveup_drive():  giving up box 0, drive 1
to 'unallocated'
 Debug2 (  833):  Drive 1 box 0 unselected
 Debug2 (  833):  Box 0 unselected
  Debug (  833):  checksignal(0x2000)
  Debug (  833):  checksignal action number 13
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  REPLY3:  0x140055210
 Debug2 (  833):  Calling zsendreply()  reply 0x40055210 msg 0x40023ed8 size
84
 Debug2 (  833):  jcom_receive()
  Debug (  833):  Client 5 went away
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  zgetnextmsg:  zerrno 0 errno 0
 Debug2 (  833):  REPLY1:  0x140055480
 Debug2 (  833):  mess:  0x14005af00  size 84
 Debug2 (  833):  jcom_receive:  message ok
 Debug2 (  833):  jcom_alloc:  d 0x14005af14  dsize 64  p 0x14005af54  psize 0
 Debug2 (  833):  jcom_alloc:  malloc returned 40 bytes
 Debug2 (  833):  jcom_alloc:  malloc returned 64 bytes for d
 Debug2 (  833):  jcom_receive:  jcom_alloc ok
 Debug2 (  833):  message needs reply
 Debug2 (  833):  REPLY2:  0x40055480
 Debug2 (  833):  jcom_receive:  returning valid message
 Debug2 (  833):  Received function 26 'pid'
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
  Debug (  833):  scheduler:  function is 26 (pid)
  Debug (  833):  01/28/97 11:26:57 +++++ Executing function 26 (pid) +++++
History (  833):  jpid()
 Debug2 (  833):  unselect_resources()
 Debug2 (  833):  REPLY3:  0x140055480
 Debug2 (  833):  Calling zsendreply()  reply 0x40055480 msg 0x40023ed8 size
84
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0
 Debug2 (  833):  select_request()
 Debug2 (  833):  jcom_receive()
 Debug2 (  833):  receive_request() returning 0


======================End of Output to jlog ================================

======================Begin of Output to qrlog ================================

qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1707008, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1717760, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1728512, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1739264, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1750016, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8 
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1760768, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1771520, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1782272, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1793024, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1803776, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1814528, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1825280, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1836032, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1846784, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1857536, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1868288, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1879040, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1889792, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1900544, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1911296, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1922048, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1932800, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1943552, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1954304, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1965056, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1975808, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1986560, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 1997312, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2008064, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2018816, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2029568, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2040320, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2051072, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2061824, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2072576, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2083328, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2094080, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2104832, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2115584, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2126336, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2137088, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2147840, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2158592, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2169344, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2180096, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2190848, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2201600, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2212352, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2223104, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2233856, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2244608, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2255360, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2266112, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2276864, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2287616, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2298368, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2309120, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2319872, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2330624, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2341376, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2352128, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2362880, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2373632, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2384384, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2395136, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2405888, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2416640, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2427392, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2438144, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2448896, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2459648, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2470400, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2481152, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2491904, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2502656, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2513408, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2524160, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2534912, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2545664, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2556416, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2567168, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2577920, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2588672, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2599424, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2610176, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2620928, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2631680, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2642432, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2653184, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2663936, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2674688, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2685440, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2696192, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2706944, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2717696, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2728448, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2739200, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2749952, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2760704, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2771456, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2782208, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2792960, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2803712, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2814464, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2825216, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2835968, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2846720, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2857472, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2868224, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2878976, Dir = 1
qr   Debug (29545):  01/28/97 11:25:56 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:56 ReadInfo - offset = 2889728, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2900480, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2911232, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2921984, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2932736, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2943488, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2954240, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2964992, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2975744, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2986496, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 2997248, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3008000, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3018752, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3029504, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3040256, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3051008, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3061760, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3072512, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3083264, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3094016, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3104768, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3115520, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3126272, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3137024, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3147776, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3158528, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3169280, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3180032, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3190784, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3201536, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3212288, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3223040, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3233792, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3244544, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3255296, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3266048, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3276800, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3287552, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3298304, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3309056, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3319808, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3330560, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3341312, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3352064, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3362816, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3373568, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3384320, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3395072, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3405824, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3416576, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3427328, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3438080, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3448832, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3459584, Dir = 1
qr   Debug (29545):  01/28/97 11:25:57 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:57 ReadInfo - offset = 3470336, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3481088, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3491840, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3502592, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3513344, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3524096, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3534848, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3545600, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3556352, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3567104, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3577856, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3588608, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3599360, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3610112, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3620864, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo6
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3631616, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo8
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3642368, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo0
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3653120, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo2
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3663872, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ODMReadRefnumByOffset - calling
ReadInfo4
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - offset = 3674624, Dir = 1
qr   Debug (29545):  01/28/97 11:25:58 ReadInfo - EOF
qr   Debug (29545):  01/28/97 11:25:58 ODMWriteFile - ThisRec.MagFileName =
01_1
qr   Debug (29545):  01/28/97 11:25:58 InvAdd - inventory file
/mo1/101/14/DirIv
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - offset = 512, offset_incr =
2
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - begin_offset = 8192
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 8192
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 18944
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 29696
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 40448
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 51200
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 61952
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 72704
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 83456
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 94208
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 104960
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 115712
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 126464
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 137216
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 147968
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 158720
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 169472
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 180224
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 190976
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 201728
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 212480
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 223232
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 233984
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 244736
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 255488
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 266240
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 276992
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 287744
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 298496
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 309248
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 320000
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 330752
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 341504
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 352256
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 363008
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 373760
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 384512
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 395264
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 406016
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 416768
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 427520
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 438272
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 449024
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 459776
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 470528
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 481280
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 492032
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 502784
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 513536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 524288
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 535040
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 545792
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 556544
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 567296
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 578048
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 588800
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 599552
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 610304
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 621056
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 631808
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 642560
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 653312
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 664064
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 674816
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 685568
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 696320
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 707072
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 717824
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 728576
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 739328
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 750080
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 760832
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 771584
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 782336
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 793088
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 803840
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 814592
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 825344
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 836096
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - bytesread = 1536
qr   Debug (29545):  01/28/97 11:25:58 WriteInfo - lseek() to 846848

======================End of Output to qrlog ================================


>
>Could you check if the active ager is running while the JE_INVSTATE
>count
>is going up?  For instance, you could do "ps auxww | grep ager".  Also
>please
>

Done; The ager is not running.

>
>do qqr at a time when the count is going up.  Repeat qqr several times,
>over a period long enough that the JE_INVSTATE count goes up.  (It would
>be
>nice to catch a request in the queue, but if it doesn't, we'll see it in
>efscfg -- it's just a little more obscurely worded there...)
>
>Thanks!
>

Done: The output follows:

============================ start of output of qqr
===========================

nlcmpd1 $ qqr
qr (838) up at Fri Jan 10 13:11:27 1997

qr status: Active

Current Platter Time Quantum: 180 seconds

Priority 1

Priority 2

Priority 3

Priority 4

Priority 5

Priority 6

Priority 7

Priority 8

Priority 9

Priority 10

Following file systems are being serviced.

Fsid     TimeRef  BSize HWM  Mountpt
----     -------  ----- ---  -------
101      30f632c1 8192   90% /nlcmpd1_1 

Platters currently being serviced.

Platter ff009a (served by Child 29205).
Current Requests:
Make Directory 
        <fs=101, platter=ff009a, child i/g=655001/32769, parent
i/g=8021/32771>

New Requests:
Make Directory 
        <fs=101, platter=ff009a, child i/g=655002/32769, parent
i/g=655001/3276>

Checkpoint level is currently 10.
Checkpoint Interval is currently 3.

============================ end of output of qqr ===========================


Hope this info will suffice.

Kind Regards, Wil
[Posted by WWW Notes gateway]
228.7DECWET::TRESSELPat TresselTue Jan 28 1997 14:0373
Wil --

The only thing I can see in the jlog output is that there appear to be two
requests for platter side "5" -- that's a temporary id, given by HSM to an
as-yet-unnamed platter.  There isn't any such platter id in their jukebox now,
though there probably was at some time in the past.  There may be some file
that AdvFS believes is backed to that no-longer-existing platter, so that
AdvFS is continually re-issuing some request.  To rule this latter possibility
out (or confirm it), they can use shelvefind to ask AdvFS for any files backed
to platter side "5".  For instance:

  shelvefind /nlcmpd1_1 -plat 5 -ls > files_on_platter_side_5 &

This may take a while to run...

                                 * * *

If that doesn't turn anything up, we'll probably need to collect the log info
again, because...

Unfortunately, the times don't overlap on the clippings from the jlog and
qrlog -- the jlog is at about 11:26, but the qrlog ends before then.  And
the qqr output was for several hours later.  What I need is to have them
all for the same time period, and to have a long enough period to see several
instances of whatever is incrementing JE_INVSTATE.

Since it's not errors I'm looking for, nor even warnings, what I'm hoping to
do is compare the operations that show up in the log with the number of times
JE_INVSTATE gets incremented.  So it would be best to know what the count is
at several times during the time the log_level is set to 6.

If they are willing to re-do it, could they let it run for a bit longer, and
also find out the count of errors at a few times, far enough apart that it will
have incremented several times in between?  Since qjmd does not report the
time, they could, for instance, put "date" and "qjmd" in a script.  This would
collect the count and the date closer together than typing the commands by hand
would.  And they may as well throw the qqr command into this script as well.

The script could be something like:

  #! /bin/sh
  for xxx in 1 2 3 4 5 6
  do
    date
    qjmd
    qqr
    date
    sleep 30
  done

This will run for 3 minutes -- if they don't think that's long enough for
the count to go up (say) 10 times, they can make it longer by adding more
numbers to the for loop.

So the process is:

  Set the log level in efscfg and jmcfg to 6.

  Run the script and capture the output, e.g. if they call the script
  get_counts:
    get_counts > get_counts.log

  Set the log level back down.

  Send me the pieces of jlog and qrlog that were at level 6, surrounding the
  time the script was active, and the script output.

                                 * * *

Since it's shorter, have them try shelvefind first, and only collect the log
data if shelvefind doesn't find anything.

-- Pat
228.8Your requested infoNETRIX::&quot;[email protected]&quot;Wil BergmanThu Feb 06 1997 03:197
Pat, I received info from the customer.
It is rather large, so I mailed it to you.
Maybe it would be wise to enter the relevant parts later on in this note,
so that others can benefit from your expertise.

Kind Regards, Wil
[Posted by WWW Notes gateway]
228.9DECWET::TRESSELPat TresselThu Feb 06 1997 18:215
Wil --

Thanks!  I got the info, and am looking at it.

-- Pat
228.10JE_INVSTATE is ok, but those requests that aren't being processed are badDECWET::TRESSELPat TresselSun Feb 09 1997 02:17147
Wil --

It turns out that JE_INVSTATE doesn't just mean that a platter had the
wrong group or owner, etc. -- it can also mean that the qr asked for
space on a particular platter side, and that side was full.  In the jlog
were many lines like this:

   Note (  833):  reservevs:  platter ff001a failed side criteria

Side ff001a is full.  If the qr can't get the platter it "wants", it asks
the jmd to pick the best platter to use.  Here, the jmd picked ff009a
consistently.

So, the JE_INVSTATE count, and the reservevs failure count are not a
problem by themselves.  But...

All those "Make Directory" requests that show up in the qqr output look
like they *are* a problem.  There are hundreds of them, and they all follow
the same pattern:  The requests are in pairs -- the second "Make Directory"
of each pair is a child directory in the directory that appears in the first
"Make Directory".  For instance:

  Make Directory
      <fs=101, platter=ff009a, child i/g=710587/32769, parent i/g=8025/32771>
  Make Directory
      <fs=101, platter=ff009a, child i/g=710588/32769, parent i/g=710587/32769>
  Make Directory
      <fs=101, platter=ff009a, child i/g=710590/32769, parent i/g=8025/32771>
  Make Directory
      <fs=101, platter=ff009a, child i/g=710591/32769, parent i/g=710590/32769>

The inodes of these new directories are almost in numerical sequence -- there
are not many gaps between them.  This means that most of the new entries in
this filesystem are pairs of parent and child directories.  Although this
could be legitimate, it seems unusual.  It would be helpful to find out if
the applications used in this filesystem typically create such pairs of
parent and child directories or if some user is doing this sort of thing,
and how often they typically do so.

Over the three minutes that the script that collected qqr and qjmd output ran,
*none* of these requests got serviced.  And the qr that is assigned to service
them (pid 4833) did not change, so they are not continually being re-queued.
Also during this time, only two new requests were reported in the qrlog,
while there were many requests from a qr to a jmd in the jlog.  So this qr
4833 is probably trying to service these requests, but nothing is happening.
We need to find out why.  Unfortunately, this qr isn't logging anything.

First, let's find out if any requests have been processed since they ran
the qqr/qjmd script.  Ask them to do qqr again, then check if the requests
shown above are still there.  If they are, is process 4833 still servicing
them?  To tell, look at the line in the qqr output that says something like
this:

  Platter xxxxx (served by Child nnnnn).

Is there one of these that has nnnnn = 4833 in it?
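
A quick way to pull out just those lines from the qqr output might be the
following one-liner (a small sketch; qqr writes to standard output, as in the
captures above):

  qqr | grep 'served by Child'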

If there is, let's find out if that process is actually running.  First do:

  ps lww -p 4833

to see if the process is still there, and what state it's in.  Then, if it's
there, wait a while (maybe 10 minutes) and do this again, to see if the
runtime has gone up.  (There's probably a better way to check if the process
is doing anything...)
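
One rough way to automate that check -- just a sketch, with an arbitrary
ten-minute interval -- is a small loop around the same ps command:

  #! /bin/sh
  # watch whether pid 4833 accumulates CPU time; the interval and the
  # number of repetitions are only suggestions
  for xxx in 1 2 3
  do
    date
    ps lww -p 4833
    sleep 600
  done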

Also worrying is the fact that the two new requests that came in were also
a pair of "Make Directory" requests, again for parent and child directories.
But their inodes were considerably higher than the ones shown in the qqr
output.  The amount of queue information qqr will display is limited -- for
these requests, about 500 can fit in qqr's text buffer.  Each qqr output
shows 509 of these Make Directory requests, so there may be more that are
not being displayed.  What this means is that there may be something that is
continually sending pairs of requests like these, and that *all* of them are
waiting in the queue.

(At least the requests haven't been stuck for a huge amount of time -- the
qqr output that was posted in 228.6 is dated Jan 10, and only shows two Make
Directory requests.  They, also, are a parent/child pair.)

We should check some of the directories that are in the list of requests,
to see if they are really in the filesystem.  Let's check the first pair
in the list (and their parent directory), and also the later requests in
the qrlog:

  find /nlcmpd1_1 \( -inum 8025 -o -inum 710587 -o -inum 710588 -o -inum 8026 -o \
  -inum 713509 -o -inum 713510 \) -ls

It's possible that there is something wrong with platter ff009a or
the drive it's in -- this is very unlikely, but possible.  Because I'm
going to recommend shutting down and restarting HSM, let's protect against
the possibility that this platter might get a read error when the
jmd next mounts the platter, causing the jmd to run fsck on it.  We *don't*
want fsck to do anything to the platter, so we should turn off the option,
JMD_FSCK, in jmcfg that says all fsck questions should be answered "yes".
We want them to be answered "no".  So, could you please ask the customer
to comment out the JMD_FSCK line in /usr/efs/jmcfg by putting a # in front
of it, like this:

  #JMD_FSCK        y

They can do this with an ordinary text editor.
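
If they prefer, the same change can be made from the shell -- a minimal
sketch, which keeps a backup copy of the file first:

  cp /usr/efs/jmcfg /usr/efs/jmcfg.bak
  sed 's/^JMD_FSCK/#JMD_FSCK/' /usr/efs/jmcfg.bak > /usr/efs/jmcfg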

(I have had more trouble due to platters getting "repaired" by fsck (because
they had a speck of dust on them or the optical drive was giving trouble)
than I've *ever* had from platters being unmountable because I didn't let
fsck "fix" them, so I now leave this this option turned off all the time.
Fsck can still reset the "dirty filesystem" flag even if this option is off.)

After changing this, can they schedule a time that they could umount the
filesystem and restart HSM?  Since they're not going to do anything to the
jukebox, and they're going to run the normal HSM shutdown procedure, they
can tell the jmd not to inventory the platters when it starts.  What they
should do is this:

  - Save a copy of /usr/efs/PolycenterHSM.startup.  Then edit this file and
    change the line "/usr/efs/bin/jmd" to read:

       /usr/efs/bin/jmd -i none

  - Umount the filesystem /nlcmpd1_1.

  - Shut down HSM with its shutdown script:

       /usr/efs/PolycenterHSM.shutdown

  - Wait for the qr and jmd processes to go away.

  - Start HSM again:

       /usr/efs/PolycenterHSM.startup

  - Mount the filesystem again.

  - Put back the original /usr/efs/PolycenterHSM.startup.  (Don't want to
    leave it with "no inventory", in case the system goes down without
    running HSM's shutdown procedure, or in case anything is done to the
    jukebox.)

  - Run the qqr and qjmd script again, to see if the Make Directory requests
    are now getting processed.
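
The middle steps could be collected in a small script -- a sketch only, which
assumes the standard /usr/efs paths, that /nlcmpd1_1 has an fstab entry, that
the qr binary lives in /usr/efs/bin (an assumption), and that the startup file
has already been edited with the original saved as PolycenterHSM.startup.orig:

  #! /bin/sh
  umount /nlcmpd1_1
  /usr/efs/PolycenterHSM.shutdown
  # wait for the qr and jmd processes to exit (the qr path is an assumption)
  while ps auxww | egrep '/usr/efs/bin/(qr|jmd)' | grep -v egrep > /dev/null
  do
    sleep 10
  done
  /usr/efs/PolycenterHSM.startup
  mount /nlcmpd1_1
  # put back the original startup file (with the inventory enabled)
  cp /usr/efs/PolycenterHSM.startup.orig /usr/efs/PolycenterHSM.startup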

If the requests are still not being processed, I'll make a special version
of the qr with more messages.

-- Pat

228.11answers for 228.10NETRIX::&quot;[email protected]&quot;Wil BergmanThu Feb 13 1997 01:38245
>It turns out that JE_INVSTATE doesn't just mean that a platter had the
>wrong group or owner, etc. -- it can also mean that the qr asked for
>space on a particular platter side, and that side was full.  In the jlog
>were many lines like this:
>
>   Note (  833):  reservevs:  platter ff001a failed side criteria
>
>Side ff001a is full.  If the qr can't get the platter it "wants", it
>asks
>the jmd to pick the best platter to use.  Here, the jmd picked ff009a
>consistently.
>
>So, the JE_INVSTATE count, and the reservevs failure count are not a
>problem by themselves.  But...
>
>All those "Make Directory" requests that show up in the qqr output look
>like they *are* a problem.  There are hundreds of them, and they all
>follow
>the same pattern:  The requests are in pairs -- the second "Make
>Directory"
>of each pair is a child directory in the directory that appears in the
>first
>"Make Directory".  For instance:
>
>  Make Directory
>      <fs=101, platter=ff009a, child i/g=710587/32769, parent
>i/g=8025/32771>
>  Make Directory
>      <fs=101, platter=ff009a, child i/g=710588/32769, parent
>i/g=710587/32769>
>  Make Directory
>      <fs=101, platter=ff009a, child i/g=710590/32769, parent
>i/g=8025/32771>
>  Make Directory
>      <fs=101, platter=ff009a, child i/g=710591/32769, parent
>i/g=710590/32769>
>
>The inodes of these new directories are almost in numerical sequence --
>there
>are not many gaps between them.  This means that most of the new entries
>in
>this filesystem are pairs of parent and child directories.  Although
>this
>could be legitimate, it seems unusual.  It would be helpful to find out
>if
>the applications used in this filesystem typically create such pairs of
>parent and child directories or if some user is doing this sort of
>thing,
>and how often they typically do so.
>

There are no users on the system.  This system is only used as an archiving
server.  The HSM set (16 Gbytes) is mounted via NFS on a Sun SPARCstation
running SunOS.  On the Sun, an application called DOCUPLEX (Xerox) is running.
This application (based on Oracle) scans drawings and archives them via NFS on
the archiving server.  We are contacting the supplier of DOCUPLEX to find out
whether creating the directories in this way is normal behaviour.

>
>Over the three minutes that the script that collected qqr and qjmd
>output ran,
>*none* of these requests got serviced.  And the qr that is assigned to
>service
>them (pid 4833) did not change, so they are not continually being
>re-queued.
>Also during this time, only two new requests were reported in the qrlog,
>while there were many requests from a qr to a jmd in the jlog.  So this
>qr
>4833 is probably trying to service these requests, but nothing is
>happening.
>We need to find out why.  Unfortunately, this qr isn't logging anything.
>
>First, let's find out if any requests have been processed since they ran
>the qqr/qjmd script.  Ask them to do qqr again, then check if the
>requests
>shown above are still there. 
>If they are, is process 4833 still
>servicing
>them?  To tell, look at the line in the qqr output that says something
>like
>this:
>
>  Platter xxxxx (served by Child nnnnn).
>
>Is there one of these that has nnnnn = 4833 in it?
>
>If there is, let's find out if that process is actually running.  First
>do:
>
>  ps lww -p 4833
>
>to see if the process is still there, and what state it's in.  Then, if
>it's
>there, wait a while (maybe 10 minutes) and do this again, to see if the
>runtime has gone up.  (There's probably a better way to check if the
>process
>is doing anything...)
>

We have executed qqr again, and all queued requests were gone.  The process
with pid 4833 was also no longer running.

>
>Also worrying is the fact that the two new requests that came in were
>also
>a pair of "Make Directory" requests, again for parent and child
>directories.
>But their inodes were considerably higher than the ones shown in the qqr
>output.  The amount of queue information qqr will display is limited --
>for
>these requests, about 500 can fit in qqr's text buffer.  Each qqr output
>shows 509 of these Make Directory requests, so there may be more that
>are
>not being displayed.  What this means is that there may be something
>that is
>continually sending pairs of requests like these, and that *all* of them
>are
>waiting in the queue.
>
>(At least the requests haven't been stuck for a huge amount of time --
>the
>qqr output that was posted in 228.6 is dated Jan 10, and only shows two
>Make
>Directory requests.  They, also, are a parent/child pair.)
>
>We should check some of the directories that are in the list of
>requests,
>to see if they are really in the filesystem.  Let's check the first pair
>in the list (and their parent directory), and also the later requests in
>the qrlog:
>
>  find /nlcmpd1_1 \( -inum 8025 -o -inum 710587 -o -inum 710588 -o -inum 8026 -o \
>  -inum 713509 -o -inum 713510 \) -ls
>

The output of the find follows:

------------------------- Begin of output of find ----------------------------


    8025  136 drwxrwxrwx 3251 rdms     dba        139264 Feb  6 10:56 
/nlcmpd1_1/cab01/drawer10
  710587    8 drwxrwxrwx  3 4001     4000         8192 Feb  5 06:15
/nlcmpd1_1/cab01/drawer10/7322-014-14952-11001
  710588    8 drwxrwxrwx  2 4001     4000         8192 Feb  5 06:15
/nlcmpd1_1/cab01/drawer10/7322-014-14952-11001/01_REV1
    8026  160 drwxrwxrwx 3725 rdms     dba        163840 Feb  6 07:08
/nlcmpd1_1/cab01/drawer11
  713509    8 drwxrwxrwx  3 4001     4000         8192 Feb  5 09:37
/nlcmpd1_1/cab01/drawer11/7322-014-25891-11001
  713510    8 drwxrwxrwx  2 4001     4000         8192 Feb  5 09:37
/nlcmpd1_1/cab01/drawer11/7322-014-25891-11001/01_REV1

------------------------- End of output of find ----------------------------

>
>It's possible that there is something wrong with platter ff009a or
>the drive it's in -- this is very unlikely, but possible.  Because I'm
>going to recommend shutting down and restarting HSM, let's protect
>against
>the possibility that this platter might get a read error when the
>jmd next mounts the platter, causing the jmd to run fsck on it.  We
>*don't*
>want fsck to do anything to the platter, so we should turn off the
>option,
>JMD_FSCK, in jmcfg that says all fsck questions should be answered
>"yes".
>We want them to be answered "no".  So, could you please ask the customer
>to comment out the JMD_FSCK line in /usr/efs/jmcfg by putting a # in
>front
>of it, like this:
>
>  #JMD_FSCK        y
>

JMD_FSCK has been commented out.

>
>They can do this with an ordinary text editor.
>
>(I have had more trouble due to platters getting "repaired" by fsck
>(because
>they had a speck of dust on them or the optical drive was giving
>trouble)
>than I've *ever* had from platters being unmountable because I didn't
>let
>fsck "fix" them, so I now leave this this option turned off all the
>time.
>Fsck can still reset the "dirty filesystem" flag even if this option is
>off.)
>
>After changing this, can they schedule a time that they could umount the
>filesystem and restart HSM?  Since they're not going to do anything to
>the
>jukebox, and they're going to run the normal HSM shutdown procedure,
>they
>can tell the jmd not to inventory the platters when it starts.  What
>they
>should do is this:
>
>  - Save a copy of /usr/efs/PolycenterHSM.startup.  Then edit this file
>and
>    change the line "/usr/efs/bin/jmd" to read:
>
>       /usr/efs/bin/jmd -i none
>
>  - Umount the filesystem /nlcmpd1_1.
>
>  - Shut down HSM with its shutdown script:
>
>       /usr/efs/PolycenterHSM.shutdown
>
>  - Wait for the qr and jmd processes to go away.
>
>  - Start HSM again:
>
>       /usr/efs/PolycenterHSM.startup
>
>  - Mount the filesystem again.
>
>  - Put back the original /usr/efs/PolycenterHSM.startup.  (Don't want
>to
>    leave it with "no inventory", in case the system goes down without
>    running HSM's shutdown procedure, or in case anything is done to the
>    jukebox.)
>
>  - Run the qqr and qjmd script again, to see if the Make Directory
>requests
>    are now getting processed.
>
>If the requests are still not being processed, I'll make a special
>version
>of the qr with more messages.
>

We are scheduling a time for tomorrow to execute the shutdown/startup of HSM.

>-- 
At this moment 3 out of 4 'disks' are full and the 4th disk is at approximately
50%.  I do not know how long we can continue in this situation, and I am getting
very nervous.  We have now been trying to get this to work for more than three
weeks.

[Posted by WWW Notes gateway]
228.12Looks like the requests were real; We can find out where the requests come from.DECWET::TRESSELPat TresselThu Feb 13 1997 12:09134
Wil --

> There are no users on the system.

I didn't mean to imply "users directly logged on to the HSM server".
I meant anyone or any application that uses the filesystem, whether
remotely or locally.  When a user or application creates files or
makes directories in an NFS-mounted filesystem, those same operations
are propagated back to the filesystem on the server.

> We have executed qqr again, and all queued request were gone. The
> process with pid 4833 was also not running anymore.

Good!!!  That means HSM is not stuck for some unknown reason.

The question remains:  Is there still something out there generating
those mkdir requests?  When we captured qrlog output for three minutes
with LOG_LEVEL set to 6 in efscfg, we saw one pair of mkdir requests.
Unfortunately this does not tell us how often they were arriving.
Why don't they try setting their LOG_LEVEL to 6 in efscfg for about 10
minutes, and see whether any Q_MKDIR requests show up?  Another indication
would be if they're still seeing JE_INVSTATE going up.
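
Two quick checks along those lines (a sketch; run the grep from whatever
directory holds qrlog, since the full log path is not shown here):

  grep -c Q_MKDIR qrlog             # how many mkdir requests have been logged
  qjmd -s | grep JE_INVSTATE        # has the invalid-state count gone up?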

> The HSM set (16 Gbytes) are mounted via NFS on a SUN Sparc station 
> with SunOS.
> ...
> We are contacting the supplier of DOCUPLEX to see if the
> making the directories is a normal behaviour.

>> We should check some of the directories that are in the list of
>> requests, to see if they are really in the filesystem.

> The output of the find follows:

So the first and the latest requests *did* make real directories.
This strongly implies that the requests were legitimate, and were
part of the operation of the application.  It might still be useful
to know what actions in the application can cause this creation of
huge numbers of directories.  Remember, though, that the users will
not be slowed down because HSM might happen to take a while to
finish recording all the new directories on optical -- the requests
to the AdvFS (magnetic) filesystem have already been serviced, so
the directories are all available.

At first, I thought there might be an NFS problem, i.e. some client
might have been continually retrying an operation.  With the
(relatively) new NFS 3 protocol, clients retry *much* faster than they
did under NFS 2.  If we think that fast retries are a problem (and
that may be what got HSM stuck -- contention between HSM and AdvFS,
perhaps, that took a while to clear up -- this is pure speculation...)
then we could consider having the clients use NFS 2 instead.  The only
drawback is that the filesize is then limited to 1Gb.

If there are still requests coming in, and particularly if there are
requests coming in *quickly*, then we may be able to identify where
the requests are coming from.

NFS can be examined from both directions -- from the client side and
from the server side.  The goal would be to identify whether an NFS
client, or an application running on a client, is sending the requests,
OR, if something local on the server (e.g. within the filesystem) is
generating the requests.  (The latter is somewhat unlikely.)

From the client side, the customer should:

  - Identify all the client machines that mount the filesystem.  For
    instance, are they listed in /etc/exports on the server?

  - See if there are any processes that are doing NFS requests
    to the HSM filesystem repeatedly.  I'm not familiar with SunOS,
    but I believe the NFS monitoring commands are similar to those
    used under Digital Unix.  In particular, they should look for
    the "nfswatch" command.

On the server:

  - If their kernel was built with the packetfilter option, then they
    can use nfswatch.  To check if they have the packetfilter option,
    they should do:

      fgrep -i packetfilter /sys/conf/HOSTNAME

    where HOSTNAME should be replaced by the name of the system in
    uppercase.

  - If they do *not* have the packetfilter option, they can add a line
    to their /sys/conf/HOSTNAME file that says:

      options PACKETFILTER

    then rebuild their kernel by doing:

      doconfig -c HOSTNAME

    (where HOSTNAME is as above) and answering no when it asks if they
    want to edit the configuration file (or they can add the line at
    that point, but they should be sure their EDITOR environment variable
    is set to some editor they like...).  After the kernel is built, they
    should save their current /vmunix somewhere (if / is getting full,
    they should copy it somewhere else), then move the new kernel from
    /sys/HOSTNAME to /, e.g.:

      mv /vmunix /vmunix.prePACKETFILTER
      mv /sys/HOSTNAME/vmunix /vmunix

    After the new vmunix is in place, they need to shut down HSM (and
    whatever other applications need to be shut down), then reboot.

  - When they are running a kernel with the packetfilter option, they need
    to make packetfilter devices by:

      cd /dev
      ./MAKEDEV pfilt

  - Finally, they need to allow their network interface to be set into
    "copyall" mode:

      pfconfig +copyall ln0

    where they should substitute the name of their network interface if
    it's not ln0.

    (For more information, see "man packetfilter" and "man pfconfig".
    Note, however, that the above was discovered partly by
    experimentation.)

  - Once they have the packetfilter option and devices, they can run
    nfswatch:

      nfswatch -clients

    This should show the names of clients that are sending NFS requests.

-- Pat
228.13DECWET::TRESSELPat TresselThu Feb 13 1997 13:179
> We are scheduling a time for tomorrow to executed the shutdown/startup
> of HSM.

That was only useful if the requests had not been serviced, and that same
qr process was still waiting.  Since they *have* been serviced, it's no
longer useful.

-- Pat
228.14DECWET::TRESSELPat TresselThu Feb 13 1997 14:2070
Wil --

I didn't get the impression that this was a new HSM site, but perhaps I'd
better ask:  How long have they had HSM?  Do they have people who are
experienced HSM administrators?

If they have had HSM running for some while, then we don't have to worry
that their jukebox configuration is wrong.

> At this moment 3 out of 4 'disks' are full and the 4th disk is at appr. 50%

What is their high-water-mark set to?  (qqr will report it, or it is in
their /usr/efs/efscfg file as AGER_HWM.)  Using the above numbers, their
magnetic filesystem is 7/8 full, i.e. about 88%.  If their high-water-mark
is set to 90%, then 88% full is legal.  Since they report free space on
each disk individually, they probably used the showfdmn command to get the
information.  If they didn't, could you ask them to run showfdmn on the
domain that their HSM filesystem is in?  If they would like to have more
space free on magnetic, then they can change the AGER_HWM.  But they
should only do this after they've run bulk ager (see below), and they
should only change it by a little at a time -- maybe 1% -- and wait for
the requests to clear out of the HSM queues, otherwise HSM may be too busy
to service other requests quickly.
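
For example, if the domain holding the HSM fileset were named hsm (just a
placeholder; use their actual domain name), the command would simply be:

  showfdmn hsm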

Please warn them:  They should *NOT* use the AdvFS features that balance
their filesystem!!!  This does *NOT* work with an HSM filesystem.  Nor
should they remove a volume from their filesystem!!!  It is ok to add a
volume.

Do they run the bulk ager every night?  If they do, then during the day,
when space is needed, HSM won't have to do much work to make space
available.  We recommend that they run the bulk ager, with AGER_BLWM = 0
and AGER_MINAGE = 0, not just for the above reason, but because it makes
sure all files are written out to optical.  Then if something happens to
the magnetic disks, the filesystem can be rebuilt, and will only be
missing files created or modified after the last bulk ager run.

Some sites like to set AGER_MINAGE to something larger during the day, so
that the active ager does not shelve new files.  (A value AGER_MINAGE = 72,
for instance, would tell the active ager not to shelve files unless they
were at least three days old.)  They make a script that changes AGER_MINAGE
to 0 before starting the bulk ager, and changes it back afterward.  I can
get a copy of such a script if they'd like it.  For now, if they're not
already running the bulk ager with AGER_MINAGE = 0, they should not worry
about setting AGER_MINAGE back to some non-zero value -- just set it to 0
and leave it (and make sure AGER_BLWM = 0), and *do* run the bulk ager.
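
A minimal sketch of such a wrapper (not the script mentioned above; it assumes
AGER_MINAGE lives in /usr/efs/efscfg, and leaves the bulk-ager start command as
a placeholder, since that part is site-specific):

  #! /bin/sh
  CFG=/usr/efs/efscfg
  cp $CFG $CFG.daytime                                 # keep the daytime settings
  sed 's/^AGER_MINAGE.*/AGER_MINAGE 0/' $CFG.daytime > $CFG
  # ... start the bulk ager here (site-specific; see the Admin Guide) ...
  cp $CFG.daytime $CFG                                 # restore the daytime settings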

> And how about documentation. I can't find anything
> about the statistics displayed with the 'qjmd -s' query

This is in the man page for qjmd:

  -s   Prints statistics, such as numbers of each type of request queued and
       serviced, success and failure rates, latency and execution time for
       requests, and the number of times each error code has been returned.

These are mostly useful if they need to get help from us.  HSM sends e-mail
to root for most problems that need attention from the local administrator.
It could be useful to ask, as they did, if something unusual appears in the
qjmd output.  But mostly these numbers are of use to us, in diagnosing
problems, rather than to them.

> We are now already
> trying to get this work for more then three weeks????

Could you ask them what they mean?  What is it that they were trying to do
for three weeks?  Is there something that's not working, that they haven't
reported yet?

-- Pat
228.16Reply to the customerDECWET::TRESSELPat TresselFri Feb 14 1997 14:34112
Wil --

The following is directed to the customer:

--------------------------------------------------------------------------

Hans --

I'm going to answer the questions a little out of order:

> As one can see the /mo0 - /mo2 are full. disk /mo3 is getting full. I am 
> concerned about this. This is because I do not know whether or not the
> system waits until all disks are full, before it moves the disks back to
> their location within the jukebox.

> So our second question is: When are the disks brought back to the jukebox

HSM usually only writes to one platter at a time, and only quits writing
to that platter when it's full.  (There's an exception, which I'll talk
about below, and which is what causes a JE_INVSTATE to be counted.)  This
keeps files created or changed at about the same time together.

Full platters stay full unless the user deletes or modifies a file that's
shelved to the platter.  If the HSM filesystem is being used for *archiving*,
meaning that no files are ever deleted, then once a platter fills up, it
will stay full.

HSM will not dismount platters until it needs to use a drive for some other
platter.  HSM has to mount any platter that a user needs to *read* from, not
just the platter that it is writing to.  HSM tries to keep "related" files
together.  So, if a user needed to read a file that was on a particular
platter, it is most likely that other files they need will be on the same
platter.  So HSM does not dismount platters, expecting that it would just
have to mount the same platter again.

So, the typical pattern of platter usage, in a filesystem where few files
are deleted or changed, is that some platters will be entirely full, one
will be partly full, and others will be entirely empty.

(As a contrast, in an *active* filesystem, that is, one in which users
continually change or delete files, there may be several partly-full
platters.   HSM will only go back and "fill in the gaps" when the platter
it's currently using for writing fills up.)

In any case, having full platters mounted is entirely normal:  HSM only
writes to one platter, but could be reading from any of them.

Does that help?

> So our first question is: What does the JE_INVSTATE mean

Every time HSM gets a request to write to a platter, it first checks whether
there is space on the particular platter that has files related to the one
that's now being written.  If this platter turns out to be full (as it is
here, where the parent directories of all the new directories are likely to
be on full platters), then HSM logs a JE_INVSTATE.  That is, HSM wanted to
use a particular platter, but it was full -- it was in the wrong "state" to
be used for writing.  If HSM can't use the particular platter it wants, it
goes through its list of platters and picks the best one.  This always
turns out to be the same platter, until that platter fills up.

In the log examples I saw, HSM first chose platter side ff001a.  (This
probably was an early platter in the filesystem, and has the parent
directories of lots of the new directories that are being created by the
application.)  But ff001a is full, so HSM logs JE_INVSTATE and says:

     Note (  833):  reservevs:  platter ff001a failed side criteria

It then goes through the entire platter list, and finds ff009a, which is
partly full, so it uses that one.

So these JE_INVSTATEs are happening because the platter HSM would like to
record the new directories on is frequently already full.  It doesn't
appear that there is any problem.

> what can we do to prevent it.

Nothing.  (The counter would be reset if HSM were restarted, but it's not
worth waiting for the jmd to inventory platters just to get the count
cleared.  You could turn off the inventory by changing the startup file,
restart HSM, and turn the inventory back on if you wanted.  This is in
section 6.8 of the Admin Guide.)

> Is there a relation between the 4 MO-disks and the JE_INVSTATE count.

Only very indirectly:  It is because this filesystem does not have
deletions and changes, which would make space available on platters that
were entered into the filesystem earlier.  If HSM found space on the
platter it wanted to use, then there would be no JE_INVSTATE.  The fact
that there are no deletions or changes is also the reason there are full
platters, empty platters, and one partly-filled platter.

> Since we 'normally' are triggered by exceptions, we are concerned about
> the high error count of Errno 101 (183737), see list above. this count is
> increasing every minutes/hour/day....

It's always good to notice anomalies, so do keep watching!!

The *real* anomaly was that there were requests that stayed in the queue
for a long time.  So rather than looking at qjmd, it is more useful to
look at qqr, which shows requests, and make sure the same requests are not
staying in the queue. Also, watch for e-mail to root from HSM; make sure
the bulk ager completes every night; watch for ERROR and FATAL in the
logs; and do backup regularly.

Running the bulk ager is the single most important thing to do.  Backup
to media that will be stored outside the jukebox should be done as well,
to guard against possible damage to the jukebox.  It may be possible,
because this is an archive filesystem, to do backup somewhat differently
than for an active filesystem.  What kind of backup are you doing now?

-- Pat
228.15Questions from the customerDECWET::TRESSELPat TresselFri Feb 14 1997 14:38190
Date:	14-FEB-1997 03:36:34.09
From:	AXPCOM::"[email protected]"
Subj:	Origin IT
To:	decwet::tressel

Pat,
	customer is a bit lost and very worried about the fact that
	3 platters are 100% full and the 4th (and last) is filling up
	rapidly.
	The JE_INVSTATE was of secondary interest, although he expected
	it was somehow related.
	So his real question now is "WHEN ARE THE FULL PLATTERS REPLACED
	BY NEW ONES ??"
	
	Following is the mail I received from the customer. Could you 
	please answer his questions in such a way that he becomes more
	confident that what he is experiencing is all OK?

Kind Regards, Wil


Hello wil,

It seems we're out of sync.  Therefore, let's start all over again.

Yes, we are inexperienced HSM administrators.
Yes, it is a new site.
But NO, we tested HSM for approximately 1 year before we went into
production.

How does our configuration look like:

We have 1 Sun SPARCstation.  On this machine an application called Docuplex
is running, making use of an Oracle database.

We also have 1 Digital Alpha 1000A computer.  On this machine the HSM
application is running. This machine acts as an archiving server for the
Docuplex machine.

On the archiving server we have defined an HSM volume set of four disks
(approx. 4 GB each), about 16 GB in total.  This volume set is mounted on the
Docuplex machine via NFS.  Connected to the archiving server is a jukebox with
room for 144 platters, currently filled with 36 platters.  Within the jukebox,
four MO-drives are used to read from and write to the platters.

Following is the configuration of the filesystems on the archiving server:


Filesystem        512-blocks        Used       Avail Capacity  Mounted on
root_domain#root      196608      131668       50592    72%    /
usr_domain#usr        819200      660906      134848    83%    /usr
var_domain#var        510752       66832      422896    14%    /var
var_domain#nsr        510752        6458      422896     2%    /nsr
hsm#set1            33520320    27216352     4732784    85%    /nlcmpd1_1
/dev/rz11c           1123314     1100806           0   109%    /mo2
/dev/rz10c           1123314     1100812           0   109%    /mo1
/dev/rz9c            1123314     1100814           0   109%    /mo0
/dev/rz12c           1123314      431458      579524    43%    /mo3

As one can see, the four MO-drives of the jukebox are mounted as /mo0 - /mo3.
The HSM volume set is mounted as /nlcmpd1_1.

What is our problem?

1) If we execute the command qjmd -s, which we think we need (among other
commands/programs) to proactively locate possible problems on the system,
we get the following output:

nlcmpd1 $ qjmd -s
Odometer: 2032116  (121973 forked)  0 currently forked
Total timeouts: 12  Timeout entries in queue: 0
Box inventories pending: 0
Box cleans pending: 0

Function    count  timed sched  fail    ok  latency     time avgltcy avgtime
--------    -----  ----- ----- ----- ----- -------- -------- ------- -------
mount       60981     12 60968     1 60968      480     8202       0       0
umount      60968      0 60968     0 60968      432     1586       0       0
otherside   59684      0 59684     0 59684      147        7       0       0
getmount    90303      0 90303     0 90303      159       14       0       0
statplat   568110      0 568110    0 568110    1189       26       0       0
getplat       814      0   814    22   792        3        1       0       0
statboxid       6      0     6     0     6        0        0       0       0
pid             8      0     8     0     8        0        0       0       0
dump           58      0    58     0    57        0        2       0       0
findbest   254717      0 254717    6 254711      562      719      0       0
modspace     8404      0  8404     0  8404       17       15       0       0
getstr          7      0     7     0     7        0        0       0       0
reservevs  197573      0 197573 183737 13836     578       20      0       0
signal          4      0     4     0     4        0        0       0       0
usedvspace 242535      0 242535    0 242535      696       79      0       0
upplatter  121997      0 121997    0 121997      867      179      0       0
updrive    121976      0 121976    0 121976     1040        7      0       0
upbox           1      0     1     0     1        0        2       0       2
complete   121973      0 121973    0 121973     1005      161      0       0
killed     121973      0 121973    0 121973  4121971        7     33       0
invbox          1      0     1     0     1        0       81       0      81
invslot        36      0    36     0    36     6776     1672     188      46
cleanbox        1      0     1     0     1      439        0     439       0
Forked     121986     12 121973     1 121973     7688    11541     0       0
Total     2032130     12 2032117 183766 1848351 4136361  12780     2       0

Errno  count  errtext
   64     28  JE_NOPLATTER
   70      1  JE_BUSY
   72     12  JE_TIMEOUT
  101  183737  JE_INVSTATE

                                
Since we 'normally' are triggered by exceptions, we are concerned about the
high error count of Errno 101 (183737), see the list above.  This count is
increasing every minute/hour/day....

---------------------------------- question 1 ---------------------------------

So our first question is: What does JE_INVSTATE mean, and what can we do to
prevent it?

---------------------------------- question 2 ---------------------------------


As one can see, /mo0 - /mo2 are full and disk /mo3 is getting full.  I am
concerned about this, because I do not know whether or not the system waits
until all disks are full before it moves the disks back to their location
within the jukebox.


So our second question is: When are the disks brought back to the jukebox?


---------------------------------- question 3 ---------------------------------

And our third question is: Is there a relation between the 4 MO-disks and the
JE_INVSTATE count?
                                            

I hope I gave enough information about our problems/concerns, and I am looking
forward to hearing from you.

Greetings,
Hans Kappert.


228.17Replacing full plattersNETRIX::&quot;[email protected]&quot;WIl BergmanMon Feb 17 1997 00:2717
Pat,
	I forwarded your answers to the customer, but
	could you enlighten me with an answer to the following questions, 
as I did not fully understand your answers to the customer.

You mentioned that it is normal in this situation to have multiple "full"
platters and 1 nearly full.  I understand that.

My questions are : 1)	What happens when the last platter gets full?
		   	Will a new platter automatically be pulled in and used, or
		   	does the customer have to do something?
		   2)	Where are the empty platters located (jukebox?), and how
			does HSM determine that?

Kind Regards, Wil
[Posted by WWW Notes gateway]
228.18What does HSM think is an empty platter?DECWET::TRESSELPat TresselMon Feb 17 1997 11:0671
Wil --

> where are the empty platters located (jukebox ?)

Some empty platters should be kept in the jukebox, so that HSM can get them
if it needs them.

> and how does HSM determine that.

I'm assuming that what you want to know is:  How does HSM know that it
can use some platter for shelving?

If a platter is already in use for shelving for a particular filesystem,
then HSM will use it, as long as it has space for the particular thing
HSM wants to write.

If HSM has to use a platter that it hasn't previously used for shelving,
then HSM can use it if

  1) it is owned by HSM (i.e. owner is 5)
  2) is internal
  3) is either in group 0 or has its group set to the fsid of the
     filesystem that HSM is trying to add a platter to
  4) is formatted
  5) has an empty filesystem on it (which is what it will have after
     formatm -m).

Note that HSM assigns both sides of a platter to the filesystem at the
same time -- one side can't belong to one filesystem, and the other side
to a different one.

According to the listm -f output in .2, there are a number of completely
empty platters -- these are the ones that have free space = 70207.  (That's
not what HSM is looking at -- it's just the amount of free space a double-
density platter with an empty filesystem has.)  There are a few platters
that appear to be *almost* empty.  I can't tell from the listm output
whether these were added to the filesystem by HSM, and just haven't been
used much yet, or whether they were platters that weren't really empty, but
someone added them to the filesystem with the chgrpm command.  I'd have to
see what's on them to tell.  These are platters ff004a/b, ff005a/b, and
ff006a/b.  If you or the customer are concerned about this, the customer
could mountm each of these platter sides, do ls -alR, and unmountm them.
(The unmountm is very important, else HSM will leave the platter mounted,
thinking it's still being used.)

> What happens when the last platter gets full ?

If there are no more platters available for shelving in the jukebox, and
HSM needs more space, it will send a message using the command in the
LOG_MAIL parameter in /usr/efs/efscfg.  By default, this sends e-mail to
root.

> Will a new platter automatically be pulled in and used, or

If there is a platter that matches the above criteria for an "empty"
platter, then HSM will assign it to the filesystem and begin using it.

But it won't change the state or owner of any platter, and it won't
format any platter by itself, or make a new filesystem on a platter side.

> does the customer have to do something

When a new platter is imported into the jukebox, both sides should be
formatted (if it wasn't formatted at the factory), have filesystems made
on them, and names should be assigned.  The platter's group does *not*
need to be set to a filesystem's fsid -- it can be left as 0.  If the
platter is intended to be used by HSM for shelving (instead of, for
instance, doing backup) then HSM should be made the owner of the platter
(by setting the owner = 5).  The platter's state should be set to internal.

-- Pat
228.19Customer SatisfactionNETRIX::&quot;[email protected]&quot;Wil BergmanTue Feb 18 1997 00:0215
Pat, 
	the last mail we received answered the questions I had and the questions the
customer had. 
We now know that the reservevs and JE_INVSTATE are counts we do not have to 
worry about.  It is also clear when a new platter will be mounted.

> What kind of backup are you doing now?

We make use of the NSR(Legato Networker) software to backup the ADvFS 
filesystems.

We thank you for your effort in solving this "problem".

Kind Regards, Wil
[Posted by WWW Notes gateway]
228.20DECWET::TRESSELPat TresselThu Feb 20 1997 00:2256
Wil --

> We now know that the reservevs and JE_INVSTATE are counts we do not have to 
> worry about.

It's good that they're noticing that sort of thing -- I don't want to
discourage them from doing it.  This is just an unusual case -- the platter
that is the qr's first choice for putting new directories on always happens
to be a full one.  It surprised me too.

One thing they can do if they want to check their filesystem occasionally,
especially if the system ever crashes, is to schedule a time when they can
mount the filesystem read-only, and run ssgfsck (with no options except
maybe -p or -v).  This will tell them if there is any mismatch between the
information on magnetic and on optical, due to requests to HSM that might
have been lost in a crash.  (HSM records on optical things like the owner
and mode of files, and what directory they're in.  AdvFS notifies HSM
whenever these file attributes change, and it's these notifications that
might get lost.  HSM doesn't need this information during normal operation,
but it would be needed if the filesystem ever had to be rebuilt from the
optical platters alone, if something went wrong with the magnetic disks.
So it's good to make sure this information is still current, if there's a
chance requests might have been lost.  If ssgfsck does find any mismatches,
they can ask us what to do at that time.)

> We make use of the NSR(Legato Networker) software to backup the ADvFS 
> filesystems.

Good!  I was going to mention a trick used by one of our customers who
also uses NSR.  Because their filesystem is so big, and they don't want
everything moved back onto the magnetic disk every time they do a full
backup, they did *only one* full backup, and from then on, did only
incremental backups.  Because they do incremental backups every night,
the new and changed files were always still on magnetic, so the backup
never caused HSM to copy files back onto magnetic.

There is an NSR option called Saveset Consolidation that allows an older
full backup to be combined with the incrementals that came after it to
make a new saveset that is just like a full backup at a later time.  If
one is doing only incremental backups, then Saveset Consolidation is useful
because NSR wouldn't have to keep as many tapes to have a full set of
files, and it wouldn't have to keep indices of all the incremental tapes
back to the last full backup.  Saveset Consolidation requires a tape
jukebox with at least two drives, because it has to write a new tape
while it's reading from old ones.

Sometimes HSM customers wonder why they need to do backup if they make
sure their files are always copied to optical by running the bulk ager.
In case this customer might wonder about that, it's because having the
files on optical only provides the *current* files -- anything that's
been deleted or changed is *gone*.  With a backup, on the other hand,
one can get back old versions of files, or deleted files.  It's been my
experience that this (getting back old files) is a *much* more common
use of backup than reconstructing whole filesystems.

-- Pat