T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---
254.1 | | BSS::JILSON | WFH in the Chemung River Valley | Wed Feb 26 1997 16:17 | 6 |
| Why not farm out mounting of disks to a batch job or
RUN/DETACH SYS$SYSTEM:LOGINOUT.EXE/INPUT=proc.com? IMHO the only disks
that should be mounted in SYSTARTUP_VMS.COM or a procedure called from it
are those needed in STARTUP.
Jilly
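
A minimal sketch of the two options .1 suggests, for use near the end of SYSTARTUP_VMS.COM (the procedure name DISK_MOUNT.COM and the log file name are hypothetical):

```
$! Defer data-disk mounts out of SYSTARTUP_VMS.COM.
$! Option 1: queue the mount procedure as a batch job:
$ SUBMIT/NOPRINT SYS$MANAGER:DISK_MOUNT.COM
$!
$! Option 2: run it in a detached process instead:
$ RUN SYS$SYSTEM:LOGINOUT.EXE -
      /DETACHED -
      /INPUT=SYS$MANAGER:DISK_MOUNT.COM -
      /OUTPUT=SYS$MANAGER:DISK_MOUNT.LOG -
      /UIC=[SYSTEM]
```

Option 1 requires a batch queue that is already started this early in startup; option 2 does not, which is why .1 offers it as an alternative.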
|
254.2 | See MSCPMOUNT.COM | XDELTA::HOFFMAN | Steve, OpenVMS Engineering | Thu Feb 27 1997 09:38 | 8 |
|
Customize SYS$EXAMPLES:MSCPMOUNT.COM to manage the disk farm.
Consider disabling the rebuild of the system disk (via the SYSGEN
parameter ACP_REBLDSYSD), and the data disks (via MOUNT/NOREBUILD),
and using MSCPMOUNT or another procedure to perform the rebuild
at a later, more appropriate, time...
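
A sketch of the deferred-rebuild approach described above (the device name DSA101: and label USERDISK1 are hypothetical):

```
$! At boot (e.g. in a customized MSCPMOUNT.COM): mount without
$! triggering the bitmap rebuild.
$ MOUNT/SYSTEM/NOREBUILD DSA101: USERDISK1
$!
$! Later, at a quieter time, perform the deferred rebuild:
$ SET VOLUME/REBUILD DSA101:
```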
|
254.3 | | VMSSG::FRIEDRICHS | Ask me about Young Eagles | Thu Feb 27 1997 09:43 | 12 |
| re SPAWN/NOWAIT MOUNTS...
If you are not using the "new" MOUNT (i.e., V7.1, or V6.2 with the
COMPAT_062 or CLUSIO01_062 kits), there are synchronization problems in
MOUNT that will cause some of the MOUNTs to fail.
SPAWN/NOWAIT was heavily used in batch procedures during the testing of
the new MOUNT and no such problems were seen.
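
The SPAWN/NOWAIT pattern under discussion looks roughly like this (device and label names are hypothetical); on the "new" MOUNT the spawned mounts run in parallel, but anything that depends on the disks must still wait for them:

```
$! Fire off the mounts in parallel subprocesses:
$ SPAWN/NOWAIT/NOLOG MOUNT/SYSTEM DUA1: DATA1
$ SPAWN/NOWAIT/NOLOG MOUNT/SYSTEM DUA2: DATA2
$!
$! Before any step that needs a disk, poll until it is mounted:
$WAIT_LOOP:
$ IF .NOT. F$GETDVI("DUA1:","MNT") THEN WAIT 00:00:05
$ IF .NOT. F$GETDVI("DUA1:","MNT") THEN GOTO WAIT_LOOP
```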
Enjoy!
jeff
|
254.4 | | MILORD::BISHOP | The punishment that brought us peace was upon Him | Thu Feb 27 1997 09:44 | 10 |
| .0 says he already knows how to delay the pain of the rebuilds, so
that's not his problem. I think the idea of spawned processes (or batch
processes or whatever) to do the mounts is as good as ever.
.1: I disagree. If you have batch jobs waiting to start or restart
(for whatever reason) when the system reboots, all the disks must be
mounted before you can start the queues. So all the disks in general
use must be mounted before startup can complete.
- Richard.
|
254.5 | mounting disks.... | STAR::CROLL | | Thu Feb 27 1997 09:53 | 15 |
| You're in luck! Andy Goldstein, a.k.a. Mr. Mount, walked past my office just as
I was reading the base note, so I asked him about it. Here are his suggestions:
1) upgrade to V7.1, or get the Redhawk kit (if you're running V6.2). There have
been performance improvements in mount that make it more efficient, thereby
reducing the time it takes to mount a single device. (The new version also
eliminates the synchronization issues mentioned in a previous reply.)
2) investigate SYS$EXAMPLES:DISKMOUNT.C. The big problem with the SPAWN/NOWAIT
or the batch job is that you pay the price of image-activating VMOUNT.EXE each
time. The VIOC (virtual I/O cache) helps, but this is still a significant amount
of time per mount. DISKMOUNT.C is a program that calls the $MOUNT system service, enabling
you to mount all your drives with a single image activation.
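
Getting the example going is roughly this (a sketch; DISKMOUNT's input format is defined by the source itself, so check the comments in DISKMOUNT.C before relying on any particular interface):

```
$! Build the $MOUNT-based example once:
$ CC SYS$EXAMPLES:DISKMOUNT.C
$ LINK DISKMOUNT
$!
$! One image activation then mounts the whole farm:
$ RUN DISKMOUNT
```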
John
|
254.6 | Details On Batch Startup... | XDELTA::HOFFMAN | Steve, OpenVMS Engineering | Thu Feb 27 1997 10:39 | 12 |
| : .1: I disagree. If you have batch jobs waiting to start or restart
: (for whatever reason) when the system reboots, all the disks must be
: mounted before you can start the queues. So all the disks in general
: use must be mounted before startup can complete.
OpenVMS mounts the system disk during the bootstrap, and I mount the
pagefile disk and the disk with the `relocatable' system files -- the
sysuaf, queue database, etc. -- in SYLOGICALS, and -- once I've got
these few disks online -- batch jobs are not a problem... And I have
a `maintenance' queue targeted for system tasks... (I don't autostart
the non-system queues...)
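
A sketch of the SYLOGICALS.COM scheme .6 describes (device names, labels, and directories are hypothetical; SYSUAF and QMAN$MASTER are the standard logical names for the relocated files):

```
$! In SYS$MANAGER:SYLOGICALS.COM: mount only the disks startup needs.
$ MOUNT/SYSTEM/NOREBUILD DKA100: PAGEDISK
$ MOUNT/SYSTEM/NOREBUILD DKA200: SYSFILES
$!
$! Point the relocatable system files at the newly mounted disk:
$ DEFINE/SYSTEM/EXECUTIVE_MODE SYSUAF      DKA200:[SYSFILES]SYSUAF.DAT
$ DEFINE/SYSTEM/EXECUTIVE_MODE QMAN$MASTER DKA200:[SYSFILES]
```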
|
254.7 | Try Diskmount... | STAR::JFRAZIER | What color is a chameleon on a mirror? | Thu Feb 27 1997 12:51 | 10 |
| I just now tried DISKMOUNT (from the VAX SYS$EXAMPLES: directory),
and I mounted 41 disks, already mounted by other cluster members,
in 11 *seconds*.
ymmv...
James :-)
|
254.8 | | VMSSG::FRIEDRICHS | Ask me about Young Eagles | Thu Feb 27 1997 13:45 | 5 |
| And James was running the new MOUNT...
Cheers,
jeff
|
254.9 | | AUSS::GARSON | DECcharity Program Office | Thu Feb 27 1997 17:29 | 19 |
| re .4,.6
Put another way: if you defer mounting data disks to a batch job that
executes after system startup completes, then you will also have to
defer starting some batch queues (whether automatically or manually
started) to that same batch job, or to another batch job that is
synchronised with it.
To make this work you need a clean division of batch queues, i.e.,
queues with specific purposes, and you need to know which queues need
which disks. For a large data centre (and it isn't a problem when you
only have to mount a few disks) this level of control and knowledge
would be expected.
If you are really keen, you could (with some effort) use
/CHARACTERISTICS to ensure that the queue system itself doesn't allow
jobs to execute until the needed disks are mounted.
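
A sketch of that /CHARACTERISTICS idea (the characteristic name DISKS_UP, its number, and the queue name USER_BATCH are hypothetical); a batch job runs only on a queue that has every characteristic the job was submitted with:

```
$! Once, when setting up the queue system:
$ DEFINE/CHARACTERISTIC DISKS_UP 100
$!
$! At boot, start the queue WITHOUT the characteristic; jobs submitted
$! with /CHARACTERISTICS=DISKS_UP stay pending:
$ START/QUEUE USER_BATCH
$!
$! At the end of the mount procedure, release them:
$ SET QUEUE/CHARACTERISTICS=(DISKS_UP) USER_BATCH
```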
Perhaps DECscheduler has functionality to handle the situation. Do we
still own that?
|