
Conference cookie::sls

Title:Storage Library System
Moderator:COOKIE::REUTER
Created:Sun Oct 13 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2270
Total number of notes:7850

2215.0. "Mixed architecture cluster installation techniques" by SANITY::LEMONS (And we thank you for your support.) Mon Mar 10 1997 12:27

    Hi
    
    We need to run SLS in a cluster that has three system disks, each
    running a different operating system version.  Rather than have three
    copies of TAPESTART.COM, we'd like to have just one copy, and have all
    cluster systems look at it.  What is the best way to make this work? 
    We're leaning toward creating a cluster-wide logical name for
    TAPESTART.COM, and pointing this to a single copy that all share.
    
    Late breaking news:  we just discovered that installing SLS T2.9 onto a
    VAX system, and then following that installation with an Alpha system
    installation to the same SLS default area produced this problem:
    
    %VMSINSTAL-I-MOVEFILES, Files will now be moved to their target
    directories...
    %SLS-I-STARTUP, Starting SLS
    %DCL-W-ACTIMAGE, error activating image SLS$ROOT:[SYSTEM]SLS$SETUSR
    -CLI-E-IMGNAME, image file DSA1:[SLS$FILES.][SYSTEM]SLS$SETUSR.EXE;3
    -IMGACT-F-NOTVAXIMG, image is not an OpenVMS VAX image
    %DCL-W-ACTIMAGE, error activating image SLS$ROOT:[SYSTEM]SLS$SETUSR
    -CLI-E-IMGNAME, image file DSA1:[SLS$FILES.][SYSTEM]SLS$SETUSR.EXE;3
    -IMGACT-F-NOTVAXIMG, image is not an OpenVMS VAX image
    %VMSINSTAL-F-UNEXPECTED, Installation terminated due to unexpected
    event.
    
            VMSINSTAL procedure done at 14:23
    
    So, we want to be sure the media and file databases are shared, but
    the executable code is not shared.  What's the best way to make this
    happen?
    
    Thanks!
    tl
2215.1SANITY::LEMONSAnd we thank you for your support.Wed Mar 12 1997 10:465
    Please let me know your thoughts on this.  We are delaying our SLS T2.9
    testing, waiting for a response.
    
    Thanks
    tl
2215.2SANITY::LEMONSAnd we thank you for your support.Mon Mar 17 1997 07:334
    Is the lack of response a tacit statement that SLS can't be installed
    on a mixed architecture cluster?
    
    tl
2215.3CX3PST::WSC217::SWANKDavidWed Mar 19 1997 11:0452
>    We need to run SLS in a cluster that has three system disks, each
>    running a different operating system version.  Rather than have three
>    copies of TAPESTART.COM, we'd like to have just one copy, and have all
>    cluster systems look at it.  What is the best way to make this work?

Install SLS on each cluster member that has its own system disk.  If you're
going to use the same disk for SLS for both the VAX and AXP installations,
then when asked to "Enter the name of the directory the SLS software will use"
enter SLS$FILES_VAX for the VAX systems and SLS$FILES_AXP for the Alpha
systems.  When asked "Where to place..." the summary, temp, history,
maintenance log and backup log files, replace the default responses' leading
"SLS$ROOT:[" with (using $1$DUA1 as the SLS disk) "$1$DUA1:[SLS$FILES_VAX."
for the VAX system(s) and "$1$DUA1:[SLS$FILES_AXP." for the Alpha system(s).
For example, the summary files would be
"$1$DUA1:[SLS$FILES_VAX.SYSBAK.SUMMARY_FILES]" on a VAX.  If the cluster is
using a common authorization file, then after all SLS installs are complete,
modify the SLS account's device to SLS$ROOT: and directory to [000000].
This same modification would have to be repeated as a post-upgrade task
after any future SLS upgrade on a cluster member.
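
The SLS account change above can be sketched with AUTHORIZE; this is only an
illustration of the device/directory modification David describes, not output
from an actual install:

    $ SET DEFAULT SYS$SYSTEM
    $ RUN AUTHORIZE
    UAF> MODIFY SLS /DEVICE=SLS$ROOT: /DIRECTORY=[000000]
    UAF> EXIT

Because SLS$ROOT is a logical name, each member resolves it to its own
architecture-specific root even though the SYSUAF record is shared.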

>    We're leaning toward creating a cluster-wide logical name for
>    TAPESTART.COM, and pointing this to a single copy that all share.

A logical name for TAPESTART.COM would not work because of how it is
referenced in SLS's startup.  However, you can specify an alternate
TAPESTART.COM in SYS$STARTUP:SLS$STARTUP.COM by adding the path to the
alternate TAPESTART as an argument on the last line of the procedure, as the
following example shows:

    $ @SLS$ROOT:[SYSTEM]REBOOT.COM $1$DUA0:[SYS0.SYSCOMMON.SYSMGR]TAPESTART.COM

>    So, we want to sure the media and file database is shared, but the
>    executable code is not shared.  What's the best way to make this
>    happen?

To share the SLS volume/magazine/slot databases between multiple cluster
members acting as servers (i.e. TAPESTART's PRI = cluster alias and DB_NODES =
server node names), edit TAPESTART.COM and change PRIMAST to the path to the
SLS disk and directory that will hold them.  The following example is for the
Alpha on a common SLS disk (per the above example):

	$ PRIMAST := $1$DUA1:[SLS$FILES_AXP.PRIMAST]

If there is only one server, this is not necessary; the one server's
SLS$ROOT:[PRIMAST] will do.

To share the file history databases, modify TAPESTART's HISDIR_<n>
assignments, replacing the leading "SLS$ROOT:[" with the disk and directory
of the system that you want to hold them, e.g. "$1$DUA1:[SLS$FILES_VAX.".
For example, for the VAX in the above example you would use
$1$DUA1:[SLS$FILES_VAX.HIST] for the GENERIC history set and
$1$DUA1:[SLS$FILES_VAX.HIST.GENERIC_RM] for the GENERIC_RMU_HISTORY
history set.
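
Put together, the TAPESTART.COM edits might look like this; the HISDIR_<n>
numbering and the history-set directories are illustrative, following the
$1$DUA1:[SLS$FILES_VAX...] layout assumed above:

	$ PRIMAST := $1$DUA1:[SLS$FILES_AXP.PRIMAST]
	$ HISDIR_1 := $1$DUA1:[SLS$FILES_VAX.HIST]
	$ HISDIR_2 := $1$DUA1:[SLS$FILES_VAX.HIST.GENERIC_RM]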

Good luck, David
2215.4SANITY::LEMONSAnd we thank you for your support.Wed Mar 19 1997 19:425
    Hi David
    
    Excellent instructions.  Thanks for taking the time to write this!
    
    tl
2215.5more info..COOKIE::MCCLELLANDMarty, SLS/MDMS EngineeringMon Apr 07 1997 12:2817
fyi,

In addition to David's description in .3, there is some help in the 
MDMS Guide to Operations (par. 4.1.2).  Please reply if you see any
shortcomings in that paragraph (or inconsistencies with David's
description).  

There is one difficulty we are aware of: the SLS$PARAMS logical
isn't established through TAPESTART.COM.  So you may have to redefine
this logical after SLS startup completes so that both the VAXes
and Alphas are pointing to the same directory.  If this isn't done,
updates to VALIDATE.DAT (where the pool authorization data is kept)
and ASNUSRBAK (user history update params) will be out of sync between
nodes.  
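
A sketch of such a redefinition, to be run on each member after SLS startup;
the target directory here is an assumption, not the product default:

    $ DEFINE/SYSTEM/EXEC SLS$PARAMS $1$DUA1:[SLS$FILES_VAX.PARAMS]
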

Marty