Title:                   LSM
Moderator:               SMURF::SHIDERLY
Created:                 Mon Jan 17 1994
Last Modified:           Fri Jun 06 1997
Last Successful Update:  Fri Jun 06 1997
Number of topics:        803
Total number of notes:   2852
800.0  "disaster recovery steps" by COL01::LINNARTZ () Sun Jun 01 1997 15:39
Hi,
we have to describe the disaster recovery procedure for a couple of
SAP servers whose databases are running in an ASE environment. As we're
using ADSM as the backup tool for everything except root (ehm, and swap)
and usr, we only have to describe the steps needed for setting up
root/usr and creating the needed volumes. ADSM will restore the rest.
The current OS version is V3.2G; we'll have to do the same for V4.x
in the future.
root and usr are AdvFS domains on top of two mirrored LSM volumes.
Let me name the disks rz0 and rz1 for simplicity; in real life they're
on different controllers.
I also don't want to get too much into the additional planning details
like descriptions of activities, the type and number of media used,
the location where to put those, etc.
We intend to use vdump to save the FS in question.
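Just to make it concrete, the dumps themselves could be taken roughly
like this (dump level, tape device name and mount points are only
examples on my side):

    # level-0 dumps of root and usr to a no-rewind tape device
    vdump -0 -u -f /dev/nrmt0h /
    vdump -0 -u -f /dev/nrmt0h /usr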
We also make sure that additional documentation is available with the
backups, such as:
- the hardware configuration and console output
- use, configuration and layout of the filesystems
- the existing disklabels
- sys_check output and the installed software, including versions/revisions
We would like to have the new hardware configured the same as the
original, but as we've got the listings mentioned above, as a minimum
we require sufficient storage to hold all the data and a tape drive
of the same family as the one being used for dumping.
I'll now describe the steps I think are needed to get the system up
again. If somebody knows a better way, I'd appreciate hearing about it.
As I don't have a machine to check on, I hope all the commands are
right; I'll verify them once I can get my fingers on a spare machine.
- check console output and select devices.
- check revisions of the devices, if needed upgrade to the required
versions (mainly due to ASE)
- boot the OS CD of the OS version you're installing (same OS mainly due
to ADVFS/SMP diffs)
- select system management option. Create device entries needed (cd
/dev; ./MAKEDEV rz0 rz1 tz0)
- create default disklabel for rz0 (disklabel -rw rz0 rz29)
- create ADVFS domain/fileset (mkfdmn -r -t rz29 /dev/rz0a root_domain,
mkfset root_domain root)
- mount root domain (mount -t advfs root_domain#root /mnt)
- restore FS (vrestore -x -D /mnt)
- Convert disklabel to ADVFS (disklabel -r /dev/rrz0a > /tmp/rz0_label.sav
disklabel -t advfs -r -R /dev/rrz0a /tmp/rz0_label.sav rz29)
- check
/mnt/etc/fstab (root_domain#root / advfs rw 1 1),
/mnt/etc/sysconfigtab, (lsm_rootdev_is_volume = 0)
/mnt/sbin/swapdefault (link to /dev/rz0b)
/mnt/etc/fdmns/root_domain (link to rz0a)
Those checks are needed as the restore would have brought back the
original LSM rootvol values, and I guess the fastest way is to restore
to a normal AdvFS domain and re-encapsulate it afterwards. A rough
sketch of the checks is below.
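Something along these lines, assuming the rz0a/rz0b partitions used in
this example:

    # verify the restored files point at plain devices, not at LSM volumes
    grep root_domain /mnt/etc/fstab        # expect: root_domain#root / advfs rw 1 1
    grep lsm_rootdev_is_volume /mnt/etc/sysconfigtab   # should be 0 at this point
    ls -l /mnt/sbin/swapdefault            # should be a link to /dev/rz0b
    ls -l /mnt/etc/fdmns/root_domain       # should contain a link to /dev/rz0a
    # if the domain link still points at the old rootvol, recreate it:
    cd /mnt/etc/fdmns/root_domain && rm -f * && ln -s /dev/rz0a rz0a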
********* thing I'm uncertain about ***********
- do I have to comment out/remove the lsm entries in /mnt/sbin/inittab?
- do I have to remove things under /etc/vol?
I can't use volXXX commands here, and I would think that for a
clean reboot the changes made above are sufficient, but if it would
ease the encapsulation of root, I would already remove things here.
- if /usr will also be on this disk:
I don't want to restore it right here, as I want to see the LSM mirror
up first. If we fail at the mirroring task, we have to start over
again, and then the restoration of /usr would have been a waste of time.
Run disklabel -e and replace "unused" with "BSD4.1" for partition g.
Check that, if root and swap are contiguous, there are 2 "unused"
partitions available; if they are not contiguous, 3 "unused" partitions
are needed for the encapsulation process. I hope that the BSD4.1 type
will reserve the g partition for later use by /usr (a rough sketch of
the intended label is below).
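What I have in mind for the edited label, with sizes and offsets made
up purely for illustration:

    # disklabel -e rz0 (excerpt); a/b hold root/swap, g is reserved for /usr
    #        size   offset    fstype
      a:   262144        0    AdvFS      # root_domain
      b:   524288   262144    swap
      g:  1048576   786432    BSD4.1     # kept free for the later /usr restore
      h:  2275472  1835008    unused     # left unused for the encapsulation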
******************** end *******************
- boot restored disk into single user
****************** start *******************
- as all the volume configuration info is useless at this point (all
disks are empty labeled at best) I would like to perform just
- hostname <XXXX>
- volinstall (should fail as devices are already there)
- vold -m disable ( enable commands to talk to Mr. kernel)
- voldctl init ( init volboot as the saved info is useless)
- voldg init rootdg ( init the dg needed for root/swap)
(I guess I have to remove the rootdg first; that's why I asked
above whether it's worthwhile to remove things beforehand)
- or would it be better to
- voliod set 2
- vold -m disable -r reset (enable talking to Mr. kernel and reset
its current state)
- voldctl init (init volboot)
- voldctl list (make sure no one is there)
- voldctl add disk rz0
- if the inittab entry was commented out, uncomment it so that vold is
started by default
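Whichever of the two sequences is used, I'd sanity-check the result
before going on (a sketch; I'm not 100% sure of all the LSM commands
from memory):

    voldctl mode       # vold should report itself as enabled
    voldisk list       # rz0 should show up and belong to rootdg
    volprint -ht       # config should be empty apart from rootdg itself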
******************* end ******************
- encapsulate root_domain#root (volencap rz0)
- /sbin/shutdown -r +1 (the system boots twice and performs the file changes)
- add the rz1 disk for mirroring (voldctl add disk rz1)
- ensure all partitions on rz1 are of type "unused" (disklabel -wr rz1 rz29)
- volrootmir rz1
- reboot the system and check that mirroring works and that all references
point to the volumes (fstab, swapdefault, fdmns, sysconfigtab); see the
sketch below
- shut down the system, set the bootdef_dev path and check that both disks
are bootable (have valid config entries stored on disk)
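Roughly what I'd check here (volume and console device names are just
the ones assumed in this example):

    volprint -ht rootvol           # both plexes of rootvol should be ACTIVE
    swapon -s                      # swap should now come from the LSM swap volume
    ls -l /etc/fdmns/root_domain   # link should now point at the rootvol device

and at the SRM console, something along these lines (dka0/dkb0 are
hypothetical console names for rz0/rz1):

    >>> set bootdef_dev dka0,dkb0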
- boot and follow the manual to restore /usr
- run a script to configure the additional volumes needed (a rough sketch
of what such a script might do is below)
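For illustration only; the disk group, disk, volume and domain names are
made up and would really come from the saved configuration listings:

    voldg init datadg rz16 rz17                      # hypothetical data disks
    volassist -g datadg make sapdata01 2048m         # create a data volume
    mkfdmn /dev/vol/datadg/sapdata01 sapdata_domain  # AdvFS domain on the volume
    mkfset sapdata_domain sapdata
    mount -t advfs sapdata_domain#sapdata /sapdata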
Sorry if my questions are obvious, but I didn't find this scenario
covered in Appendix B of the V1.2A manual nor in the V4.0 manuals.
Simply reinstalling the OS would also be a way, but then you would have
to add all patches and all setup data again. In my view this takes far
more time, and it's not what the customer understands by restoring a
backup. I just want to save the time of trying all the various
scenarios, but as already said, if you know a better way I would love
to hear about it.
thanks in advance
Pit