Title: | Oracle |
Notice: | For product status see topics: UNIX 1008, OpenVMS 1009, NT 1010 |
Moderator: | EPS::VANDENHEUVEL |
|
Created: | Fri Aug 10 1990 |
Last Modified: | Fri Jun 06 1997 |
Last Successful Update: | Fri Jun 06 1997 |
Number of topics: | 1574 |
Total number of notes: | 4428 |
1547.0. "Upgrading OS - what does OPS do??" by DYOSW5::WILDER (Does virtual reality get swapped?) Thu Apr 17 1997 12:06
Here is an interesting situation:
We need to do a rolling upgrade of DUNIX in a cluster that is running
OPS. The documentation states that the node on which the upgrade will
happen must have its instance shut down, thereby leaving Oracle up on
the remaining node.
Now, in order to do the upgrade of the OS, I must delete the cluster
member. When I do this, the surviving member must modify the drd
services (we are using preferred service with all the node names listed
in our preference). When ASE modifies a service, it deletes the service
and then adds it back in. When it adds the service back in, it sets all
the ownerships on the drds to root.system.
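If the ownerships do end up having to be put back by hand, the fix-up would amount to something like the following sketch. The /dev/rdrd path and the oracle/dba owner come from the note above; the glob and the mode are assumptions for illustration:

```shell
#!/bin/sh
# Sketch only: restore the ownership Oracle OPS expects on the drd
# device specials after ASE has re-added the service as root.system.
# Run as root on the surviving member; adjust the glob to your drds.
for dev in /dev/rdrd/drd*; do
    chown oracle "$dev"       # owner required by OPS (per the note)
    chgrp dba "$dev"          # group required by OPS
    chmod 600 "$dev"          # mode is an assumption; match your site
done
```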
My question: If Oracle OPS is running on a single surviving node, what
will happen if the drd services disappear, and then re-appear with the
wrong ownerships (not oracle.dba as is required)? Will Oracle have some
heartburn? Is this a workable situation? Or, must I shut down Oracle on
the surviving member while the cluster re-configures itself and we
reset the ownerships by hand? If this is true, it negates our claim of
"never having the database go down while upgrading individual nodes".
Any ideas on this?
Thanks,
/jim
T.R | Title | User | Personal Name | Date | Lines |
1547.1 | | UTRUST::PILMEYER | Questions raise the doubt | Fri Apr 18 1997 04:58 | 23 |
One way around this is to do what I did, though it has some adverse
effects too.
Instead of using the /dev/rdrd/... device specials, I had Oracle use an
NFS-shared directory in which I created the drd device specials (using
the supplied script) and made symbolic links to those. The symbolic
links carry the names of the underlying LSM volumes, which suddenly
makes everything a lot more understandable. I wrote a script that does
this automatically, including setting the device specials to the
correct protection.
When the service is modified, the cluster software never touches my
device specials, which helps in the case you're investigating. Of
course, when you change the drd configuration you have to remember to
re-run the script.
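The approach reads roughly like the sketch below. Everything in it is a placeholder filled in for illustration: the shared directory, the LSM volume names, and the major number would all come from your own configuration, and the real script was the poster's own:

```shell
#!/bin/sh
# Sketch of the approach (names and numbers are placeholders):
# create private drd device specials in an NFS-shared directory and
# symlink them under the names of the underlying LSM volumes.
ORADEV=/oracle/dev          # NFS-shared directory, visible clusterwide
DRD_MAJOR=57                # placeholder; use the current drd major
MINOR=0
for vol in system01 rbs01 data01; do   # underlying LSM volume names
    rm -f "$ORADEV/drd$MINOR" "$ORADEV/$vol"
    mknod "$ORADEV/drd$MINOR" c $DRD_MAJOR $MINOR
    chown oracle "$ORADEV/drd$MINOR"
    chgrp dba "$ORADEV/drd$MINOR"
    chmod 600 "$ORADEV/drd$MINOR"
    ln -s "$ORADEV/drd$MINOR" "$ORADEV/$vol"   # readable alias
    MINOR=`expr $MINOR + 1`
done
```

Since ASE only ever deletes and re-adds entries under /dev/rdrd, the privately created specials and their symlinks survive the reconfiguration untouched.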
The bad news is that on an upgrade the major number for the drds can
change (it did on the 1.0 to 1.4 upgrade), and then you're using the
wrong major number on the systems that have been upgraded (or on the
other ones). In fact, the major numbers can even change on a reboot.
But there are ways around that too.
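One workaround along those lines: re-derive the current major number from a live drd special at boot time and regenerate the private specials with it. The path and the ls output parsing here are assumptions:

```shell
#!/bin/sh
# Sketch: pull the current drd major number out of ls -l output so the
# device-special script can be re-run after a reboot or upgrade.
# /dev/rdrd/drd0 stands in for any existing drd special on the system.
MAJOR=`ls -lL /dev/rdrd/drd0 | awk '{ sub(",", "", $5); print $5 }'`
echo "current drd major number: $MAJOR"
```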
-Han