
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

1215.0. "Oracle and partitioned logs???" by NOVA::SPIRO () Thu Dec 17 1992 15:45

Oracle question:

Does Oracle allow their after-image log (or redo log, or whatever it's called)
to be partitioned for a single database? In other words, if Oracle is being
used in a cluster, can it be set up so that each node in the cluster writes to
its own distinct after-image log, thereby eliminating the end-of-file hotspot?
Obviously this then requires some sort of merging mechanism. 
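
Just to make "merging mechanism" concrete, here's a rough sketch of what I
have in mind (the record layout and names below are made up for illustration,
not any product's actual log format):

    import heapq
    from typing import Iterable, Iterator, NamedTuple

    class LogRecord(NamedTuple):
        seq: int        # hypothetical cluster-wide sequence number per record
        node: str       # which cluster member wrote the record
        payload: bytes  # the after-image itself

    def merge_node_logs(node_logs: Iterable[Iterator[LogRecord]]) -> Iterator[LogRecord]:
        """K-way merge of per-node after-image logs into one recovery stream.

        Each node's log is already in sequence order, so recovery can replay
        the merged stream as if a single shared log had been written.
        """
        return heapq.merge(*node_logs, key=lambda rec: rec.seq)

    # During roll-forward recovery one would do something like:
    #   for rec in merge_node_logs([node_a_log, node_b_log]):
    #       apply_after_image(rec.payload)   # hypothetical redo-apply routine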

Now I understand that they do have some mechanism which lets them roll over 
from one log to another, but that's different: it still only allows
one log to be active at any one time.  My question above implies that
there would be multiple active logs for one database.

Does anyone know if any database product allows this?

Thanks for any info.

-peter
1215.1. "as far as I can tell" by GNROSE::HAGGERTY (Digital Svcs - GIA EIS/SWS, Acton MA.) Mon Dec 28 1992 16:54
    Peter, Oracle7 lets you have a mirrored log file, i.e. a copy
    maintained by the LGWR process to remove the single point of failure
    problem they had in prior versions.
    
    You can also spool the journal to some other device through another
    process (ARCH) after closing the old one and opening a new one.
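
    Just to sketch the spooling idea (purely illustrative; this is not
    Oracle's ARCH code, and the file handling is made up):

    import shutil
    from pathlib import Path

    def archive_closed_log(closed_log, archive_dir):
        """Copy a redo/after-image log that is no longer active to an archive area.

        The writer has already switched to a fresh log file, so this copy can
        run in a separate process without holding up commits on the active log.
        """
        archive_dir = Path(archive_dir)
        archive_dir.mkdir(parents=True, exist_ok=True)
        target = archive_dir / Path(closed_log).name
        shutil.copy2(closed_log, target)   # keep timestamps for later recovery
        return target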
    
    They do write sequentially, but use fast commit, group commit and
    deferred write to lessen the I/O load to the log (their words, not
    mine, but the concepts are similar to KODA's capabilities).  The number
    one performance recommendation in their tuning guide is to place all
    log files on different disks. 
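
    Group commit is the easiest of the three to sketch: commit records that
    queue up while one log write is in flight get flushed by a single later
    write.  A minimal illustration (again made up, not KODA's or Oracle's
    implementation):

    import threading

    class GroupCommitLog:
        """Minimal group-commit sketch: commit records that queue up while a
        log write is in flight are made durable together by the next write."""

        def __init__(self, write_fn):
            self._write_fn = write_fn            # the actual (slow) sequential log I/O
            self._queue_lock = threading.Lock()  # protects the pending batch
            self._flush_lock = threading.Lock()  # only one flush at a time
            self._pending = []                   # (record, done_event) pairs awaiting I/O

        def commit(self, record):
            """Block until this transaction's commit record has been written."""
            done = threading.Event()
            with self._queue_lock:
                self._pending.append((record, done))
            with self._flush_lock:
                if done.is_set():
                    return                       # an earlier flush already covered us
                with self._queue_lock:
                    batch, self._pending = self._pending, []
                self._write_fn(b"".join(rec for rec, _ in batch))  # one I/O per batch
                for _, ev in batch:
                    ev.set()                     # wake every committer in the batch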
    
    But partitioned log files?   Sybase and Informix don't have anything
    like that either, as best as I can tell from their intro doc.
    
1215.2. "What about Parallel Servers?" by NOVA::BERENSON (Database Architecture, Standards, and Strategy) Mon Dec 28 1992 17:32
A more precise version of the question might be:

When using Oracle's Parallel Server capability, does each node have its
own log file or do they all write to a single log file?
1215.3. "still one as best as I can tell" by GNROSE::HAGGERTY (Digital Svcs - GIA EIS/SWS, Acton MA.) Tue Dec 29 1992 17:29
    Hal, even with the Parallel Server, there is still one log file (and
    maybe a mirror).  
    
    This is straight out of their "ORACLE Parallel Server in the Digital
    Environment" brochure.
    
1215.4. "Better late than never" by KILARA::DEANGELIS (Momuntai) Fri Jan 22 1993 12:37
I believe that Kevin has it right - one logical logfile, multiple active 
physical logs with similar info. The following is from a technical summary
of V7...

"Multiplexed Log Files
 
Redundant log files may be maintained on multiple disk devices to
provide additional protection against media failures.  All writes
to log files are done in parallel so that there is no loss of
performance. "
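
To illustrate the "writes in parallel" claim: each log block is issued to
every member of the group at once, so the elapsed time is roughly that of the
slowest device rather than the sum of all of them.  A rough sketch (purely
illustrative, not Oracle code):

    import os
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def write_multiplexed(block, members):
        """Append one log block to every member of a multiplexed log group."""
        def append_to(member):
            with Path(member).open("ab") as f:
                f.write(block)
                f.flush()
                os.fsync(f.fileno())   # the block must be on disk on every copy

        with ThreadPoolExecutor(max_workers=len(members)) as pool:
            # The commit is not acknowledged until every member has the block.
            list(pool.map(append_to, members))

    # e.g. write_multiplexed(log_block, ["disk1/redo_a.log", "disk2/redo_b.log"])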

John.
1215.5. "Partitioned logs required for parallel server !!!" by TPSYS::ABBOTT (Robert Abbott) Wed Mar 31 1993 00:21
The answer is: Oracle V7 parallel server *requires*
that the redo log be partitioned over several files.
At least this is true for Oracle on VMS.

I have not learned anything yet about how recovery 
is performed. Recovery from instance (node) failure
seems straightforward. Recovery from media failure
would be harder. 
 
I have two sources: one is a Digital engineer who has worked with
Oracle V7 in a cluster configuration. The second is a guy
who tends an Oracle parallel server configuration on a
VAXcluster for the Eli Lilly Co.  I have included 
parts of our mail exchange below. Many thanks to
Brett E. Bowman of Eli Lilly & Company for his help.
====================================================
Brett:
>>This is not EXACTLY true.  When you run in parallel mode, each
>>instance uses a different set of log file groups.  Thus, each
>>instance has a different active log file at any given point in time. 
>>This behavior is a change from parallel mode under V6.  The V6
>>behavior was to have all instances share the same active redo log.
>
Rob:
>Yes the multi-node case is what I was driving at. So the question
>is:  Can Oracle split the logging for a *single* database into multiple
>files? (I am talking about data stream splitting as opposed to 
>copying which is done in mirroring. Let's forget about mirroring
>for the moment.)  You seem to say YES above. This can be done
>using multiple instances on multiple nodes. Each instance writes
>to its own log file. 

Brett:

Actually, not only CAN it be done, but it MUST be done this way.  Under
Oracle7, each instance MUST have its own set of redo log files.

I believe that Oracle's terminology is as follows:
        You make groups of redo log files.  Each member of a group is the
                same size, and data written to the group is actually written
                to each member of the group.  (This is mirroring of the redo
                log files.)
        You then put sets of groups into threads.
        Each instance acquires a redo log thread at instance startup time.
                (The thread number can either be specified in the instance-
                specific INIT.ORA file, or it can be taken from a "public
                pool" of threads.)
	So, you have something like the following:

        Instance1-------Thread1---------Group1----------File1
                                              \---------File2
                                        Group2----------File3
                                              \---------File4
        Instance2-------Thread2---------Group3----------File5
                                        Group4----------File6

	File1 and File2 are mirrors.  File3 and File4 are mirrors.  Instance1
	alternates between writing to Group1 and Group2.  Instance2 alternates
	between writing to Group3 and Group4.

I believe that all of this information is correct.  I have migrated one of our
databases from V6 to Oracle7, and this information reflects what I had to do to
set up the parallel server stuff.  (I found the documentation to be rather
lacking in info about the redo logs for the parallel server.  It took some
trial and error to get it all working.)
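
For what it's worth, Brett's structure is easy to restate as data.  The toy
sketch below just mirrors the diagram above, with a log_switch routine
standing in for "alternates between writing to Group1 and Group2" (names
follow the diagram; none of this is Oracle code):

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Group:
        name: str
        members: List[str]   # mirrored copies; each write goes to all of them

    @dataclass
    class RedoThread:
        name: str
        groups: List[Group]
        current: int = 0     # index of the group being written right now

        def log_switch(self) -> Group:
            """Advance to the next group in this thread (a 'log switch')."""
            self.current = (self.current + 1) % len(self.groups)
            return self.groups[self.current]

    # The configuration from the diagram: one redo thread per instance.
    thread1 = RedoThread("Thread1", [Group("Group1", ["File1", "File2"]),
                                     Group("Group2", ["File3", "File4"])])
    thread2 = RedoThread("Thread2", [Group("Group3", ["File5"]),
                                     Group("Group4", ["File6"])])

    instances: Dict[str, RedoThread] = {"Instance1": thread1,
                                        "Instance2": thread2}

    # Each instance alternates between its own groups, and the two instances
    # never share an active log, which is the partitioning asked about in .0.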