T.R | Title | User | Personal Name | Date | Lines |
9005.1 | | BIGUN::nessus.cao.dec.com::Mayne | Churchill's black dog | Sun Mar 02 1997 21:25 | 6 |
| None of them will correctly back up an open, working database.
Depending on version/OS/utility, they may or may not correctly restore files as
sparse/non-sparse.
PJDM
|
9005.2 | What is the common practice? | MANM01::JOELJOSOL | | Sun Mar 02 1997 22:09 | 9 |
| Re .1
My telco customer says that he lets Oracle back the database up using
archiving, and then does cpio/tar/vdump since he is on a lower version
(pre-V7.3). But somehow they run into problems backing up the archive.
Is this true elsewhere? What is the common practice at other Digital
sites running Oracle/Unix pre-V7.3?
/joeljosol
|
9005.3 | Backing up database files | DECWET::MARTIN | | Mon Mar 03 1997 14:12 | 46 |
| Here's a quick summary of database backup limitations. I'm typing this in
mainly because it's easier than trying to find all the relevant notes (most of
which exist in the ADVFS_SUPPORT notes conference).
The first recommendation is to always use the backup utility supplied with the
database (Oracle, Sybase, etc.), regardless of whether the database is on AdvFS,
UFS, or raw disk. That's because that utility is aware of what the database is
doing, and can make the backup consistent.
If for some reason that option is not possible, then whether you use cpio,
dd, tar, vdump, dump, NetWorker, or any other "backup" utility that is not
part of the database you're using, the steps to follow are the same:
1) Shut down the database. Simply "quiescing" the database (at least in
Sybase, I'm not sure about Oracle) doesn't necessarily write all the db-cached
data out to disk. You must make sure that all of the db's data is actually
written out to disk, and the database is not accepting any changes.
2) Run the backup.
3) Start up the database.
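Those three steps can be sketched as a script. This is just an illustration,
not anything database-specific: the three commands are placeholders that you
would replace with your database's real shutdown/backup/startup commands.

```python
import subprocess

def cold_backup(shutdown_cmd, backup_cmd, startup_cmd):
    """Run a consistent 'cold' backup: stop the database, dump, restart.

    Each argument is a command list, e.g. ["vdump", "-0", "/db"].
    The command names passed in are up to you; nothing here is
    specific to Oracle or Sybase.
    """
    # 1) Shut down, so all db-cached data is actually on disk.
    subprocess.run(shutdown_cmd, check=True)
    try:
        # 2) Run the backup against the quiet, consistent files.
        subprocess.run(backup_cmd, check=True)
    finally:
        # 3) Restart the database, even if the backup step failed.
        subprocess.run(startup_cmd, check=True)
```

The `try`/`finally` matters: a failed backup should still leave you with a
running database.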
Yes, this means that your database will not be available for (possibly) several
hours while the backup is running. However, this is the only way to get a
consistent backup that can restore properly. There are only two ways (that I
know of) to run a daily backup while keeping the database available.
1) Use the database's built-in backup utility.
2) If on AdvFS, use "clonefset" to clone the database, which takes a few
seconds at most, then you can restart your database while backing up the cloned
fileset to tape.
The only other "gotcha" I know of deals with sparse files. Some utilities
handle them properly on backup and restore, while others don't. I don't
recall all the details offhand, and I know they vary by version. (I know
vdump/vrestore changed considerably between 3.2x and 4.0.) If you have a
specific config you want to know about, I can try to look it up. Or you can
check the man page(s) and release notes, which are usually pretty good.
One more comment: Every site *should* test their backup facilities by restoring
one of their backups and trying to use it. They should do this test *long*
before the original starts to show problems. This is the only way to know if
backups are being done properly.
--Ken
|
9005.4 | Also beware of size limitations w/utilities | DECWET::DADDAMIO | Design Twice, Code Once | Mon Mar 03 1997 16:02 | 6 |
For large databases there may also be limits on the amount of data a
utility will back up. For tar, I believe it's 8GB. I don't know about
cpio. vdump will handle very large files (hundreds of GB).
With that said, I will second Ken's recommendation of using the
database backup utility if it's at all possible.
|
9005.5 | background info on tar limits | DECWET::KEGEL | Andy Kegel, DECwest | Tue Mar 04 1997 15:58 | 28 |
| Just in case you wanted to know how Jan got her 8 Gby number --
According to the man page for tar (in Section 4: man 4 tar ), tar
stores the file size in a 12-character field. This is an ASCII field
that contains an octal number, but one character is consumed for the
trailing null (or is it a space? anyway, one character is consumed),
leaving one with a theoretical limit of 077777777777 (that should be a
string of eleven sevens), or approximately 1x8^11 or about 2^33, or
about 8 Gby.
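That arithmetic checks out; a quick sketch (Python used here purely as a
calculator) to verify the limit:

```python
# tar's header stores the size as an 11-digit octal number in a
# 12-character field (one character goes to the terminator).
max_size = 0o77777777777          # eleven octal sevens

# 8^11 - 1 is the same number as 2^33 - 1: one byte short of 8 GB.
assert max_size == 8 ** 11 - 1 == 2 ** 33 - 1
print(max_size)                   # 8589934591
```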
The cpio command has a vaguely similar limit, I just can't find the
reference materials necessary to figure it out.
Since tar (and cpio) uses a read/write interface, it is unaware of
sparse files; if you write a sparse file to tape, the holes come back
as real zero bytes and consume tape space. (Remember, the only way to
create a holey or sparse file is to use lseek; write operations alone
can never create holes.) To keep performance up, commands like pax,
tar, and cpio never examine the data, so they can never determine where
holes could or should be.
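The effect is easy to demonstrate. A small sketch in Python, standing in
for any read/write-style copier (the filename is made up, and whether the
original actually occupies fewer disk blocks depends on the filesystem):

```python
import os
import tempfile

d = tempfile.mkdtemp()
holey = os.path.join(d, "holey")

# The only way to make a hole: seek (lseek) past it, then write.
with open(holey, "wb") as f:
    f.seek(1024 * 1024)           # seek 1 MB past the start
    f.write(b"end")               # 3 real bytes after the hole

print(os.path.getsize(holey))     # apparent size: 1048579 bytes

# A plain read/write copy -- what tar and cpio effectively do --
# sees the hole as ordinary zero bytes and copies them all.
with open(holey, "rb") as src, open(holey + ".copy", "wb") as dst:
    dst.write(src.read())
print(os.path.getsize(holey + ".copy"))   # 1048579: same size, hole filled
```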
The dump command reads the raw device - it mucks around in the on-disk
structures, so it knows where the holes are and can reproduce them.
I don't know enough to say anything about vdump.
More than you ever wanted to know...
-andy
|
9005.6 | | VAXCPU::michaud | Jeff Michaud - ObjectBroker | Tue Mar 04 1997 17:56 | 8 |
| > ... stores the file size in a 12-character field. This is an ASCII field
> that contains an octal number, but one character is consumed for the
> trailing null (or is it a space? anyway, one character is consumed),
These days it's a space. In older days (such as ULTRIX) it used
to be a nul.
Probably changed for POSIX conformance .... (?)
|