Oracle Backup for UNIX Administrators
Bill Mansfield
Database backup and recovery requires both database administration
and systems administration skills. Sometimes backups are neglected,
are not done in a way that allows recovery, take too long, or become
too disruptive. This article is intended to provide systems administrators
with enough background to find out whether they're getting
good backups for their databases, and if not, provide a starting
point for getting them in the future.
The Korn shell scripts discussed here are typically executed as
the Oracle user. They need to have the authority to read the various
tables, and must have access to read the database files. The database
should be up when you start. Oracle version 8.x is assumed. You'll
need to add the usual code to set ORACLE_HOME and ORACLE_SID, do
logging, check return codes, maintain multiple versions, protect
passwords, etc.
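As a starting point, each script can begin with a header along
these lines (a minimal sketch; the SID, paths, and log file name
are example values to adapt to your environment):
#!/bin/ksh
# minimal environment header assumed by the scripts that follow
export ORACLE_SID=orc                            # instance to back up (example)
export ORACLE_HOME=/u01/app/oracle/product/8.0   # example path
export PATH=$ORACLE_HOME/bin:$PATH
LOG=/var/log/oracle_backup.log                   # example log location
exec >> $LOG 2>&1                                # append all output to the log
echo "Backup started: $(date)"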
Cold (Offline) Backups
In many ways, a cold backup is the simplest solution. Bring the
database down, copy the database and control files, and bring the
database back up. Then use your regular backup software to send
the files to tape for safekeeping. You don't even need archive
log files. But, first the script must ask the database what the
necessary files are:
BACKUPDIR=<backup destination> # directory where the saved files go
# get the files to back up, and shut down the database
svrmgrl <<! > /dev/null
connect internal
set termout off
spool /tmp/ctlfiles;
select name from v\$controlfile;
spool off;
spool /tmp/dbfiles;
select name from v\$datafile;
spool off;
shutdown immediate
exit;
!
# Copy the data files. The spooled input file has two header lines
# and one trailer line to delete
sed -e "1,2d" -e "\$d" /tmp/dbfiles | while read FILEPATH
do
cp $FILEPATH $BACKUPDIR/$(basename $FILEPATH)
done
# copy the control files. Same file format as above
sed -e "1,2d" -e "\$d" /tmp/ctlfiles | while read FILEPATH
do
cp $FILEPATH $BACKUPDIR/$(basename $FILEPATH)
done
# bring the database back up
svrmgrl <<! > /dev/null
connect internal
startup;
exit;
!
rm /tmp/ctlfiles /tmp/dbfiles
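Restoring from a cold backup is essentially the reverse: with the
database down, copy each saved file back to its original location
and start the instance. The sketch below assumes the backup script
was extended to keep cleaned copies of the two file lists (run
through the same sed filter) in $BACKUPDIR as dbfiles.list and
ctlfiles.list:
# restore sketch: copy the files back, then start the database
BACKUPDIR=<backup destination>
cat $BACKUPDIR/dbfiles.list $BACKUPDIR/ctlfiles.list | while read FILEPATH
do
cp $BACKUPDIR/$(basename $FILEPATH) $FILEPATH
done
svrmgrl <<! > /dev/null
connect internal
startup;
exit;
!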
Hot (Online) Backups
Hot backups are pretty much like cold backups except that the database
is never brought down. Each tablespace is put in backup mode while
you back up its datafile. You must run the database in archivelog
mode and take pains to back up the archive log files, because the
datafiles are not all copied at the same point in time. During a
recovery, Oracle replays transactions from the archived logs to
bring everything back into sync:
BACKUPDIR=<backup destination> # directory where the saved files go
# get the files to back up
svrmgrl <<! > /dev/null
connect internal
set termout off
spool /tmp/dbfiles;
select file_name, tablespace_name from sys.dba_data_files;
spool off;
exit;
!
sed -e "1,2d" -e "\$d" /tmp/dbfiles | while read FILEPATH TABLESPACE
do
# put tablespace in backup mode
svrmgrl <<! > /dev/null
connect internal
alter tablespace $TABLESPACE begin backup;
exit;
!
# copy the file
cp $FILEPATH $BACKUPDIR/$(basename $FILEPATH)
# take tablespace out of backup mode
svrmgrl <<! > /dev/null
connect internal
alter tablespace $TABLESPACE end backup;
exit;
!
done
# back up the control file
svrmgrl <<! > /dev/null
connect internal
alter database backup controlfile to '$BACKUPDIR/control01.ctl';
exit;
!
rm /tmp/dbfiles
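The hot backup above is not restorable without the archive logs
generated while the datafiles were being copied, so the script
should finish by forcing a log switch and copying the archived
logs. A minimal sketch, assuming the archive destination is the
directory /u02/arch (check your log_archive_dest setting):
ARCHDEST=/u02/arch   # assumed archive log destination
# force the current online redo log to be archived
svrmgrl <<! > /dev/null
connect internal
alter system archive log current;
exit;
!
# copy the archived logs to the backup area
cp $ARCHDEST/* $BACKUPDIR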
Export (Logical) Backups
Hot and cold backups are great if you suffer physical damage to your
system. Unfortunately, if you just lose or corrupt a table, it is
very hard to restore from a physical backup because there is no easy
way to extract one table. For that, you need an export. For a full
export, you just need to decide whether you want to kick out your
users to make sure that you get a consistent export. If users are
making changes, each table is consistent, but different tables will
not be consistent with each other:
EXPORTDIR=<export destination> # directory where the saved files go
# restart the database in restricted mode (keep ordinary users out)
svrmgrl <<! > /dev/null
connect internal
shutdown immediate;
startup restrict open;
exit;
!
# do the export
exp username/password file=$EXPORTDIR/fullexport.dat full=yes
# let the users back in
svrmgrl <<! > /dev/null
connect internal
shutdown immediate;
startup;
exit;
!
One issue with export is that the file parameter must refer to an
actual file. You cannot directly compress the file or send it to tape.
Often, the export file will not fit in the available space, or is
larger than the operating system's file size limit allows. In that
case, use a named pipe to intercept the data:
# create the named pipe if it does not exist
[ ! -p /tmp/export_pipe ] && mknod /tmp/export_pipe p
# start the data receiver; assume rmt0 is ready to go
compress -c < /tmp/export_pipe > /dev/rmt0 &
# start the export
exp username/password file=/tmp/export_pipe full=yes
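Restoring from such a tape reverses the flow through the pipe; a
sketch using the Oracle import utility (imp), again assuming the
tape is on /dev/rmt0:
# create the pipe if needed, start the decompressing reader, import
[ ! -p /tmp/export_pipe ] && mknod /tmp/export_pipe p
uncompress -c < /dev/rmt0 > /tmp/export_pipe &
imp username/password file=/tmp/export_pipe full=yes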
Recovery Manager
You would think that a big company like Oracle would have provided
a way to back up their flagship product, right? And, they have --
Recovery Manager (RMAN) is their answer. It has some important advantages:
- Support -- Vendor-written products tend to be enhanced
and maintained in new product versions.
- Management -- RMAN uses a catalog to keep track of backup
versions.
- Ease of use -- RMAN scripts are pretty simple to manage,
and you could turn the whole thing over to the DBAs. Better still,
RMAN makes restores easier. If you tell it to restore a tablespace,
it will automatically find and restore all the underlying datafiles.
- Parallel streams -- RMAN can be configured to back up more
than one object at once. You can do this with scripts, of course,
but RMAN makes it easy.
- Block level backup -- You can incrementally back up just
the changed blocks since the last full or incremental backup.
- Reporting -- RMAN has basic reporting capabilities.
There are also some disadvantages:
- The catalog -- RMAN uses a backup catalog, which is implemented
in Oracle database tables. This could be in the instance being
backed up, but that defeats the purpose of the backup. It is best
to have a small, dedicated catalog database that you can back
up with a cold backup script (see above).
- Skill requirement -- You really need to have a DBA involved.
- RMAN scripts -- There is a whole new scripting language
that somebody has to learn, program, and test.
- No export capability.
Using RMAN
The first step is to create the recovery catalog, which RMAN uses
to store the names and versions of saved database files. This is
mildly complicated because a new instance must be created, which
can then itself be backed up with a cold backup. Ideally, your DBA
should create it from the Oracle documentation. Assume the DBA has done this
in a database that you can access through the SQL*Net alias "catalog",
user "rman_cat", password "carol". The DBA has
registered your production instance ("orc") into the catalog
and created a user ("bkup") with password ("amanda")
in instance "orc" that has the authority to perform backups.
You can then execute RMAN scripts by typing this command:
rman80 target=\"bkup/amanda@orc\" rcvcat=\"rman_cat/carol@catalog\" \
cmdfile=file
A full backup script must allocate a channel that identifies a disk
or tape, back up the database, force the current online redo log to
be archived using an embedded SQL command, and back up the archive
logs (control file backups are taken automatically):
run {
allocate channel ch type disk;
backup full filesperset 3 format 'full_%d%p_%s' (database);
sql 'alter system archive log current';
backup filesperset 3 format 'log_%d%p_%s' (archivelog all);
}
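The block-level incremental backups mentioned earlier use the same
structure. A sketch of a level 1 incremental run, taken some time
after an initial level 0 backup (the format string is an example
only); archive logs should still be backed up as in the full run:
run {
allocate channel ch type disk;
backup incremental level = 1 filesperset 3 format 'inc_%d%p_%s' (database);
}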
RMAN doesn't automatically delete old backups -- you must
script RMAN delete commands. After you have used the RMAN "report
obsolete" command to identify an obsolete backup "full_orc1_1",
you can delete it with:
run {
allocate channel for delete type disk;
change backuppiece 'full_orc1_1' delete;
}
You could just remove the file from the directory on the disk, of
course, but that would make the RMAN catalog out of date.
RMAN has a wide variety of other capabilities to exploit. You
can back up individual tablespaces, datafiles, control files, etc.
Look at the Oracle RMAN documentation or the books at the end of
this article for further information.
BMC SQL BackTrack
Before RMAN, there was SQL BackTrack. Originally developed for
Sybase, this tool makes Oracle backup and recovery simple enough
that strong DBA talent is not required. In particular, BackTrack
can analyze a damaged database, suggest what is needed to recover
it, and then perform the entire operation automatically. Unlike
RMAN, the backup catalog is kept as a normal file that can easily
be backed up. It manages physical (hot or cold), block level, and
export backups. It implements policies for keeping backup versions
(automating deletion of old versions, a sore spot with RMAN), and
generally makes Oracle backups completely painless.
SQL BackTrack implements profiles, which represent a particular
backup operation (offline full, online incremental, export, etc.).
A profile is created using a curses-like menu interface that gathers
all the information required to do what you want. You then have
the option of dry-running the profile (no backup taken), executing
the profile interactively, or saving the profile invocation as a
script. You would typically do the latter once you have everything
working, and execute the script from cron or your favorite scheduler.
Unfortunately, whereas the preceding methods are free (other than
your labor), SQL BackTrack must be purchased separately from BMC.
Integration with Tape
No backup is complete until the data is on tape in the vault.
You should plan to integrate the Oracle backup with your current
backup system, assuming it can handle the volume of data. All of
the major enterprise backup software packages, including IBM Tivoli
Storage Manager, Veritas NetBackup, and Legato Networker, can provide
media management services for Oracle backups.
If you are writing your own shell scripts, you are pretty much
on your own. You can either back up the data to OS files and then
back up the files, or you can explore sending the files directly
to tape via utility interfaces or media manager APIs.
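For example, the staging directory written by the cold or hot
backup scripts can be sent to a locally attached tape with standard
utilities. A minimal sketch, assuming a tape drive at /dev/rmt0:
# send the staged backup files to tape, then verify readability
cd $BACKUPDIR
tar cvf /dev/rmt0 .
tar tvf /dev/rmt0 > /dev/null || echo "tape verify failed"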
RMAN is designed to integrate with external media managers. After
you've got the plumbing set up (you will typically need to
purchase an additional agent from your backup software vendor),
you merely allocate the channel differently:
allocate channel t1 type 'sbt_tape' parms '...';
allocate channel t2 type 'sbt_tape' parms '...';
This allocates two channels, which RMAN will exploit to send multiple
parallel streams to the media manager. The parms string passes
vendor-specific settings to the media manager. Running multiple
channels directly to tape can
cut your backup time dramatically.
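Putting it together, a tape-based version of the earlier full
backup might look like the following sketch (the parms strings are
placeholders for your media manager's settings), saved to a command
file and run with rman80 as shown above:
run {
allocate channel t1 type 'sbt_tape' parms '...';
allocate channel t2 type 'sbt_tape' parms '...';
backup full filesperset 3 format 'full_%d%p_%s' (database);
sql 'alter system archive log current';
backup filesperset 3 format 'log_%d%p_%s' (archivelog all);
}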
BMC SQL BackTrack integrates with external media managers through
Open Backup Stream Interface (OBSI) modules. Multiple channels are
specified for parallel backups within the definition of the profile
using backup pools and pool groups. Check with BMC to make sure
there is an OBSI module that fits your combination of platform,
OS version, database version, and backup software.
Instant Backups
If your database is sufficiently big, busy, or highly available,
you may find that you can't afford to run the backup on your
production platform at all. Even an online (hot) backup imposes
CPU and I/O load on the system. Copying several terabytes of data
to local tape (or worse, across your LAN to your central tape backup
server) is often not feasible.
The modern trend is to place very large databases in external
storage servers from vendors such as IBM, EMC, or HDS. These boxes
can effectively house many terabytes of highly available storage
in a single cabinet, and provide some management services. When
attached via Fibre Channel Storage Area Networks (SANs), very high
data rates are possible. In the not too distant future, it should
be possible to copy database data directly from disk to tape across
the SAN, but this isn't generally practical yet.
All the major storage servers implement some kind of instant copy
facility where a precise point-in-time image can be created. This
can be used to back up even the largest databases with almost no
production system impact. Even cold backups can be created with
minimal downtime. The general procedure, sketched in shell after
this list, is:
- Prepare the volumes as necessary. For example, EMC TimeFinder
requires synchronizing Business Continuity Volumes (BCV) in advance.
- Quiesce the database objects. Either shut it down or put the
tablespaces in backup mode.
- Snap off the copy. With EMC, you break the pre-established
mirror. With IBM ESS Copy Services you create a Time Zero Copy,
which instantiates a copy volume. This happens in seconds.
- Unquiesce the database objects.
- Mount the copied volumes to your backup server and back up
the data.
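A skeleton of the quiesce/snap/unquiesce sequence in shell, where
snap_copy_volumes stands in for your storage vendor's instant copy
command (TimeFinder, ESS Copy Services, etc.) and is hypothetical:
# put every permanent tablespace into backup mode
svrmgrl <<! > /dev/null
connect internal
set termout off
spool /tmp/tslist;
select tablespace_name from sys.dba_tablespaces where contents = 'PERMANENT';
spool off;
exit;
!
sed -e "1,2d" -e "\$d" /tmp/tslist | while read TABLESPACE
do
svrmgrl <<! > /dev/null
connect internal
alter tablespace $TABLESPACE begin backup;
exit;
!
done
snap_copy_volumes   # hypothetical vendor-specific instant copy step
# take the tablespaces back out of backup mode
sed -e "1,2d" -e "\$d" /tmp/tslist | while read TABLESPACE
do
svrmgrl <<! > /dev/null
connect internal
alter tablespace $TABLESPACE end backup;
exit;
!
done
rm /tmp/tslist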
One advantage of this approach is that the point-in-time copy
can be a full copy of the database. In response to a disaster, this
copy can quickly be put online and refreshed from archive logs.
Fully realized commercial solutions are available for a few combinations
of platform, storage server, and backup software, but unless you
are starting from scratch, you probably have considerable investigation
and implementation work ahead of you.
Selecting a Backup Solution
Choosing an Oracle backup solution requires careful consideration
of your current and projected database configurations and resources.
Some considerations are:
- If you're running a commercial application (e.g., PeopleSoft,
SAP), get recommendations from the software vendor. Getting a
good backup may require preparing the application in some way,
or getting coordinated backups of flat files in addition to the
database. SAP, in particular, has an integral backup system that
you should use.
- Use backup scripts only if your DBAs won't adopt RMAN,
or if you aren't running Oracle version 8.x. Your DBAs should
be responsible for database backups; they will ultimately be responsible
for recovery. Let your DBAs decide about exports because only
they will be able to restore from them.
- If your databases are small (under 100 GB) and permit downtime,
do cold backups. If the allowed downtime doesn't permit cold
backups, do hot backups. If your databases are very large (over
500 GB), your only reasonable alternative may be an "instant"
backup.
- If your database environment is large, complex, and fluid,
consider using SQL BackTrack. You should be able to justify it
on administrative savings alone.
- Integrate Oracle backup into your existing backup solution
only if it is reasonable. Your current backup solution may not
have the capacity to support the data volume generated by Oracle
backups.
Summary
Databases are ubiquitous, and growing larger every year. Most
large organizations have databases encompassing tens of gigabytes,
and very large databases of hundreds of gigabytes or even terabytes
are becoming more common. E-commerce applications riding on Oracle
databases can be created almost overnight, and generate staggering
amounts of data, and these applications bring demanding availability
and rapid-restore requirements. The price of disk is plummeting, while the
speed of network and tape is increasing much more slowly. Your chances
of encountering a database backup challenge are excellent.
Finding out whether you have a working database backup system
can be tough. Existing applications are often crusted with layers
of infrastructure that obscure basics like backup. Talk to the DBAs.
Look for recovery procedures and work backwards. Make sure that
the backup system fulfills the service-level objectives for the
application and ensure that you can recover the server OS and other
infrastructure.
Select your backup strategy with care -- considerable investment
is required and you will have to live with it for a long time. With
luck, you will have sharp DBAs, sufficient lead-time, and management
willing to spend money. Develop the most robust strategy you can
within your organization's budget, capability, and buy-vs-build
philosophy.
Further Reading
Oracle8 Backup & Recovery Handbook, Rama Velpuri and
Anand Adkoli, Osborne, 1998.
Unix Backup & Recovery, W. Curtis Preston, O'Reilly
and Associates, 1999.
Bill Mansfield is a practice lead with Solution Technology,
Inc., responsible for developing storage and backup solutions. He
has administered various UNIX flavors since the late '70s,
and has developed backup solutions for several Fortune 100 companies.
He can be reached at: wmansfield@solutiontechnology.com.