Small and medium databases: RMAN catalog backup and recovery solution (1)

Oracle is usually the first choice of large enterprises and large databases, thanks to its stability, high availability, cross-platform support and ability to handle very large data volumes. Even so, many small and medium-sized enterprises want a taste of Oracle, so there are plenty of small and medium databases running in Oracle environments. For cost reasons these typically run Standard Edition on Linux. Is Oracle too expensive? For a small or medium enterprise, whatever fits the budget is the best choice; for those of us who make a living from databases, of course, the more Oracle the better -- ha, the typical wage earner's mentality. To put it bluntly, cost limits high availability at small and medium enterprises, so RAC and DataGuard deployments are comparatively rare. We recently ran into exactly this situation and needed a way to simulate DataGuard to protect a database. DataGuard can switch from the primary database to the standby, or from the standby back to the primary, in real time, providing a seamless handover and avoiding the data loss a hardware failure would otherwise cause. The following describes how to use the RMAN catalog approach to simulate DataGuard and protect the data as far as possible.

1. Feasibility analysis of simulating DataGuard

a. Can the entire production database be reproduced, with the same structure, on a backup server? Yes: by hot backup, cold backup, or RMAN backup. Initialize the standby copy with a cold backup the first time.

b. We do not attempt DG's logical or physical standby, that is, no automatic or manual failover is implemented. Only a hardware failure of the Prod machine is considered, after which the database is made available on the backup server. Feasible.

c. Can the loss of data be minimized?
DG ships archivelogs regularly and applies them automatically; we can also ship archivelogs regularly, but applying them automatically is difficult, so the apply step stays manual.

d. Can the archivelogs be applied on the backup server? Yes. No matter how many tablespaces or data files are added or dropped, the data changes apply without problems.

e. The amount of data lost depends on the archivelogs that were not shipped in time and on the redo logs still on Prod. That loss cannot be helped; after all, this is not DG.

2. Backup and recovery plan

The backup plan is as follows.

System environment: Linux, Oracle 10g Standard Edition.
Database environment: the primary database is on the Prod server and the standby copy on the Bak server; database size <= 200 GB.
Backup frequency: a level-0 backup every day; every two days also works if needed. For a medium or large database we recommend a level 0/1/2 incremental strategy instead.
Backup location: the backup files are written on the Prod server, then transferred (rsync over ssh, "ftp" below) to the Bak server.
Archiving: the archive logs are regularly shipped to the Bak server, into the same archive location as on the original database.
Recovery frequency: no recovery is performed routinely, because the recovery is incomplete and the database has to be opened with resetlogs, which creates a new incarnation.
Failure handling: if the primary database on the Prod server is damaged, copy the remaining archivelogs and redo files to Bak (if still possible), then manually restore the database on the Bak server with the help of the recovery catalog database and open it. We also recommend backing up the recovery catalog database itself. There are many variations of this plan; they are not all listed here.

3. Create a recovery catalog database and its scripts

Because there are many databases on the Prod server, we create a recovery catalog database. If your environment has only a few small databases, you can use the control file instead of a recovery catalog.
Global backup and recovery scripts are stored in the recovery catalog, so a single set of scripts can be scheduled for all registered databases.
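Before the global scripts can be stored, the recovery catalog itself has to exist. A minimal sketch follows; the tablespace name, datafile path, sizes and passwords are placeholders, while `rman_user` and `CATADB` are the catalog owner and catalog database used by the scripts in this post:

```sql
-- On the catalog database (CATADB): create an owner for the recovery catalog
SQL> create tablespace rman_ts datafile '/u02/database/CATADB/rman_ts01.dbf' size 500m;
SQL> create user rman_user identified by xxx
     default tablespace rman_ts quota unlimited on rman_ts;
SQL> grant recovery_catalog_owner to rman_user;

-- Create the catalog, then register each production database in it
RMAN> connect catalog rman_user/xxx@CATADB
RMAN> create catalog;
RMAN> connect target sys/xxx@PROD
RMAN> register database;
```

Once a database is registered, `resync catalog` (as used in the backup script below) keeps the catalog in step with the target's control file.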
[SQL] -- All the global scripts stored in the recovery catalog are listed below.
-- Note: no backup path is specified, so the default flash recovery area is used.
RMAN> list global script names;

List of Stored Scripts in Recovery Catalog

    Global Scripts

       Script Name
       Description
       ----------------------------------------
       global_arch
       global_del_obso
       global_inc0
       global_restore

RMAN> print global script global_arch;

printing stored global script: global_arch
{
   allocate channel ch1 type disk maxpiecesize = 2g;
   allocate channel ch2 type disk maxpiecesize = 2g;
   setlimit channel ch1 readrate 10240 kbytes 2048000;
   setlimit channel ch2 readrate 10240 kbytes 2048000;
   crosscheck archivelog all;
   delete noprompt expired archivelog all;
   sql "alter system archive log current";
   backup as compressed backupset archivelog all delete input tag = 'archbk';
   release channel ch1;
   release channel ch2;
}

RMAN> print global script global_del_obso;

printing stored global script: global_del_obso
{
   allocate channel ch1 device type disk;
   delete noprompt obsolete redundancy 1;
   release channel ch1;
}

RMAN> print global script global_inc0;

printing stored global script: global_inc0
{
   configure retention policy to redundancy 1;
   configure backup optimization on;
   configure controlfile autobackup on;
   allocate channel ch1 device type disk maxpiecesize = 5g;
   allocate channel ch2 device type disk maxpiecesize = 5g;
   setlimit channel ch1 readrate 10240 kbytes 4096000;
   setlimit channel ch2 readrate 10240 kbytes 4096000;
   backup as compressed backupset incremental level 0 database tag = 'inc';
   release channel ch1;
   release channel ch2;
   execute global script global_arch;
   execute global script global_del_obso;
}

RMAN> print global script global_restore;

printing stored global script: global_restore
{
   restore controlfile;
   sql 'alter database mount';
   crosscheck backup;
   delete noprompt expired backup;
   crosscheck copy;
   delete noprompt expired copy;
   allocate channel ch1 type disk;
   allocate channel ch2 type disk;
   restore database;
   release channel ch1;
   release channel ch2;
   shutdown immediate;
}
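When the Prod server fails, `global_restore` is only half of the story: after the restore you still have to apply the shipped archivelogs and open with resetlogs. A hedged sketch of the manual sequence on the Bak server follows; the exact prompts and the point at which you type CANCEL depend on which logs actually survived:

```sql
-- On the Bak server, with the Prod backupsets and archivelogs already rsync'd over
RMAN> connect target /
RMAN> connect catalog rman_user/xxx@CATADB
RMAN> startup nomount;
RMAN> run { execute global script global_restore; }
-- global_restore restores the controlfile and datafiles, then shuts the instance down

SQL> startup mount;
SQL> recover database using backup controlfile until cancel;
-- feed it the shipped archivelogs (and surviving redo logs, if any were copied),
-- then type CANCEL
SQL> alter database open resetlogs;
```

Opening with resetlogs creates a new incarnation, which is why the plan above recommends backing up the recovery catalog database as well.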
4. Create an RMAN backup shell script
[shell] oracle@BKDB01p:/u02/database/common/rman_scripts> more db_bak_rman_catalog.sh
#!/bin/bash
##===========================================================
## File name: db_bak_rman_catalog.sh
## Usage:     db_bak_rman_catalog.sh <$ORACLE_SID>
## Desc:
##    The script takes a level-0 RMAN backup of the database
##    via the recovery catalog, then rsyncs the backupset to
##    the backup server.
##===========================================================

# User specific environment and startup programs
if [ -f ~/.bash_profile ]; then
    . ~/.bash_profile
fi

# --------------------------
# Check SID
# --------------------------
if [ -z "${1}" ]; then
    echo "Usage: "
    echo "  `basename $0` ORACLE_SID"
    exit 1
fi

# -------------------------------
# Set environment here
# -------------------------------
ORACLE_SID=${1};              export ORACLE_SID
TIMESTAMP=`date +%Y%m%d%H%M`; export TIMESTAMP
LOG_DIR=/u02/database/${ORACLE_SID}/backup
export LOG_DIR
RMAN_LOG=${LOG_DIR}/${ORACLE_SID}_bak_${TIMESTAMP}.log
SSH_LOG=${LOG_DIR}/${ORACLE_SID}_bak_full_${TIMESTAMP}.log
MAIL_DIR=/users/oracle/sendEmail-v1.56
MAIL_FM=oracle@BKDB01p
RETENTION=5

echo "----------------------------------------------------------------" >>${SSH_LOG}
echo "Step 1. Start rman to backup at `date`."                          >>${SSH_LOG}
echo "----------------------------------------------------------------" >>${SSH_LOG}
$ORACLE_HOME/bin/rman log=${RMAN_LOG} <<EOF
connect target sys/xxx@${ORACLE_SID}
connect catalog rman_user/xxx@CATADB
resync catalog;
run {execute global script global_inc0;}
exit;
EOF
RV=$?

cat ${RMAN_LOG} >>${SSH_LOG}
echo "" >>${SSH_LOG}
echo "=====>MSG1: RMAN backup end at `date`." >>${SSH_LOG}

if [ $RV -ne "0" ]; then
    echo "" >>${SSH_LOG}
    echo "=====>MSG2: RMAN backup error at `date`." >>${SSH_LOG}
    $MAIL_DIR/sendEmail -f $MAIL_FM -u "Failed RMAN backup for $ORACLE_SID on `hostname`." \
        -t dba@12306.com -o message-file=${SSH_LOG}
    exit
else
    echo "" >>${SSH_LOG}
    echo "=====>MSG2: No error found during RMAN backup period at `date`" >>${SSH_LOG}
    rm -rf ${RMAN_LOG} 2>/dev/null
fi

echo "-------------------------------------------------------------------------" >>${SSH_LOG}
echo "Step 2. Start ftp backupset to backup server at `date`."                    >>${SSH_LOG}
echo "-------------------------------------------------------------------------" >>${SSH_LOG}
SRC_DB_BAK_DIR=/u02/database/${ORACLE_SID}/flash_recovery_area/${ORACLE_SID}
SRC_ADD=10.1.2.101
TARG_DB_BAK_DIR=/u02/database/${ORACLE_SID}/flash_recovery_area
RSYN_LOG=${LOG_DIR}/rsync_${TIMESTAMP}.log

# rsync (over ssh) is used to transfer the backupset to the bak server.
rsync -avzSH --progress --delete-after oracle@${SRC_ADD}:${SRC_DB_BAK_DIR} ${TARG_DB_BAK_DIR} >${RSYN_LOG} 2>&1
RV=$?

cat ${RSYN_LOG} >>${SSH_LOG}
if [ $RV -ne "0" ]; then
    echo "" >>${SSH_LOG}
    echo "=====>MSG3: FTP backupset error at `date`." >>${SSH_LOG}
    MAIL_SUB="Failed archive log sync for $ORACLE_SID on `hostname` at `date`."
    $MAIL_DIR/sendEmail -f $MAIL_FM -u "$MAIL_SUB" -t dba@12306.com -o message-file=${SSH_LOG}
    exit
else
    echo "" >>${SSH_LOG}
    echo -e "=====>MSG3: No error found during FTP period." >>${SSH_LOG}
    rm -rf ${RSYN_LOG} 2>/dev/null
fi

echo "-------------------------------------------------------------------------" >>${SSH_LOG}
echo "Step 3. RMAN backup and ftp backupset finished at `date`."                  >>${SSH_LOG}
echo "-------------------------------------------------------------------------" >>${SSH_LOG}
MAIL_SUB="Successfully completed ${ORACLE_SID} RMAN backup and ftp backupset at `date`."
$MAIL_DIR/sendEmail -f $MAIL_FM -u "$MAIL_SUB" -t dba@12306.com -o message-file=${SSH_LOG}

# ------------------------------------------------
# Removing files older than $RETENTION parameter
# ------------------------------------------------
find ${LOG_DIR} -name "*.*" -mtime +$RETENTION -exec rm {} \;
exit
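One detail worth calling out in the script above: the `$?` captured immediately after the heredoc is rman's own exit status, and that value is what drives the success/failure mail. A minimal, Oracle-free sketch of the same pattern, where `fake_rman` is a stand-in for the real binary:

```shell
#!/bin/bash
# Stand-in for rman: reads the command stream from stdin, exits 0 on success.
fake_rman() { grep -q "run" ; }

fake_rman <<EOF
connect target sys/xxx@ORCL
run {execute global script global_inc0;}
exit;
EOF
RV=$?    # exit status of fake_rman itself, not of the heredoc redirection

if [ $RV -ne 0 ]; then
    STATUS="backup failed, send alert mail"
else
    STATUS="backup ok, remove rman log"
fi
echo "$STATUS"
```

The same `RV=$?` test is reused after the rsync step, so each stage can abort and mail the DBA independently.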
5. Automatic FTP archivelog script
[shell] oracle@BKDB01p:/u02/database/common/rman_scripts> more autoftp_arch.sh
#!/bin/bash
ORACLE_SID=${1};              export ORACLE_SID
TIMESTAMP=`date +%Y%m%d%H%M`; export TIMESTAMP
LOG_DIR=/u02/database/${ORACLE_SID}/backup

# Define FTP variables
SRC_DB_BAK_DIR=/u02/database/${ORACLE_SID}/archive
SRC_ADD=10.1.2.101
TARG_DB_BAK_DIR=/u02/database/${ORACLE_SID}
RSYN_LOG=${LOG_DIR}/rsync_arc_${TIMESTAMP}.log
RSYN_ERR_LOG=${LOG_DIR}/rsync_arc_${TIMESTAMP}_err.log

rsync -avzSH --progress --delete-after oracle@${SRC_ADD}:${SRC_DB_BAK_DIR} ${TARG_DB_BAK_DIR} >${RSYN_LOG} 2>${RSYN_ERR_LOG}
RV=$?

if [ ! -s ${RSYN_ERR_LOG} ]; then
    rm -rf ${RSYN_ERR_LOG} 2>/dev/null
else
    mail -s "Failed FTP archive log for $ORACLE_SID on `hostname`" dba@12306.com <${RSYN_ERR_LOG}
fi
exit
6. Deploy the backup script to crontab. If you have only a few databases, deploy the backup script and the automatic FTP archivelog script to crontab directly. If you have many databases, we recommend wrapping the per-database scripts in a single driver script and scheduling that instead. full_bak_by_rman.sh is such a driver: it simply calls db_bak_rman_catalog.sh <$ORACLE_SID> once per database, and the other scripts whose names start with "full" follow the same principle.
# Rman backup and restore database
0 1 * * 1-6 /u02/database/common/rman_scripts/full_bak_by_rman.sh
0 3 * * 1-6 /u02/database/common/rman_scripts/scripts  # This is the script used to restore
# Auto ftp archive log from prod to bak server
*/16 7-20 * * 1-6 /u02/database/common/rman_scripts/full_autoftp_arch.sh
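The original post does not show full_bak_by_rman.sh itself. A plausible minimal sketch is just a loop over db_bak_rman_catalog.sh; the SID list below is an assumption to be replaced with your own instance names:

```shell
#!/bin/bash
# Hypothetical full_bak_by_rman.sh: back up every listed database in turn.
# SID_LIST is an assumption -- list your own ORACLE_SIDs here.
SID_LIST="PROD1 PROD2 PROD3"
BACKUP_SCRIPT=/u02/database/common/rman_scripts/db_bak_rman_catalog.sh

COUNT=0
for SID in ${SID_LIST}; do
    echo "Launching level-0 backup for ${SID}"
    # Guard so the loop keeps going even if the script is absent on this host
    [ -x "${BACKUP_SCRIPT}" ] && ${BACKUP_SCRIPT} ${SID}
    COUNT=$((COUNT + 1))
done
echo "Scheduled ${COUNT} database backups"
```

full_autoftp_arch.sh would wrap autoftp_arch.sh in exactly the same way, one call per ORACLE_SID.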