Simple Backup Using impdp and expdp provided by Oracle
There is currently a need to back up the data in the production database to a backup machine at 23:00 every night, so that the backup machine's database can be used directly the next day with yesterday's production data. (There are currently not many production records; the total count does not exceed .)
Since I am not familiar with advanced Oracle features, I plan to use only the built-in expdp and impdp tools together with shell scripts.
Assume the production database IP address is 192.168.1.20 ("20" for short) and the backup database IP address is 192.168.1.140 ("140" for short).
My idea is that 20 performs an automatic export at 23:00, the backup file is then copied to 140, and 140 imports it.
Implementation process:
1. Install NFS on 140 and share the specified directory with 20.
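The NFS setup in step 1 can be sketched as follows; the share path, export options, and mount point are assumptions, adjust them to your environment:

```
# On 140 (NFS server): share the Data Pump directory with 20.
# /etc/exports (export options are an assumption):
/oracle/admin/ORCL/dpdump 192.168.1.20(rw,sync,no_root_squash)

# Reload the export table:
exportfs -a

# On 20 (NFS client): mount the share at an assumed mount point:
mkdir -p /mnt/nfsbak
mount -t nfs 192.168.1.140:/oracle/admin/ORCL/dpdump /mnt/nfsbak
```

no_root_squash matters here because, as noted at the end of this article, the copy and import are performed as root.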
2. Add a backup script on 20 and use crontab to run the expdp statement on schedule.
#!/bin/sh
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
export DATA_DIR=/oracle/admin/orcl/dpdump
export LOGS_DIR=/oracle/admin/orcl/dpdump
export BAKUPTIME=`date +%Y%m%d%H`
export NLS_LANG=american_america.AL32UTF8
echo "Starting backup..."
echo "Backup file path /oracle/admin/orcl/dpdump/HJXD_$BAKUPTIME.dmp"
expdp HJXD/hjxd directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp schemas=HJXD
echo "Backup completed."
echo "Deleting dump files older than 30 days..."
find /oracle/admin/orcl/dpdump/ -mtime +30 -type f -name '*.dmp' -exec rm -f {} \;
echo "Old dump files deleted."
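The script above can then be scheduled with crontab; the script path and log location below are assumptions:

```
# crontab -e on 20: run the export script every night at 23:00
0 23 * * * /home/oracle/expdp_bak.sh >> /home/oracle/expdp_bak.log 2>&1
```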
3. Add a cron task on 20 to copy the backup file to the NFS shared directory.
#!/bin/bash
myfilepath=/oracle/admin/ORCL/dpdump/
filename=HJXD_`date -d "1 day ago" +%Y%m%d`23.dmp
cp /oracle/admin/orcl/dpdump/$filename $myfilepath
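The copy script relies on the naming convention produced by the backup script: the export at 23:00 writes HJXD_&lt;YYYYMMDDHH&gt;.dmp, so the previous night's file is HJXD_&lt;yesterday&gt;23.dmp. A minimal sketch of that computation:

```shell
#!/bin/sh
# Compute yesterday's dump file name as the copy script does:
# the date part is yesterday, the trailing 23 is the export hour.
filename=HJXD_`date -d "1 day ago" +%Y%m%d`23.dmp
echo "$filename"
```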
4. On 140, use a cron task to import the data file from the copied NFS shared directory into the 140 database; the import log is written to fullexp.log.
PATH=$PATH:$HOME/bin
export PATH
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
sqlplus "sys/123456 as sysdba" <<EOF
@/oracle/admin/ORCL/dpdump/impdp.sql;
EOF
export BAKUPTIME=`date -d "1 day ago" +%Y%m%d23`
chown oracle:oinstall /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp
echo "Starting impdp..."
echo "impdp file path /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp"
impdp hjxdsas/123456 directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp logfile=fullexp.log remap_schema=HJXD:hjxdsas table_exists_action=replace
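One weakness of this step is that impdp runs even if the copy from 20 failed, and since the impdp.sql script drops the hjxdsas user first, the backup database would be left empty. A minimal guard could be added before the sqlplus call; this is a hypothetical addition, not part of the original script, and the path is illustrative:

```shell
#!/bin/sh
# check_dump: succeed only when the given dump file exists and is non-empty.
check_dump() {
    [ -s "$1" ]
}

# Illustrative path, following the naming convention used above.
DUMPFILE=/oracle/admin/ORCL/dpdump/HJXD_`date -d "1 day ago" +%Y%m%d23`.dmp
if check_dump "$DUMPFILE"; then
    echo "dump file present, safe to run impdp"
else
    echo "dump file missing or empty, skipping import" >&2
fi
```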
The impdp.sql file:
Drop user hjxdsas cascade;
Create user hjxdsas identified by 123456
Default tablespace hjxd
Temporary tablespace temp;
Grant dba, create any trigger, drop any table, select any table, select any sequence, create user to hjxdsas identified by 123456;
Grant connect, resource to hjxdsas;
Grant exp_full_database, imp_full_database to hjxdsas;
The scripts above set the Oracle environment variables explicitly because the file is copied by the root user: NFS requires matching UIDs on both sides, and the UID of the oracle user on 20 and on 140 is not necessarily the same, whereas root's UID is. Running the cron task as the oracle user can therefore produce errors, so the Oracle environment variables are set for root directly and the data import is run as root.