Oracle Database Backup and Recovery Basics
I. Why back up data?
The main causes of data loss are:
1. Media failure.
2. User errors.
3. Server crashes.
4. Computer viruses.
5. Unpredictable factors.
Oracle distinguishes four types of failure.
1. Statement failure
A logical error during the execution of a SQL statement causes a statement failure; for example, the statement fails if the SQL you submitted is invalid. Oracle handles statement failures automatically: it rolls back the effects of the failed statement and returns control to the application.
2. User process failure
A user process failure occurs when a user program fails and can no longer access the Oracle database. It only prevents the current user from operating on the database and does not affect other user processes. When a user process fails, the process monitor (PMON) automatically performs process recovery.
3. Instance failure
An instance failure occurs when an Oracle database instance cannot continue running because of a hardware or software problem. Hardware problems include unexpected power outages; software problems include operating system crashes. After an instance failure, Oracle performs instance recovery automatically: it restores the database to the state of the last committed transaction before the failure and rolls back any uncommitted data.
4. Media failure
A media failure occurs when a database file, or the disk it resides on, cannot be read or written.
II. Backup
A backup is a copy of the database written to disk. Backups are categorized as follows:
1. Classified from the physical and logical perspectives:
(1) Physical backup: a backup of the database's operating system files (such as data files, control files, and log files). Physical backups are divided into offline (cold) backups, performed while the database is shut down, and online (hot) backups, performed while the database is running in archive log mode.
(2) Logical backup: a backup of the database's logical components (such as tables, stored procedures, and other schema objects).
2. Classified by backup strategy:
(1) Full backup: a complete backup of the data is performed each time.
(2) Incremental backup: backs up only the files modified since the last full or incremental backup. The advantage is that the backup volume is small and each backup is fast; the disadvantage is that recovery depends on the whole chain of backups, which is riskier. For example, if a full backup is taken on Monday and incremental backups are taken Tuesday through Friday, then recovering from damage to Friday's data requires Monday's full backup plus every incremental backup from Tuesday through Friday.
(3) Differential backup: backs up the files modified since the last full backup. Each backup therefore takes longer, but recovery needs only two backup sets (the last full backup and the last differential backup). For example, if a full backup is taken on Monday and differential backups are taken Tuesday through Friday, then recovering from damage to Friday's data requires only Monday's full backup and Thursday's differential backup.
The difference between them: an incremental strategy requires all incremental backups to be retained for recovery, while a differential strategy requires only the most recent differential backup.
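The restore chains described above can be made concrete with a short simulation (Python used here purely for illustration; the weekday labels are hypothetical, this is not Oracle code): given a week of backups, it computes which backup sets a restore needs under each strategy.

```python
# Illustrative sketch (not Oracle code): which backup sets a restore needs
# under an incremental vs. a differential strategy.

def restore_chain(backups, strategy):
    """backups: list of ('full'|'partial', label) in chronological order.
    Returns the labels of the backup sets to restore, oldest first."""
    # Every restore starts from the most recent full backup.
    last_full = max(i for i, (kind, _) in enumerate(backups) if kind == "full")
    chain = [backups[last_full][1]]
    partials = [label for kind, label in backups[last_full + 1:] if kind == "partial"]
    if strategy == "incremental":
        chain += partials          # need every incremental taken since the full
    elif strategy == "differential":
        chain += partials[-1:]     # need only the latest differential
    return chain

week = [("full", "Mon"), ("partial", "Tue"), ("partial", "Wed"),
        ("partial", "Thu"), ("partial", "Fri")]

print(restore_chain(week, "incremental"))   # ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
print(restore_chain(week, "differential"))  # ['Mon', 'Fri']
```

The two print calls show the trade-off directly: the incremental restore drags in the whole week's chain, while the differential restore needs only the full backup and the last differential.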
III. Recovery
Recovery uses backed-up data files or control files to re-establish a complete database after a failure. There are two types:
1. Instance recovery: performed automatically by Oracle when an instance fails.
2. Media recovery: performed after a media failure. Media recovery is divided into complete recovery and incomplete recovery.
Complete recovery: restores the database to its state at the moment of failure.
Incomplete recovery: restores the database to its state at some point in time before the failure.
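The two kinds of media recovery can be pictured as replaying logged changes over a restored backup: complete recovery applies every change, incomplete recovery stops at a chosen point in time. A conceptual sketch (Python, purely illustrative; the change records are hypothetical, this is not Oracle code):

```python
# Conceptual sketch (not Oracle code): media recovery as log replay.
# A restored backup is rolled forward by applying logged changes; complete
# recovery applies them all, incomplete recovery stops at a target time.

def recover(backup, redo_log, until=None):
    """backup: dict of key -> value restored from the backup.
    redo_log: list of (timestamp, key, value) changes, in order.
    until: apply changes with timestamp <= until; None means apply all."""
    db = dict(backup)
    for ts, key, value in redo_log:
        if until is not None and ts > until:
            break  # incomplete recovery: stop before the chosen point
        db[key] = value
    return db

backup = {"emp_count": 10}
redo = [(1, "emp_count", 11), (2, "emp_count", 12), (3, "emp_count", 13)]

print(recover(backup, redo))           # complete:   {'emp_count': 13}
print(recover(backup, redo, until=2))  # incomplete: {'emp_count': 12}
```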
IV. Export
(1) Using Data Pump technology:
1. expdp export modes:
1) Database mode: exports the entire database to operating system files.
2) User (schema) mode: exports all data and metadata belonging to one or more users.
3) Table mode: exports all data and metadata of a set of tables.
4) Tablespace mode: extracts all data and metadata of the objects in a tablespace, together with any objects that depend on the objects in the specified tablespace list.
Dump file set: the set of dump files created during a single Data Pump export job is called the dump file set.
2. Command-line export
Syntax:
C:\> expdp system/password directory=pbdir dumpfile=pb.dmp full=y tables=table_list tablespaces=tablespace_list schemas=schema_list remap_schema=user1:user2 nologfile=y sqlfile=pb.sql
system/password: user name and password
directory: database directory object
dumpfile: specifies the dump file
full=y: exports the entire database
tables=table_list: the list of tables to export
schemas=schema_list: the user schemas to export
tablespaces=tablespace_list: the tablespaces to export
remap_schema=user1:user2: remaps objects from user1 to user2 (used on import)
nologfile=y: do not write a log file
sqlfile: writes the metadata (DDL statements) to the specified file (used on import)
Note: to export a full database, you must have the exp_full_database permission.
1) database method:
expdp scott/scott@accp directory=dump_dir dumpfile=full.dmp full=y
2) User Mode:
expdp scott/scott@accp directory=dump_dir dumpfile=scottschema.dmp schemas=scott
3) Table export method:
expdp scott/scott@accp directory=dump_dir dumpfile=tables.dmp tables=emp,dept,bonus,salgrade content=data_only
Note:
content=data_only exports only the table data, without the metadata; content=metadata_only exports only the metadata, without the table data. If content is not specified, both are exported.
4) tablespace export method:
expdp scott/scott@accp directory=dump_dir dumpfile=tablespace.dmp tablespaces=users
3. Data Pump export with DBMS_DATAPUMP
This method is more involved than calling expdp directly, but it makes it easy to schedule Data Pump export jobs through the database job scheduler and offers finer-grained control over the export.
DECLARE
  -- Data Pump job handle
  h1 NUMBER;
BEGIN
  -- Create a Data Pump job for a schema-mode export
  h1 := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- Define the dump file
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'es_shop.dmp');
  -- Restrict the export to the SHOP_USER schema
  DBMS_DATAPUMP.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN (''SHOP_USER'')');
  -- Start the Data Pump job
  DBMS_DATAPUMP.start_job(handle => h1);
  -- Detach from the Data Pump job
  DBMS_DATAPUMP.detach(handle => h1);
END;
-- Default save path: C:\Oracle11g\admin\orcl\dpdump
V. Import
(1) Data Pump import modes
1. Import modes
1) Full mode: loads the entire content of the export file set; the file set need not have been produced by a full-database export.
2) User (schema) mode: imports everything belonging to a list of users in the specified file set.
3) Table mode: imports the specified tables and their dependent objects from the export file.
4) Tablespace mode: imports everything in the listed tablespaces from the specified file set.
2. Command-line import
Syntax:
C:\> impdp system/password directory=pbdir dumpfile=pb.dmp full=y tables=table_list tablespaces=tablespace_list schemas=schema_list remap_schema=user1:user2 nologfile=y sqlfile=pb.sql
system/password: user name and password
directory: database directory object
dumpfile: specifies the dump file
full=y: imports the entire file set
tables=table_list: the list of tables to import
schemas=schema_list: the user schemas to import
tablespaces=tablespace_list: the tablespaces to import
remap_schema=user1:user2: remaps objects from user1 to user2
nologfile=y: do not write a log file
sqlfile: writes the metadata (DDL statements) to the specified file instead of executing it
Example:
1) import the entire database:
impdp scott/scott@accp directory=dump_dir dumpfile=full.dmp full=y
2) Import tablespace:
impdp scott/scott@accp directory=dump_dir dumpfile=tablespace.dmp tablespaces=mytest
3) import all tables under the scott User:
impdp scott/scott@accp directory=dump_dir dumpfile=tables.dmp tables=emp,dept,bonus,salgrade
4) import the dept and emp tables under the exported scott user to the mytest user.
impdp scott/scott@accp directory=dump_dir dumpfile=schema.dmp tables=dept,emp remap_schema=scott:mytest
3. Data Pump import with DBMS_DATAPUMP
DECLARE
  -- Data Pump job handle
  h1 NUMBER;
BEGIN
  -- Create a Data Pump import job that reads the source over the database link orcllib
  h1 := DBMS_DATAPUMP.open(operation => 'IMPORT', job_mode => 'SCHEMA', remote_link => 'orcllib');
  -- Remap objects from the SHOP_DEV_DATA schema to the SHOP_BACK schema
  DBMS_DATAPUMP.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'SHOP_DEV_DATA', value => 'SHOP_BACK');
  -- Write the log to shop.log
  DBMS_DATAPUMP.add_file(handle => h1, filename => 'shop.log', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Start the Data Pump job
  DBMS_DATAPUMP.start_job(handle => h1);
  -- Detach from the Data Pump job
  DBMS_DATAPUMP.detach(handle => h1);
END;