Summary of Database Backup and Restoration Technology
1. exp/imp (exporting and importing, loading and unloading the database)
1.1 Basic commands
1. Get help
$ exp help=y
$ imp help=y
2. Three ways of working
(1) Interactive mode
$ exp                     // enter the required parameters as prompted
(2) Command-line mode
$ exp user/pwd@dbname file=/oracle/test.dmp full=y      // enter the required parameters on the command line
(3) Parameter-file mode
$ exp parfile=username.par      // enter the required parameters in the parameter file
Contents of the parameter file username.par:
userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y
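In practice the parameter file is often generated by a wrapper script rather than edited by hand. A minimal sketch (the credentials and paths are the placeholders from the text above, not real accounts):

```shell
# Hypothetical sketch: generate username.par from a shell here-document so
# backup jobs stay reproducible. All values are the placeholders used above.
cat > username.par <<'EOF'
userid=username/userpassword
buffer=8192000
compress=n
grants=y
file=/oracle/test.dmp
full=y
EOF
# exp would then be invoked as:  exp parfile=username.par
grep -c '=' username.par    # every line is a key=value pair
```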
3. Three modes
(1) Table mode: export/import the data of specified tables.
Export:
Export one or more tables:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2
Export part of a table's data:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1 query=\"where col1=\'...\' and col2 \<...\"
Import:
Import one or more tables:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log tables=table1,table2 fromuser=dbuser touser=dbuser2 commit=y ignore=y
(2) User mode: export/import all objects and data of a specified user.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=(xx,yy)
Export only the object definitions, not the data (rows=n):
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log owner=user rows=n
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log fromuser=dbuser touser=dbuser2 commit=y ignore=y
(3) Full-database mode: export/import all objects in the database.
Export:
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log full=y
Import:
$ imp user/pwd file=/dir/xxx.dmp log=xxx.log full=y commit=y ignore=y
1.2 Advanced options
1. Splitting into multiple files
Export using multiple fixed-size files: this method is usually used when the table data volume is large and a single dump file might exceed the file-system limit.
$ exp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m log=xxx.log full=y
Import from multiple fixed-size files:
$ imp user/pwd file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=xxx fromuser=dbuser touser=dbuser2 commit=y ignore=y
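exp fills 1.dmp up to filesize, then continues into 2.dmp, and so on; on import the pieces must be listed in the same order. The same fixed-size chunking can be seen locally with coreutils split, a stand-in sketch that needs no Oracle installation (file names are illustrative):

```shell
# Local analogy for filesize splitting: a 2500-byte stand-in "dump" is cut
# into 1000-byte chunks, then reassembled in order, which is why
# file=1.dmp,2.dmp,... must list the pieces in export order on import.
printf 'A%.0s' $(seq 1 2500) > bigdump.dat     # 2500-byte stand-in for a dump
split -b 1000 -d bigdump.dat chunk.            # chunk.00, chunk.01, chunk.02
cat chunk.00 chunk.01 chunk.02 > rejoined.dat  # order matters
cmp -s bigdump.dat rejoined.dat && echo "identical"
```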
2. Incremental export/import
// After Oracle 9i, exp no longer supports inctype
Incremental export and import must be performed as SYS or SYSTEM.
Incremental export comes in three types:
(1) Complete incremental export (complete) // backs up the entire database
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=complete
(2) Incremental export: exports the data changed since the last backup of any type.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=incremental
(3) Cumulative export: exports only the changes in the database since the last complete export.
$ exp user/pwd file=/dir/xxx.dmp log=xxx.log inctype=cumulative
Incremental import:
$ imp usr/pwd full=y inctype=system|restore
Where:
SYSTEM: imports SYSTEM objects
RESTORE: imports all user objects
3. Export/import with SYSDBA
Used:
1. For Oracle technical support
2. For tablespace transport
Examples:
$ imp \'usr/pwd@instance as sysdba\' tablespaces=xx transport_tablespace=y file=xxx.dmp datafiles=xxx.dbf
$ imp file=expdat.dmp userid="sys/password as sysdba" transport_tablespace=y "datafiles=(c:\temp\app_data,c:\temp\app_index)"
4. Tablespace transport (fast)
Tablespace transport is a method added in 8i for moving data quickly between databases. It moves a tablespace as fast as copying files, because instead of exporting the data to a dmp file, only the metadata is exported and the data files themselves are copied. In some cases this is very useful.
1. There are some rules for transporting tablespaces (before 10g):
(1) The source and target databases must run on the same hardware platform.
(2) The source and target databases must use the same character set.
(3) The source and target databases must have the same data block size.
(4) The target database must not already have a tablespace with the same name as the transported one.
(5) SYS objects cannot be transported.
(6) The transported object set must be self-contained.
(7) Some objects, such as materialized views and function-based indexes, cannot be transported.
(Across platforms with the same byte order, transport is possible by changing the data file header.)
(10g supports cross-platform tablespace transport: as long as the operating systems have the same byte order, tablespaces can be transported; otherwise the file format must be converted with RMAN, omitted here.)
2. Check whether a tablespace meets the transport criteria:
SQL> exec sys.dbms_tts.transport_set_check('tablespace_name', true);
SQL> select * from sys.transport_set_violations;
If no rows are selected, the tablespace contains only table data and is self-contained. Some non-self-contained tablespaces, such as a data tablespace and its index tablespace, can be transported together.
3. Procedure:
For more information, see the ORACLE online documentation.
1. Set the tablespaces to read-only (assuming the tablespace names are APP_DATA and APP_INDEX):
SQL> alter tablespace app_data read only;
SQL> alter tablespace app_index read only;
2. Issue the EXP command:
SQL> host exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(app_data,app_index)
Note:
· To execute EXP inside SQL*Plus, USERID must be enclosed in triple quotation marks; on UNIX, also take care escaping the "/".
· In 816 and later, the operation must be performed as sysdba.
· The command must be written on a single line (it is shown here on two lines only because of display width).
3. Copy the .dbf data files (and the .dmp file) to another location, i.e. to the target database.
This can be done with cp (unix), copy (windows), or by transferring the files via ftp (which must be in binary mode).
4. Set the local tablespaces back to read/write:
SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;
5. Attach the data files to the target database (specify the data file names directly):
(The tablespaces must not already exist there. You must create the owning users first, or use fromuser/touser.)
$ imp file=expdat.dmp userid="""sys/password as sysdba""" transport_tablespace=y datafiles=("c:\app_data.dbf","c:\app_index.dbf") tablespaces=app_data,app_index tts_owners=hr,oe
6. Set the tablespaces in the target database to read/write:
SQL> alter tablespace app_data read write;
SQL> alter tablespace app_index read write;
1.3 Optimization
1. Speeding up exp
Increase LARGE_POOL_SIZE to speed up exp.
Use the direct path (direct=y), so data need not be evaluated and checked through the buffer cache.
Set a large buffer; exporting large objects with a small buffer will fail.
Keep the export file off the drives ORACLE is using.
Do not export to an NFS file system.
In UNIX environments, export through a pipe to improve exp performance.
2. Speeding up imp
Create an indexfile and build the indexes after the data import completes.
Place the import file on a different drive.
Increase DB_BLOCK_BUFFERS.
Increase LOG_BUFFER.
Run ORACLE in non-archive mode: alter database noarchivelog;
Create a large tablespace and a large rollback segment, and take the other rollback segments OFFLINE; make the rollback segment about 1/2 the size of the largest table.
Use COMMIT=N.
Use ANALYZE=N.
Import in single-user mode.
In UNIX environments, import through a pipe to improve imp performance.
3. Speeding up exp/imp with unix/Linux pipes
Exporting data through a pipe:
1. Create the pipe with mknod:
$ mknod /home/exppipe p   // create a named pipe under /home; note the parameter p
2. Export into the pipe, compressing on the fly with exp and gzip working together:
$ exp test/test file=/home/exppipe & gzip < /home/exppipe > exp.dmp.gz
$ exp test/test tables=bitmap file=/home/newsys/test.pipe & gzip < /home/newsys/test.pipe > exp.dmp.gz
3. After the export completes, delete the pipe:
$ rm -rf /home/exppipe
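The mechanics of the three steps above can be tried without Oracle installed: in this sketch cat stands in for exp as the data producer, while gzip drains the named pipe in the background exactly as in the commands above (all file names are illustrative):

```shell
# Runnable sketch of the pipe-export technique with no Oracle present:
# `cat` plays the role of exp; gzip reads the FIFO in the background.
mkfifo ./exp.pipe                        # same effect as: mknod ./exp.pipe p
printf 'fake dump data\n' > source.dat   # stand-in for the database contents
gzip < ./exp.pipe > exp.dmp.gz &         # reader starts first, in background
cat source.dat > ./exp.pipe              # producer, like: exp ... file=./exp.pipe
wait                                     # let gzip finish writing exp.dmp.gz
rm ./exp.pipe
gunzip -c exp.dmp.gz                     # prints: fake dump data
```

The key design point is that the compressed file is produced while the export runs, so no uncompressed dump ever touches the disk.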
Export script:
### Back up an ORACLE database on UNIX using a pipe
###### using "export" and "tar" commands to back up an oracle database #######
trap "" 1    # survive hangup, as under nohup
LOGFILE=/opt/bakup/log/bakup_ora.log
export LOGFILE
DUMPDIR=/archlog_node1
export DUMPDIR
exec >$LOGFILE 2>&1
echo
echo "exp begin at `date`"
echo
# clear old result file
cd $DUMPDIR
if [ -f exp.dmp.Z ]
then
  echo "clear old result file"
  rm exp.dmp.Z
fi
# make pipe
mkfifo exp.pipe
chmod a+rw exp.pipe
# gain the dmp.Z file
compress < exp.pipe > exp.dmp.Z &
su - oracle -c "exp userid=ll/ll file=$DUMPDIR/exp.pipe full=y buffer=20000000"
echo
echo "exp end at `date`"
echo
# rm pipe
rm exp.pipe
# tar the dmp.Z file to tape
mt -f /dev/rmt/0 rew
tar cvf /dev/rmt/0 exp.dmp.Z
echo
echo "tar end at `date`"
echo
Importing the generated file through a pipe:
1. Create the pipe with mknod:
$ mknod /home/exppipe p
2. Import the generated compressed file:
$ imp test/test file=/home/exppipe fromuser=test touser=macro &
$ gunzip < exp.dmp.gz > /home/exppipe
3. Delete the pipe:
$ rm -fr /home/exppipe
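The import side mirrors the export side. In the local sketch below, a background reader stands in for imp and drains the pipe while gunzip feeds it, following the three steps above (file names are illustrative):

```shell
# Runnable sketch of the pipe-import technique with no Oracle present:
# `cat` plays the role of imp as the consumer; gunzip feeds the FIFO.
printf 'restored rows\n' | gzip > exp.dmp.gz   # pretend this came from the export
mknod ./imp.pipe p                             # step 1: create the pipe
cat ./imp.pipe > restored.dat &                # consumer, like: imp ... file=./imp.pipe &
gunzip < exp.dmp.gz > ./imp.pipe               # step 2: feed the pipe
wait
rm -f ./imp.pipe                               # step 3: delete the pipe
cat restored.dat                               # prints: restored rows
```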
4. General steps for a full-database import
Note: during export, use toad or another tool to extract the scripts that create the primary keys and indexes of the source database.
1. Import only the structure, with rows=n:
$ imp system/manager file=exp.dmp log=imp.log full=y rows=n indexes=n
2. Disable the business users' triggers and drop their primary keys and unique indexes:
spool drop_pk_u.sql
select 'alter table '||table_name||' drop constraint '||constraint_name||';'
from user_constraints
where constraint_type in ('P','U');
/
spool off
spool disable_trigger.sql
select 'alter trigger '||trigger_name||' disable;'
from user_triggers;
/
spool off
@drop_pk_u.sql
@disable_trigger.sql
3. Import the whole database with ignore=y:
$ imp system/manager file=exp.dmp log=imp.log full=y ignore=y
4. Using toad or another tool, extract the scripts that create the primary keys and indexes from the source database, create the primary keys and indexes in the target database, and re-enable the triggers.
1.4 FAQs
1. Character set problems
ORACLE's multi-language support is designed to handle languages and character sets worldwide; it governs currency formats, sorting rules, and the display of CHAR, VARCHAR2, CLOB, and LONG columns. Its two main aspects are the national language setting and the character set setting. The language setting determines the language used for the interface and messages; the character set determines the encoding rules the database uses to store character data (such as text).
ORACLE character set configuration consists of the database character set and the client character set environment. On the database side, the character set is fixed when the database is created and is stored in the database's props$ table. On the client side the character set environment is simpler, mainly the environment variable or registry entry NLS_LANG. Note that the priority of NLS_LANG settings is: parameter file < registry < environment variable < alter session. If the client character set differs from the server character set and the conversion between them is not compatible, then data displayed on the client, and data involved in export/import across the character sets, will be garbled.
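On a UNIX client, the character set environment is set through the NLS_LANG environment variable before running exp/imp. A minimal example (the value shown, American language with the ZHS16GBK character set, is only an illustration; it must match what your client actually uses):

```shell
# Hedged example: set the client character set for exp/imp via NLS_LANG.
# The value below is illustrative, not a recommendation for any real system.
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
echo "$NLS_LANG"     # prints: AMERICAN_AMERICA.ZHS16GBK
# exp user/pwd file=/dir/xxx.dmp ...  would now run under this client charset
```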
With a few tricks, you can convert a dump for export/import between databases with different character sets. You need a binary file editor, such as UltraEdit (uedit32). Open the exported dmp file in hex mode and read the 2nd and 3rd bytes, for example 00 01. Convert them to decimal, here 1, and look up the character set with the function NLS_CHARSET_NAME:
SQL> select nls_charset_name(1) from dual;
NLS_CHARSET_NAME(1)
-------------------
US7ASCII
So the dmp file's character set is US7ASCII. To convert it to ZHS16GBK, use NLS_CHARSET_ID to obtain that character set's number:
SQL> select nls_charset_id('zhs16gbk') from dual;
NLS_CHARSET_ID('ZHS16GBK')
--------------------------
852
Convert 852 to hexadecimal, which is 354, and replace the 2nd and 3rd bytes 00 01 with 03 54. The dmp file's character set is thereby converted from US7ASCII to ZHS16GBK, and the file can be imported into a database with the ZHS16GBK character set.
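The byte patch can be rehearsed on a fabricated stand-in file (a real dmp header has more structure than these four bytes; the offsets follow the text above, counting bytes from 1):

```shell
# Demonstration of the header patch on a fabricated 4-byte stand-in "dmp":
# bytes 2-3 hold the charset id 00 01 (US7ASCII); dd overwrites them in
# place with 03 54 (852 decimal = ZHS16GBK). Octal escapes: \124 = 0x54.
printf '\003\000\001\000' > fake.dmp    # stand-in header, bytes 2-3 = 00 01
printf '%04x\n' 852                     # prints 0354: 852 decimal in hex
printf '\003\124' | dd of=fake.dmp bs=1 seek=1 count=2 conv=notrunc 2>/dev/null
od -An -tx1 fake.dmp                    # bytes are now: 03 03 54 00
```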
2. Version problems
Exp/Imp can be used across versions; for example, you can export and import between version 7 and version 8 databases. To do this you must choose the correct tool version. The rules are:
· Always use the IMP version that matches the target database: to import into an 816 database, use the 816 IMP tool.
· Always use the EXP version that matches the lower of the two database versions: for example, for mutual import between 815 and 816, use the 815 EXP tool.
Imp and exp are not forward compatible: imp can import files generated by a lower-version exp, but cannot import files generated by a higher-version exp.
Details of hot backup technology
1. Background of the enterprise database hot backup solution
With the rapid progress of computerization and the wide application of information technology, data has become an indispensable part of the daily operations of enterprises and public institutions, and the basis for management decisions. But computers also bring trouble: computer data is very easily lost or damaged. According to research by professional institutions, the loss of 0.13 million MB of data costs the marketing department about RMB 0.16 million, the finance department about RMB 0.8 million, and the engineering department still more; and if lost key data cannot be recovered within 15 days, the enterprise may be driven out of business. As computer systems become increasingly indispensable data carriers for enterprises, how to use data backup to ensure data security has become an urgent topic of study.
Data damage may come from human factors or from unforeseeable ones, including the following:
(1) Computer hardware faults. A computer is a machine, and its hardware is the foundation of the entire system. Improper use, poor product quality, or aging of components can damage hardware beyond use, for example a damaged track on a hard disk.
(2) Instability of the software system. A software system may crash and become unusable because of improper use or unreliable design.
(3) Operator error. This is a man-made accident that cannot be completely avoided, for example accidentally deleting useful data with a DELETE statement.
(4) Destructive viruses. Viruses are a major cause of system damage. As information technology develops, viruses proliferate with it; today they can damage not only software systems but also computer hardware. The well-known CIH virus, triggered on the 26th of the month, is a typical example of a virus that destroys computer hardware.
(5) Natural disasters such as fires, floods, and earthquakes. These causes are almost irresistible.
Few would imagine that a small machine-room fire could bring down all the information systems of a multinational enterprise: procurement data from dozens of production centers around the world, orders from tens of thousands of suppliers and distributors, billions of yuan of inventory information, and the annual work plans of tens of thousands of employees, all potentially gone in a minute. For this reason, the data security systems once used only by data-intensive businesses such as banks and telecom carriers are increasingly becoming precautions that ordinary enterprises must take in advance.
Today the world pays ever more attention to computer security technology, and security awareness keeps growing. This shows in the spread of computer security techniques from a few special industries to every industry, and in an irreversible shift of focus from protecting the equipment to protecting the core data. Enterprise computing is now pervasive, reaching every aspect of management; the data of subsystems such as executive query, financial management, personnel management, and purchase-sale-inventory logistics management is stored in the server's database with high real-time requirements, so server backup, and real-time database backup in particular, is both necessary and urgent. Hardware backup solutions for servers are plentiful and mature; database hot backup software exists abroad but is just getting started in China. Against this background, the "yongsi hot backup" database software was brought to market after more than a year of development and testing, filling a domestic gap. On this basis, combining enterprises' business characteristics with the data-processing characteristics of enterprise management systems, yongsi Technology launched its enterprise database hot backup solution, which quickly entered use and has withstood the test of practice.
2. General business processes of enterprises
From this we can see that the business processes of an enterprise are complex and involve many links, so the database of the enterprise management system changes frequently.
3. Data processing in enterprise management systems
Current financial management systems basically adopt a client/server processing model. The back-end databases are generally MS-SQL, ORACLE, or SYBASE. Personnel, financial, logistics, and other data are stored in the back-end database, and the various business processes insert into and modify that database. Because of the special nature of enterprise data, any problem in the core database causes immeasurable losses to the enterprise. So, how to...
The current financial management system basically adopts the frontend and backend processing mode. Background databases generally use MS-SQL, ORACLE, and SYBASE. Personnel management, financial management, logistics management, and other data are stored in the background database, and various business processes add and modify the background database. Because of the special nature of enterprise data, once a problem occurs in the core database, it will cause immeasurable losses to the enterprise. So, how to... the remaining full text>