Import, Export, and Load of DB2 Data (Export, Import, Load)


Data movement in DB2 includes:
1. Data Import (Import)
2. Data Export (Export)
3. Data Load (Load)

Both Import and Load use DB2 commands to read data from files of a certain format and save it into tables in the database. Export saves the data in a DB2 table to a file of a certain format.

The role of data movement:

To transfer data between different database management systems, data movement through files is usually the most practical method: because virtually every database management system supports common file formats, these formats act as a universal interface that makes it easy to move data between different systems.

Among the three commands, Export is the simplest, because data flows from a table to a file and there is usually no erroneous or illegal data to deal with. Data movement works with files in one of four formats:

1. ASC -- a non-delimited ASCII file, i.e. a plain ASCII character stream. Rows in the data stream are separated by row delimiters, and each column within a row is defined by fixed start and end positions. For example:

10 Head Office 160 Corporate New York
15 New England 50 Eastern Boston
20 Mid Atlantic 10 Eastern Washington
38 South Atlantic 30 Eastern Atlantic
42 Great Lakes 100 Midwest Chicago
51 Plains 140 Midwest Dallas
66 Pacific 270 Western San Francisco
84 Mountain 290 Western Denver

2. DEL -- a delimited ASCII file, also an ASCII character stream. Rows in the data stream are separated by row delimiters, and column values within a row are separated by column delimiters. File type modifiers can be used to override the defaults for these delimiters. For example:

10, "Head Office", 160, "Corporate", "New York"
15, "New England", 50, "Eastern", "Boston"
20, "Mid Atlantic", 10, "Eastern", "Washington"
38, "South Atlantic", 30, "Eastern", "Atlantic"
42, "Great Lakes", 100, "Midwest", "Chicago"
51, "Plains", 140, "Midwest", "Dallas"
66, "Pacific", 270, "Western", "San Francisco"
84, "Mountain", 290, "Western", "Denver"

3. WSF -- worksheet format, used for data exchange with the Lotus family of products.

4. PC/IXF -- an adapted version of the Integrated Exchange Format (IXF) data exchange architecture. It consists of variable-length records: a header record, a table record, a column descriptor record for each column in the table, and one or more data records for each row in the table. A PC/IXF file record consists of fields that contain character data.

 


Part 1: Data Export (Export)
Example 1: export all data in the Org table to the file C:\org.txt.
Export to c:\org.txt of del select * from org

Here, "of del" indicates the type of the exported file; in this example it is a delimited text file. The "select * from org" that follows is an SQL statement whose result set is the data to be exported.

Example 2: change the control characters of a DEL format file
Export to c:\staff.txt of del modified by coldel$ chardel'' decplusblank select * from staff
In this example, the modified by clause controls the various symbols. coldel specifies the delimiter between fields: a comma by default, here changed to "$". chardel specifies the character used to enclose string fields: a pair of double quotes by default, here a pair of single quotes. decplusblank replaces the leading plus sign with a space for positive DECIMAL values: by default, decimal data is prefixed with a plus or minus sign.
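With these modifiers, a row of c:\staff.txt would look roughly like this (a sketch only: the values shown assume the first row of the STAFF table in the standard SAMPLE database, and exact formatting may vary by DB2 version):

```
10$'Sanders'$20$'Mgr'$7$ 18357.50$
```

Note the "$" field delimiter, the single-quoted strings, the space where the plus sign of the positive DECIMAL salary would otherwise appear, and the empty trailing field for the NULL commission.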

Example 3: export data to a file in ASC format
The Export command does not support files in ASC format. To export in such a fixed-width format, the programmer must convert all data types to fixed-length strings and concatenate the fields to be exported into a single field.
For example, create table n with the following structure:
Create table n (a int, b date, c time, d varchar(5), e char(4), f double)
Insert two rows of data:
Insert into n values (15, '2004-10-21', '23:12:23', 'abc', 'hh', 35.2)
Insert into n values (5, '2004-1-21', '3:12:23', 'bc', 'hhh', 35.672)
To export the two rows to a file in fixed-width format, perform the following operation:
Export to c:\test.txt of del select char(a) || char(b) || char(c) || char(d, 5) || e || char(f) as tmp from n
The exported result is very close to an ASC format file, except that each row is enclosed in a pair of double quotation marks. You can delete the quotes with a text tool (such as WordPad or Notepad), or simply leave them and strip them when controlling the format during a later import.
The format in the file is:

"15 2004-10-2123.12.23abc hh 3.52E1"
"5 2004-01-2103.12.23bc hhh 3.5672E1"

Example 4: export large object (LOB) data
Export to d:\myfile.del of del lobs to d:\lob\ lobfile lobs modified by lobsinfile select * from emp_photo
This command exports the data in the emp_photo table to the file d:\myfile.del. The result is:
"000130", "bitmap", "lobs.001.0.43690 /"
"000130", "gif", "lobs.001.43790.29540 /"
"000130", "xwd", "lobs.001.73230.45800 /"
"000140", "bitmap", "lobs.001.119030.71798 /"
"000140", "gif", "lobs.001.190828.29143 /"
"000140", "xwd", "lobs.001.219971.73908 /"
"000150", "bitmap", "lobs.001.293879.73438 /"
"000150", "gif", "lobs.001.367317.000095 /"
"000150", "xwd", "lobs.001.407112.75547 /"
"000190", "bitmap", "lobs.001.482659.63542 /"
"000190", "gif", "lobs.001.546201.36088 /"
"000190", "xwd", "lobs.001.582289.65650 /"
The third field is of BLOB type. Only a marker is saved in the file, which works like a pointer; the real LOB data is stored in files such as d:\lob\lobs.001, lobs.002, and so on. In the command, the path after "lobs to" specifies where the large object data is stored (note that this path must already exist in advance, otherwise an error is reported). The name after "lobfile" specifies the files in which the large object data is stored; do not specify an extension, because DB2 automatically appends the .001, .002, ... extensions according to the data volume. Do not forget to add the modified by lobsinfile clause.


Example 5: save export messages in a message file.
Export to d:\awards.ixf of ixf messages d:\msgs.txt select * from staff where dept = 20
In this example, the rows of the staff table where dept = 20 are exported to the file d:\awards.ixf, and all messages generated by the export (success, warning, or failure) are stored in the file d:\msgs.txt. The administrator can then locate problems by examining the message file.

Example 6: rename the exported data columns.
Export to d:\awards.ixf of ixf method n (c1, c2, c3, c4, c5, c6, c7) messages d:\msgs.txt select * from staff where dept = 20
By default, each exported column is automatically named after the corresponding field in the table. The method n clause can be used to rename each column. Note that this clause is valid only for ixf and wsf format files and cannot be used with text files.

Part 2: Data Import (Import)

Example 7: import the data in the org.txt file in the root of drive C into the org table
Import from c:\org.txt of del insert into org

The formats of the import and export commands largely correspond: import corresponds to export, from corresponds to to, and the file name and file format have the same meanings. However, the import command supports files in ASC format, while the Export command does not. In addition, the Export command ends with an SQL statement that selects the data to be exported, whereas the import command ends not with an SQL statement but with the insert method and the name of the target table.

Example 8: import data from an ASC file
Import from c:\org2.txt of asc method l (1 5, 6 19, 20 25, 26 37, 38 50) insert into org
The method l clause specifies the start and end position of each field in the text file. The start and end positions of a field are separated by a space, and the position pairs for different fields are separated by commas.
In addition to the L method, there are the N and P methods, described below.

Example 9: use the N method to import data and create a new table.
First, export a sample file:
Export to d:\org.ixf of ixf method n (a, b, c, d, e) select * from org
In this way, the org.ixf file contains five columns of data, named a, b, c, d, and e.
Then, import data from the file into a new table:
Import from d:\org.ixf of ixf method n (d, e, b) replace_create into orgtest
This command selects three columns from the file and imports them into the table; their order may differ from the original column order in the file. The replace_create method is explained below.

Insert methods:
INSERT method -- appends the new data after the existing data in the table.
INSERT_UPDATE method -- can only be used for tables with a primary key. If an inserted row does not conflict with an existing primary key, it is inserted directly; if there is a primary key conflict, the new data replaces the original row.
REPLACE method -- first deletes all existing data in the table, then inserts the data into the empty table.
REPLACE_CREATE method -- if the table exists, the data in the table is deleted first and the new data is then inserted into the empty table; if the table does not exist, the table is created based on the fields in the file and the data is then inserted into it. This method can only insert data from files in IXF format.
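As a sketch of the INSERT_UPDATE method (this assumes the d:\org.ixf file from Example 9 and that the target table has a primary key on its first column, which is not part of the original example):

```sql
Import from d:\org.ixf of ixf insert_update into org
```

Rows whose primary key matches an existing row replace that row; all other rows are appended.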

Example 10: use the P method to import data
Import from d:\org.ixf of ixf method p (4, 5, 2) replace into orgtest
The result of this example is similar to Example 9. Simply replace the N method with the P method, and list column position numbers after it instead of column names. In addition, this example uses the replace method to insert data, which deletes all existing data in the table and then inserts the data into the empty table.

Example 11: import null values
For files in IXF format, importing null values is very convenient because the null values are already recorded in the file. For ASC files, however, it is harder, because DB2 would insert spaces rather than null values. DB2 therefore provides a controlling clause: NULL INDICATORS.

Import from c:\org2.txt of asc modified by nullindchar=# method l (1 5, 6 19, 20 25, 26 37, 38 50) null indicators (0, 0, 0, 0, 38) replace into org

In this example, the null indicators clause is followed by a list: a 0 means the corresponding field has no null indicator, so the first four fields contain no null values, while the fifth field's null indicator is in column 38. The modified by nullindchar=# clause means that if the # character is found there, the fifth field's value is null.

Part 3: Data Load (Load)

The syntax of the Load command is similar to that of the import command; the command keyword is Load, but it accepts more parameters than import. For detailed usage, refer to the DB2 documentation.

Like importing, loading moves data from an input file into a target table. The differences between the two are explained gradually in the examples.

The target table must exist before loading.

Load performs better than import; the reasons are explained in the examples below.

Loading operations are not recorded in the logs, so you cannot use log files to roll them back.

Loading is divided into four phases:
1. Load phase
In this phase, data is stored into the table, and index keys are collected and sorted. The DBA can specify how often a consistency point is generated during loading.

A consistency point is a checkpoint for the load utility: if the load is interrupted during execution, it can resume from the last consistency point.
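The consistency-point interval is set with the SAVECOUNT option; a sketch (the row count is illustrative, and DB2 rounds it internally to a multiple of the extent size):

```sql
Load from d:\STAFF.txt of del savecount 1000 insert into STAFF1
```

With this option, a consistency point is established roughly every 1000 rows, so an interrupted load can later resume from the last point.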

2. Build phase
In the build phase, indexes are created based on the index key information collected during the load phase. If an error occurs during the build phase, the load utility restarts from the beginning of the build phase.

3. Delete phase
In the delete phase, all rows that violate unique or primary key constraints are deleted and copied to an exception table (if the corresponding option is specified in the statement). When an input row is rejected, a message is generated in the message file.

4. Index copy phase
If a system temporary tablespace was specified for index creation during the load operation and the read access option was selected, the index data is copied from the system temporary tablespace to the original tablespace.

All four phases are part of one operation; the load is complete only after all four phases finish. Messages are generated in each phase, and if an error occurs in some phase, these messages help the DBA analyze and solve the problem.

By contrast, the import operation checks constraints and writes a log record for every single row it inserts.

Next, let's look at some functions specific to the LOAD command; capabilities it shares with the IMPORT command are not described again.

Example 12: load from a file of type cursor
Define a cursor:
Declare mycur cursor for select * from org
Create a new table whose structure is compatible with the cursor:
Create table org2 like org
Load from the cursor:
Load from mycur of cursor insert into org2

In addition to loading from a cursor, load can also read data from files, pipes, and devices; the import command can only read from files.

 

Example 13: exception tables
A user-defined exception table can be used to store rows that violate unique and primary key constraints. If no exception table is specified during loading, rows that violate a unique constraint are discarded, with no further opportunity to recover or correct them.
Experiment with the STAFF table in the SAMPLE database
1. Create a STAFF1 table with the same structure as the STAFF table
Create table STAFF1 LIKE STAFF

2. Insert part of the data from the STAFF table into STAFF1.
Insert into STAFF1 select * from STAFF where id <= 160

3. Create another STAFFEXP table with the same structure as STAFF1 as the exception table.
Create table staffexp like STAFF1

4. Add a column to the exception table. The structure of an exception table is the same as that of the base table, plus one or two extra columns (the column names are arbitrary): the first extra column is of TIMESTAMP type and records when the exception row was inserted, and the second is a large character type (at least 32 KB) that stores the specific constraint information that caused the row to be rejected. In this example, only the timestamp column is added.
ALTER TABLE STAFFEXP ADD COLUMN TIME TIMESTAMP

5. Create a unique index for the STAFF1 table
Create unique index idxstaff on STAFF1 (ID)

6. Run the Export command to create a text file.
Export to d:\STAFF.txt of del select * from STAFF

7. Run the Load command to load the data into the STAFF1 table.
Load from d:\STAFF.txt of del insert into STAFF1 for exception STAFFEXP

Because STAFF1 has a unique index, some rows violate this constraint and cannot be inserted into the STAFF1 table; those records are inserted into the STAFFEXP table instead.

Note that the exception table must be defined first. The exception table cannot be automatically generated by the load command. If the specified exception table cannot be found, an error is returned.

 

Example 14: DUMP files
Rows with format errors are rejected. By specifying the DUMPFILE file type modifier, these rejected records can be stored separately in a specified file.
Experiment with the STAFF table in the SAMPLE database
1. Create a STAFF1 table with the same structure as the STAFF table
Create table STAFF1 LIKE STAFF

2. Insert part of the data from the STAFF table into STAFF1.
Insert into STAFF1 select * from STAFF where id <= 160

3. Create another STAFFEXP table with the same structure as STAFF1 as the exception table.
Create table staffexp like STAFF1

4. Add a column to the exception table
ALTER TABLE STAFFEXP ADD COLUMN TIME TIMESTAMP

5. Create a unique index for the STAFF1 table
Create unique index idxstaff on STAFF1 (ID)

6. Run the Export command to create a text file.
Export to d:\STAFF.txt of del select * from STAFF
Open the STAFF.txt file on drive D and replace the row whose first column equals 320 with: "abcf", "aaa", "sdfg"

7. Run the Load command to load the data into the STAFF1 table.
Load from d:\STAFF.txt of del modified by dumpfile=d:\dump insert into STAFF1 for exception STAFFEXP

The message report from the load contains the following information:
SQL3118W The field value in row "32" and column "1" cannot be converted to a SMALLINT value, but the target column is not nullable. The row was not loaded.
SQL3185W The previous error occurred while processing data from row "32" of the input file.

Open the dump.000 file on drive D and you will see the line of data that caused the exception: "abcf", "aaa", "sdfg"

This example shows that if a row of data has a format error, it is rejected during loading and the record is placed in the DUMP file; if the data format is correct but the row does not satisfy the table's constraints, the record is placed in the exception table.

 

Example 15: limit the number of loaded rows
You can use the ROWCOUNT option to specify how many records to load, counted from the beginning of the file.
Load from d:\STAFF.txt of del rowcount 3 insert into STAFF1

 

Example 16: force the load operation to fail on error
In some cases, a load should succeed only if all the data in the file enters the target table; if even one record fails, the whole operation must fail. In this case, you can use the WARNINGCOUNT option.

Open the STAFF.txt file on drive D and replace the row whose first column equals 320 with: "abcf", "aaa", "sdfg"

Load from d:\STAFF.txt of del warningcount 1 insert into STAFF1

The run results contain the following warnings:
SQL3118W The field value in row "32" and column "1" cannot be converted to a SMALLINT value, but the target column is not nullable. The row was not loaded.
SQL3185W The previous error occurred while processing data from row "32" of the input file.
SQL3502N The utility encountered "1" warnings, which exceeds the maximum number of allowed warnings.

At this point, the table STAFF1 cannot be operated on. For example:
SELECT * FROM STAFF1
will return:
ID NAME DEPT JOB YEARS SALARY COMM
--------------------------------------------------
SQL0668N Operation not allowed for reason code "3" on table "USER.STAFF1".
SQLSTATE=57016

The reason is that the table is in Load Pending state: a previous LOAD attempt on this table failed, and access to the table is not allowed until the LOAD operation is restarted or terminated.

Solution: restart or terminate the previously failed LOAD operation on this table by issuing LOAD with the RESTART or TERMINATE option, respectively.

A LOAD command with TERMINATE terminates the loading process and restores the target table to a normal, usable state:
Load from d:\STAFF.txt of del terminate into STAFF1

A LOAD command with RESTART can be used once the source file has been corrected, to restart the loading process:
Load from d:\STAFF.txt of del restart into STAFF1
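To check whether a table is still in Load Pending state, or to monitor a load in progress, DB2 also provides the LOAD QUERY command; a sketch:

```sql
Load query table STAFF1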

 

Example 17: suppress warnings
You can use the NOROWWARNINGS file type modifier to disable the generation of warning messages. Use this option when a load may produce a large amount of warning information that is of no interest; it can greatly improve loading efficiency.

Open the STAFF.txt file on drive D and replace the row whose first column equals 320 with: "abcf", "aaa", "sdfg"

Load from d:\STAFF.txt of del modified by norowwarnings insert into STAFF1

In the run results, row 32 fails and cannot be loaded, but no warning is generated.

 

Example 18: generate statistics
The STATISTICS option can be used to generate statistics during the loading process. The optimizer uses these statistics to determine the most efficient way to execute SQL statements.
You can generate statistical data of different levels for tables and indexes:

① Produce the most detailed statistics on tables and indexes:
Load from d:\STAFF.txt of del replace into STAFF1 statistics yes with distribution and detailed indexes all

② Generate basic statistics on tables and indexes:
Load from d:\STAFF.txt of del replace into STAFF1 statistics yes and indexes all

For other combinations, see the DB2 documentation.

Note: the STATISTICS option is compatible only with REPLACE, not with INSERT.

In addition, the statistics are collected internally by the STATISTICS option, so no direct result is visible. To view the results, you need to query the system catalog tables.
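For example, after a load with STATISTICS YES, the collected table statistics can be inspected through the catalog (a sketch; SYSSTAT.TABLES is DB2's updatable statistics view, and CARD is the estimated row count):

```sql
Select card, npages from sysstat.tables where tabname = 'STAFF1'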

 

Example 19: release the Check Pending state
1. Connect to the SAMPLE database:
Connect to sample

2. Create a table with the same structure as the staff table:
Create table STAFF1 LIKE STAFF

3. Add a check constraint to the table:
Alter table staff1 add constraint chk check (dept <100)

4. Open the STAFF.txt file on drive D and change the third column of the last row of data to 150, so that this row no longer satisfies the check constraint added in step 3.

Then run the Load command to load data from the file into the staff1 table:
Load from d:\STAFF.txt of del insert into STAFF1

5. Run the query command:
Select * from staff1
The following error message is displayed:
SQL0668N Operation not allowed for reason code "1" on table "USER.STAFF1".
SQLSTATE=57016
The reason is that the check constraint was violated while loading the data, and the table has been placed in Check Pending state.

6. To release the table's check pending state, use:
Set integrity for staff1 check immediate unchecked
Run the query command again:
Select * from staff1
The table can then be used normally, but the data that violates the check constraint also remains in it.
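Alternatively, instead of bypassing the check, the constraints can actually be verified, with violating rows moved into an exception table; a sketch (this assumes an exception table staffexp created for staff1 as in Example 13, which is not part of this example's setup):

```sql
Set integrity for staff1 immediate checked for exception in staff1 use staffexp
```

After this statement, staff1 contains only rows that satisfy the check constraint, and the rejected rows sit in staffexp.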

Example 20: performance factors
When loading data from a file into a table with a very large data volume, the load command shows a clear advantage, because unlike the import command it does not insert one row at a time and check the constraints for every row. Instead, load reads data from the input file, builds pages, and writes those pages directly into the database; no per-row constraint checking is performed while loading. Moreover, the load command does not write log records. All of these factors make loading more efficient than importing.

In addition, the load command has several options that control performance factors:
1. COPY YES/NO and NONRECOVERABLE
① NONRECOVERABLE: specifies that the load operation is not recoverable and cannot be restored by a subsequent roll-forward operation; roll-forward recovery skips the transaction and marks the table being loaded as "invalid".

② COPY NO (the default): in this case, if archive logging is enabled for the database, the tablespace containing the table is placed in Backup Pending state after the load completes, and becomes writable only after a database or tablespace backup has been taken. The reason is that the changes made by the load operation are not logged, so a backup after the load completes is necessary to allow recovery from a failure.

③ COPY YES: in this case, if archive logging is enabled for the database, the changes made by the load operation are saved to tape, a directory, or a TSM server, and the tablespace is not left in Backup Pending state.

2. FASTPARSE
This file type modifier reduces the amount of data checking. It may be used only when the data is known to be correct, and applies especially to files of the DEL and ASC types.

3. ANYORDER
If the SAVECOUNT option is not used, this parameter allows loading without preserving the order of the data in the input file. On an SMP (symmetric multiprocessor) system with the CPU_PARALLELISM option greater than 1, this parameter improves loading performance.

4. DATA BUFFER
This parameter specifies the number of 4 KB memory pages allocated from the utility heap to use as the internal buffer for the load. Specifying a large buffer helps improve loading performance.

5. CPU_PARALLELISM
This option can be used only on SMP systems and indicates how many processes or threads are used to parse, convert, and format data.

6. DISK_PARALLELISM
This option specifies the number of processes or threads that write data to the disk.
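Putting several of these options together, a performance-oriented load might look like this (a sketch only; the clause order follows the LOAD syntax diagram, and the numeric values are illustrative and should be tuned for your system):

```sql
Load from d:\STAFF.txt of del modified by fastparse anyorder insert into STAFF1 nonrecoverable data buffer 4000 cpu_parallelism 4 disk_parallelism 2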
