Oracle SQL*Loader Architecture


1. SQL*Loader architecture

SQL*Loader drives the entire load from an input control file, with one or more data files supplying the raw data. Its components are:

Input datafiles --> the raw data to be loaded into the database
Loader control file --> tells SQL*Loader where to find the data and how to interpret it
Log file --> log information generated during the load
Bad files --> records rejected by SQL*Loader as badly formatted, or rejected by Oracle (for example, for violating a constraint)
Discard files --> physical records that do not meet the selection criteria in the control file

These five parts together complete the import of data into the database; some of them can be omitted.

2. Role and composition of the control file

The control file is a text file. The information recorded in it tells SQL*Loader where to find the data, how to interpret it, and where to insert it. The control file is divided into three parts:
The first part holds session-wide information, such as global options, row information, and whether to skip particular records; its INFILE clause specifies where to find the source data.
The second part consists of one or more INTO TABLE blocks; each block contains information about the target table, such as the table name and column names.
The third part is optional; if present, it contains the input data itself.

The control file syntax is case-insensitive. "--" at the beginning of a line marks a comment line, but comments must not appear in the third (data) part of the control file. The keywords CONSTANT and ZONE have special meaning to SQL*Loader and should not be used as column names.

3. Data files

There may be multiple data files, and each must be specified in the control file. From SQL*Loader's point of view, the data in a data file is organized as records, and a record can be in one of three formats: fixed record format, variable record format, or stream record format.
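The three-part control file layout described above can be sketched as follows. This is a minimal illustration, not taken from the original article: the OPTIONS values, the table name emp, and the column names are assumptions.

```sql
-- Part 1: session-wide information (global options, data source)
OPTIONS (SKIP=1, ERRORS=10)   -- skip one header record, allow 10 errors
LOAD DATA
INFILE *                      -- the data follows BEGINDATA in this file

-- Part 2: one or more INTO TABLE blocks
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, hiredate DATE "yyyy/mm/dd")

-- Part 3 (optional): the input data itself
BEGINDATA
7369,SMITH,1980/12/17
7499,ALLEN,1981/02/20
```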
If no record format is specified with the INFILE parameter in the control file, the stream record format is assumed; for example, INFILE * implies the stream record format. Below are examples of the different record formats.

a. Fixed record format: INFILE datafile_name "fix n"

load data
infile 'example.dat'  "fix 11"  -- every record is exactly 11 bytes long
into table example
fields terminated by ',' optionally enclosed by '"'
(col1, col2)

example.dat:
001,   cd, 0002,fghi,
00003,lmn,
1, "pqrs,"
0005,uvwx,

The first record contains cd and the second contains fghi. Because every record is exactly 11 bytes, a record may contain a linefeed as a data byte, as in the record beginning with 00003.

b. Variable record format: INFILE "datafile_name" "var n"

load data
infile 'example.dat'  "var 3"  -- a 3-byte prefix gives the length of each record
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:
009hello,cd,010world,im,
012my,name is,

009 indicates that the first record is 9 bytes long, 010 that the second record is 10 bytes long (the linefeed counts toward the length), and 012 that the third record is 12 bytes long.

c. Stream record format: INFILE datafile_name ["str terminator_string"]

load data
infile 'example.dat'  "str '|\n'"  -- a record ends at '|' followed by a linefeed
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:
hello,world,|
james,bond,|

The concept of a logical record: normally one physical record in the data file is one logical record, that is, one record in the data file corresponds to one row in the database.
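To make the "var n" layout concrete, here is a small Python sketch, not part of SQL*Loader itself, that splits such a length-prefixed stream into records the same way the loader would:

```python
def parse_var_records(data: str, prefix_len: int = 3):
    """Split a SQL*Loader-style variable-format stream, where each record
    is preceded by a fixed-width decimal length field (the "var n" clause;
    here n = 3, matching the "var 3" example above)."""
    records = []
    i = 0
    while i < len(data):
        length = int(data[i:i + prefix_len])  # e.g. "009" -> 9 bytes
        i += prefix_len
        records.append(data[i:i + length])
        i += length
    return records

# The example.dat stream from the variable-format example above;
# note that linefeeds count toward a record's declared length.
stream = "009hello,cd,010world,im,\n012my,name is,\n"
print(parse_var_records(stream))
# -> ['hello,cd,', 'world,im,\n', 'my,name is,\n']
```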
SQL*Loader extends this so that multiple physical records can be combined into one logical record, which then produces a single row in the database. Two combination policies are supported: a fixed number of physical records form one logical record, or physical records that meet a specific condition are combined into one logical record.

4. Data file loading methods

1. Conventional path load. The source data is processed with generated SQL INSERT statements, and the data is saved with COMMIT. Each load generates transactions; when inserting, Oracle searches for available data blocks and then fills them. To insert into a single partition of a partitioned table, use:

insert into t partition (P) values ...

On a multi-CPU system, multiple load sessions can run concurrently by splitting the data file into several pieces.

2. Direct path load. The data is written directly into the Oracle data files, and saved by moving the segment's high-water mark; the load can also run in parallel. During a direct path load, data conversion occurs on the client rather than on the server, which means the NLS parameters in the server parameter file are not used; instead, set the NLS parameters in the control file or set the appropriate environment variable. For example:

hiredate date 'yyyymmdd'            -- specify the format for HIREDATE in the control file
% export NLS_DATE_FORMAT='yyyymmdd' -- or set NLS_DATE_FORMAT in the environment

A direct path load into a single partition or subpartition leaves the other partitions available for DML during the load:

load data ... into table t partition (P) ...
load data ... into table t subpartition (SP) ...

When using direct path loading, you must specify DIRECT=true. Two types of concurrency are supported:
1. loading into different partitions of a partitioned table, or into different tables, at the same time;
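The two combination policies correspond to the CONCATENATE and CONTINUEIF clauses of the control file. The sketch below assumes an illustrative table named example with four columns:

```sql
-- Policy 1: a fixed number of physical records per logical record.
-- CONCATENATE 2 joins every 2 physical records into one logical record.
LOAD DATA
INFILE 'example.dat'
CONCATENATE 2
INTO TABLE example
FIELDS TERMINATED BY ','
(col1, col2, col3, col4)

-- Policy 2: combine records that meet a condition. Replacing the
-- CONCATENATE line with the following continues the logical record
-- whenever a physical record's last nonblank character is '&':
-- CONTINUEIF LAST = '&'
```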
2. splitting the work among multiple server processes that load into a single partition or a single table; each process loads into temporary segments, which are finally combined and saved into the partition or table.

3. Comparison of the two methods (below, the conventional path is "the former" and the direct path is "the latter"):
a. the former saves data with COMMIT; the latter saves data by updating the high-water mark.
b. the former always generates redo records; the latter generates redo only under specific conditions.
c. the former enforces all constraints; the latter enforces only PRIMARY KEY, UNIQUE, and NOT NULL constraints.
d. the former fires INSERT triggers; the latter does not.
e. the former supports clustered tables; the latter does not.
f. while the former is inserting data, other users can run DML against the table; with the latter they cannot.

Using SQL*Loader

1. Location of the SQL*Loader executable (sqlldr):

[oracle@vmoel5u4 ~]$ ls -lh $ORACLE_HOME/bin/sqlldr
-rwxr-x--x 1 oracle oinstall 634K Mar 24 2012 /u01/app/oracle/product/10.2.0/db_1/bin/sqlldr

2. View the sqlldr help information:

[oracle@oradb ~]$ sqlldr
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Sep 23 10:38:31 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Usage: SQLLDR keyword=value [,keyword=value,...]
Valid Keywords:
    userid -- ORACLE username/password
    control -- control file name
    log -- log file name
    bad -- bad file name
    data -- data file name
    discard -- discard file name
    discardmax -- number of discards to allow (Default all)
    skip -- number of logical records to skip (Default 0)
    load -- number of logical records to load (Default all)
    errors -- number of errors to allow (Default 50)
    rows -- number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
    bindsize -- size of conventional path bind array in bytes (Default 256000)
    silent -- suppress messages during run (header, feedback, errors, discards, partitions)
    direct -- use direct path (Default FALSE)
    parfile -- parameter file: name of file that contains parameter specifications
    parallel -- do parallel load (Default FALSE)
    file -- file to allocate extents from
    skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
    skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
    commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
    readsize -- size of read buffer (Default 1048576)
    external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE (Default NOT_USED)
    columnarrayrows -- number of rows for direct path column array (Default 5000)
    streamsize -- size of direct path stream buffer in bytes (Default 256000)
    multithreading -- use multithreading in direct path
    resumable -- enable or disable resumable for current session (Default FALSE)
    resumable_name -- text string to help identify resumable statement
    resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
    date_cache -- size (in entries) of date conversion cache (Default 1000)

PLEASE NOTE: Command-line parameters may be specified either by position or by keywords.
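Most of these keywords can also be collected in a parameter file and passed with parfile=, for example sqlldr parfile=car.par. A sketch of such a file follows; the file name and values are illustrative assumptions, not from the original article:

```text
userid=hr/hr
control=car.ctl1
log=car.log
bad=car.bad
errors=10
direct=false
```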
An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.

3. Loading with the data and control file combined:

[oracle@vmoel5u4 ~]$ vi car.ctl1

LOAD DATA
INFILE *
into table car
FIELDS terminated by ','
(maker, model, no_cyl, first_built_date date "yyyy/mm/dd", engine, hp, price)
BEGINDATA
Talbot, 8/18, 02/03/8, 295.00, ohv, 10/23
Talbot, 03/04, 8.9, 375.00/12/30, ohv, 01/04, 13.4
Talbot, 550.00, 14/40, /, ohv,
Sunbeam, 06/23/13.9, ohv, 895.00, 12/30
Sunbeam, 02/28, 11.5, 570.00, 20/60, 02/24, 20.9/950.00, ohv
Sunbeam, Twin Cam, 6, 1926/03/23, ohv, 20.9, 1125.00
Sunbeam, 20.9, 1927/03/23, ohv, 750.00, 16.9
Sunbeam, 550.00, 1927/09/10, ohv,
Peugeot, 172, 4, 1928/09/28, sv, 6.4, 165.00
Austin, 7, 4, 1922/01/22, sv, 7.2, 225.00
Austin, 12.8, 1922/01/01, sv, 550.00, 22.4
Austin, 1916/01/04, sv, 616.00
Lanchester, 38.4, 1875.00, 1919/01/08, ohv, 20.6, 950.00
Lanchester, 30/98, 1924/01/26, ohv, 01/08, 23.8
Vauxhall, 1475.00, sv,
Vauxhall, 23/60, 01/27, 22.4/1300.00, sv,

SQL> conn hr/hr
Connected.
SQL> create table car (maker varchar2(20), model varchar2(20), no_cyl varchar2(20), first_built_date date, engine varchar2(20), hp number, price number(10,2));
Table created.

[oracle@vmoel5u4 ~]$ sqlldr hr/hr control=car.ctl1
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Mar 28 22:25:41 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 17

[oracle@vmoel5u4 ~]$ sqlplus hr/hr
SQL> select count(*) from car;
  COUNT(*)
----------
        17
4. Loading with the data file and control file separated:

[oracle@vmoel5u4 ~]$ cat car.txt
Talbot, 8/18, 02/03, 8, 295.00/10/23, ohv, 03/04
Talbot, 8.9, 375.00, 12/30/01/04, ohv, 13.4,
Talbot, /, ohv, 550.00
Sunbeam, 14/40, 06/23/13.9, ohv, 895.00, 12/30
Sunbeam, 02/28, 11.5, 570.00/20/60, ohv, 02/24, 20.9, /, ohv, 950.00
Sunbeam, Twin Cam, 6, 1926/03/23, ohv, 20.9, 1125.00
Sunbeam, 20.9, 1927/03/23, ohv, 750.00, 16.9
Sunbeam, 550.00, 1927/09/10, ohv,
Peugeot, 172, 4, 1928/09/28, sv, 6.4, 165.00
Austin, 7.2, 1922/01/22, sv, 225.00, 12.8
Austin, 550.00, 1922/01/01, sv, 22.4,
Austin, 1916/01/04, sv, 616.00
Lanchester, 38.4, 1875.00, 20.6, 950.00, 30/98, 01/08, 23.8, 1475.00, 23/60, 01/27/22.4, sv, 1300.00,

[oracle@vmoel5u4 ~]$ cat car.ctl2

LOAD DATA
infile '/home/oracle/car.txt'
append
into table hr.car
fields terminated by ","
(maker, model, no_cyl, first_built_date date "yyyy/mm/dd", engine, hp, price)

[oracle@vmoel5u4 ~]$ sqlldr hr/hr control=car.ctl2
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Mar 28 22:31:40 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 17

Imported successfully!

SQL> conn hr/hr
Connected.
SQL> select count(*) from car;
  COUNT(*)
----------
        34

You can see that the APPEND load added another 17 records.
