SQL Server BULK INSERT Command Details

Source: Internet
Author: User


BULK INSERT copies a data file into a database table or view using the format specified by the user.

Syntax:

BULK INSERT [ [ 'database_name'.] [ 'owner' ].] { 'table_name' FROM 'data_file' }
    [ WITH
        (
            [ BATCHSIZE [ = batch_size ] ]
            [ [ , ] CHECK_CONSTRAINTS ]
            [ [ , ] CODEPAGE [ = 'ACP' | 'OEM' | 'RAW' | 'code_page' ] ]
            [ [ , ] DATAFILETYPE [ = 'char' | 'native' | 'widechar' | 'widenative' ] ]
            [ [ , ] FIELDTERMINATOR [ = 'field_terminator' ] ]
            [ [ , ] FIRSTROW [ = first_row ] ]
            [ [ , ] FIRE_TRIGGERS ]
            [ [ , ] FORMATFILE = 'format_file_path' ]
            [ [ , ] KEEPIDENTITY ]
            [ [ , ] KEEPNULLS ]
            [ [ , ] KILOBYTES_PER_BATCH [ = kilobytes_per_batch ] ]
            [ [ , ] LASTROW [ = last_row ] ]
            [ [ , ] MAXERRORS [ = max_errors ] ]
            [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
            [ [ , ] ROWS_PER_BATCH [ = rows_per_batch ] ]
            [ [ , ] ROWTERMINATOR [ = 'row_terminator' ] ]
            [ [ , ] TABLOCK ]
        )
    ]

Parameters:
'database_name'
Is the name of the database that contains the specified table or view. If not specified, the current database is assumed.

'owner'
Is the name of the owner of the table or view. owner is optional if the user performing the bulk copy operation owns the specified table or view. If owner is not specified and the user performing the operation does not own the specified table or view, SQL Server returns an error message and cancels the bulk copy operation.

'table_name'
Is the name of the table or view into which data is bulk copied. Only views in which all columns refer to the same base table can be used. For more information about the restrictions on copying data into views, see INSERT.

'data_file'
Is the full path of the data file that contains the data to copy into the specified table or view. BULK INSERT can copy data from a disk (including a network drive, floppy disk, or hard disk). data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify its Universal Naming Convention (UNC) name.

BATCHSIZE [ = batch_size ]
Specifies the number of rows in a batch. Each batch is copied to the server as one transaction; SQL Server commits or rolls back the transaction for each batch (a rollback occurs when the batch fails). By default, all data in the specified data file is one batch.
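For example, a sketch that loads a file in batches of 10,000 rows so that a failure only loses the current batch (the table and file names here are hypothetical):

```sql
-- Commit every 10,000 rows as its own transaction; a failure rolls
-- back only the batch in which it occurred, not the whole load.
BULK INSERT Sales.dbo.OrderStaging      -- hypothetical table
FROM 'd:\loads\orders.csv'              -- hypothetical path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 10000
);
```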

CHECK_CONSTRAINTS
Specifies that any constraints on table_name are checked during the bulk copy operation. By default, constraints are ignored.

CODEPAGE [ = 'ACP' | 'OEM' | 'RAW' | 'code_page' ]
Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32.

CODEPAGE values:
ACP: char, varchar, or text columns are converted from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page.
OEM (default): char, varchar, or text columns are converted from the system OEM code page to the SQL Server code page.
RAW: No conversion from one code page to another occurs; this is the fastest option.
code_page: A specific code page number, for example 850.

DATAFILETYPE [ = { 'char' | 'native' | 'widechar' | 'widenative' } ]
Specifies that BULK INSERT performs the copy operation using the specified data file type.

DATAFILETYPE values:
char (default): Performs the bulk copy from a data file containing character data.
native: Performs the bulk copy using native (database) data types. The data file to load is created by bulk copying data from SQL Server with the bcp utility.
widechar: Performs the bulk copy from a data file containing Unicode characters.
widenative: Performs the same bulk copy as native, except that char, varchar, and text columns are stored as Unicode in the data file. The data file to load is created by bulk copying data from SQL Server with the bcp utility. This option offers higher performance than widechar and is intended for transferring data from one computer running SQL Server to another by using a data file. Use it when transferring data that contains ANSI extended characters, to take advantage of native-mode performance.

FIELDTERMINATOR [ = 'field_terminator' ]
Specifies the field terminator to use for char and widechar data files. The default is \t (tab character).

FIRSTROW [ = first_row ]
Specifies the number of the first row to copy. The default is 1, indicating the first row of the specified data file.
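A common use of FIRSTROW is skipping a header line in a delimited file; a sketch (file and table names are hypothetical):

```sql
-- The first line of the file is a column-name header, so start
-- loading at row 2.
BULK INSERT dbo.Products                -- hypothetical table
FROM 'd:\loads\products.csv'            -- hypothetical path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);
```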

FIRE_TRIGGERS
Specifies that any insert triggers defined on the destination table execute during the bulk copy operation. If FIRE_TRIGGERS is not specified, no insert triggers execute.

FORMATFILE [ = 'format_file_path' ]
Specifies the full path of a format file. A format file describes a data file that contains stored responses created with the bcp utility against the same table or view. Use a format file if the data file contains more or fewer columns than the table or view, if the columns are in a different order, if the column delimiters vary, or if there are other changes in the data format. Format files are usually created with the bcp utility and then modified in a text editor as needed. For more information, see the bcp utility.
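For instance, a format file can be generated with the bcp utility and then referenced from BULK INSERT; a sketch in which the table and paths are hypothetical:

```sql
-- Step 1 (at a command prompt, not in SQL): generate a character-mode
-- format file from the existing table:
--   bcp Sales.dbo.OrderStaging format nul -c -f d:\loads\orders.fmt -T
--
-- Step 2: load a data file whose layout the format file describes,
-- e.g. one with columns in a different order than the table.
BULK INSERT Sales.dbo.OrderStaging      -- hypothetical table
FROM 'd:\loads\orders.dat'              -- hypothetical paths
WITH (
    FORMATFILE = 'd:\loads\orders.fmt'
);
```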

KEEPIDENTITY
Specifies that the values for an identity column are present in the imported file. If KEEPIDENTITY is not specified, the identity values in the data file are ignored and SQL Server automatically assigns unique values based on the seed and increment specified when the table was created. If the data file does not contain values for the identity column in the table or view, use a format file to specify that the identity column should be skipped when importing data; SQL Server then assigns unique values to the column automatically. For more information, see DBCC CHECKIDENT.
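A sketch that preserves the identity values already stored in the file, rather than letting SQL Server regenerate them (table and path are hypothetical):

```sql
-- The file already contains OrderID values; keep them instead of
-- having SQL Server assign new identity values.
BULK INSERT dbo.Orders                  -- hypothetical table
FROM 'd:\loads\orders_with_ids.csv'     -- hypothetical path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    KEEPIDENTITY
);
```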

KEEPNULLS
Specifies that empty columns retain a null value during the bulk copy operation, rather than having default values assigned to the inserted columns.

KILOBYTES_PER_BATCH [ = kilobytes_per_batch ]
Specifies the approximate number of kilobytes (KB) of data per batch. By default, KILOBYTES_PER_BATCH is unknown.

LASTROW [ = last_row ]
Specifies the number of the last row to copy. The default is 0, indicating the last row of the specified data file.

MAXERRORS [ = max_errors ]
Specifies the maximum number of errors that can occur before the bulk copy operation is canceled. Each row that cannot be imported by the bulk copy operation is ignored and counted as one error. If max_errors is not specified, the default is 0.

ORDER ( { column [ ASC | DESC ] } [ ,...n ] )
Specifies how the data in the data file is sorted. Bulk copy performance is improved if the data being loaded is sorted according to the clustered index on the table. If the data file is sorted in a different order, or if the table has no clustered index, the ORDER clause is ignored. The column names supplied must be valid columns in the destination table. By default, the bulk insert operation assumes the data file is unordered. n is a placeholder indicating that multiple columns can be specified.
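As an illustration, suppose a table is clustered on (CustomerID, OrderDate) and the file is pre-sorted the same way; a sketch with hypothetical names:

```sql
-- The file is already sorted to match the clustered index, so the
-- server can skip re-sorting the rows during the load.
BULK INSERT dbo.Orders                  -- hypothetical table
FROM 'd:\loads\orders_sorted.csv'       -- hypothetical path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    ORDER (CustomerID ASC, OrderDate ASC)
);
```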

ROWS_PER_BATCH [ = rows_per_batch ]
Specifies the number of rows of data per batch (rows_per_batch). Used when BATCHSIZE is not specified; in that case the entire data file is sent to the server as a single transaction, and the server optimizes the bulk load according to rows_per_batch. By default, ROWS_PER_BATCH is unknown.

ROWTERMINATOR [ = 'row_terminator' ]
Specifies the row terminator to use for char and widechar data files. The default is \n (newline character).

TABLOCK
Specifies that a table-level lock is acquired for the duration of the bulk copy operation. A table can be loaded by multiple clients concurrently if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock only for the duration of the bulk copy operation reduces lock contention on the table and can significantly improve performance.

Note that the BULK INSERT statement can be executed within a user-defined transaction. If a user-defined transaction uses a BULK INSERT statement with a BATCHSIZE clause to load data into a table or view in multiple batches, rolling back the transaction rolls back all batches that were sent to SQL Server. Only members of the sysadmin and bulkadmin fixed server roles can execute BULK INSERT.
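A sketch of a load that takes a single table-level lock instead of finer-grained locks (table and path are hypothetical):

```sql
-- One table-level lock is held for the whole load, reducing lock
-- contention; on a heap with no indexes this also permits parallel
-- loads from several clients.
BULK INSERT dbo.EventLogStaging         -- hypothetical table
FROM 'd:\loads\events.dat'              -- hypothetical path
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n',
    TABLOCK
);
```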

In this example, order details are imported from the specified data file. The file uses the vertical bar (|) as the field terminator and |\n as the row terminator.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '|\n'
)

In this example, the FIRE_TRIGGERS parameter is specified.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH (
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = ':\n',
    FIRE_TRIGGERS
)

The annotated syntax (SQL Server 2005 and later) is as follows:

BULK INSERT
    [ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
    FROM 'data_file'
    [ WITH
        (
            [ [ , ] BATCHSIZE = batch_size ]            -- number of rows inserted into the table in a single transaction
            [ [ , ] CHECK_CONSTRAINTS ]                 -- check all constraints on the target table or view during the bulk import; if omitted, CHECK and FOREIGN KEY constraints are ignored and the table's constraints are afterwards marked as not trusted
            [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]   -- code page of the data in the data file
            [ [ , ] DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' } ]  -- perform the import using the specified data file type
            [ [ , ] FIELDTERMINATOR = 'field_terminator' ]  -- field delimiter
            [ [ , ] FIRSTROW = first_row ]              -- number of the first row to load; defaults to the first row of the data file
            [ [ , ] FIRE_TRIGGERS ]                     -- whether insert triggers fire
            [ [ , ] FORMATFILE = 'format_file_path' ]
            [ [ , ] KEEPIDENTITY ]                      -- keep the identity values present in the imported data file
            [ [ , ] KEEPNULLS ]                         -- keep null values for empty columns during the bulk import instead of inserting default values
            [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
            [ [ , ] LASTROW = last_row ]                -- number of the last row to load
            [ [ , ] MAXERRORS = max_errors ]            -- maximum number of data errors allowed before the bulk import is canceled
            [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]  -- how the data in the data file is sorted
            [ [ , ] ROWS_PER_BATCH = rows_per_batch ]
            [ [ , ] ROWTERMINATOR = 'row_terminator' ]  -- row delimiter
            [ [ , ] TABLOCK ]                           -- acquire a table-level lock for the duration of the bulk import
            [ [ , ] ERRORFILE = 'file_name' ]           -- file used to collect rows that are badly formatted and cannot be converted to an OLE DB rowset
        )
    ]

The following is a simple example.

BULK INSERT xsxt.dbo.tabletest
FROM 'c:\data.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
