Bulk insert command

Source: Internet
Author: User
Tags: bulk insert
BULK INSERT
Copies a data file into a database table or view in a user-specified format.

Syntax
BULK INSERT [['database_name'.]['owner'].]{'table_name' FROM 'data_file'}
[WITH
  (
    [BATCHSIZE [= batch_size]]
    [[,] CHECK_CONSTRAINTS]
    [[,] CODEPAGE [= 'ACP' | 'OEM' | 'RAW' | 'code_page']]
    [[,] DATAFILETYPE [= {'char' | 'native' | 'widechar' | 'widenative'}]]
    [[,] FIELDTERMINATOR [= 'field_terminator']]
    [[,] FIRSTROW [= first_row]]
    [[,] FIRE_TRIGGERS]
    [[,] FORMATFILE = 'format_file_path']
    [[,] KEEPIDENTITY]
    [[,] KEEPNULLS]
    [[,] KILOBYTES_PER_BATCH [= kilobytes_per_batch]]
    [[,] LASTROW [= last_row]]
    [[,] MAXERRORS [= max_errors]]
    [[,] ORDER ({column [ASC | DESC]} [,...n])]
    [[,] ROWS_PER_BATCH [= rows_per_batch]]
    [[,] ROWTERMINATOR [= 'row_terminator']]
    [[,] TABLOCK]
  )
]

Parameters
'database_name'

Is the name of the database containing the specified table or view. If not specified, the current database is used.

'owner'

Is the name of the table or view owner. owner is optional if the user performing the bulk copy operation owns the specified table or view. If owner is not specified and the user performing the bulk copy operation does not own the specified table or view, Microsoft SQL Server returns an error message and the bulk copy operation is canceled.

'table_name'

Is the name of the table or view into which the data is bulk copied. Only views in which all columns refer to the same base table can be used. For more information about the restrictions on copying data into views, see INSERT.

'data_file'

Is the full path of the data file that contains the data to copy into the specified table or view. BULK INSERT can copy data from a disk (including a network drive, floppy disk, or hard disk).

data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify a Universal Naming Convention (UNC) name.

BATCHSIZE [= batch_size]

Specifies the number of rows in a batch. Each batch is copied to the server as one transaction; SQL Server commits or rolls back each batch's transaction (the latter when a failure occurs). By default, all data in the specified data file is processed as one batch.
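As a sketch (the table and file path below are hypothetical, chosen only to illustrate the option), a batch size of 10,000 rows makes the server commit every 10,000 rows as its own transaction:

```sql
-- Hypothetical names; each group of 10000 rows is committed (or, on
-- failure, rolled back) as one server-side transaction.
BULK INSERT Sales.dbo.OrderStaging
FROM 'c:\loads\orders.dat'
WITH
(
    BATCHSIZE = 10000
)
```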

CHECK_CONSTRAINTS

Specifies that any constraints on table_name are checked during the bulk copy operation. By default, constraints are ignored.

CODEPAGE [= 'ACP' | 'OEM' | 'RAW' | 'code_page']

Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32.

CODEPAGE value   Description
ACP              Columns of char, varchar, or text data type are converted from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page.
OEM (default)    char, varchar, or text columns are converted from the system OEM code page to the SQL Server code page.
RAW              No conversion from one code page to another occurs; this is the fastest option.
code_page        A specific code page number, such as 850.
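For instance, to load a file saved by a Windows (ANSI) application, CODEPAGE = 'ACP' tells SQL Server to convert the character columns from the Windows code page. The table and path here are assumptions for illustration:

```sql
-- Hypothetical names; CODEPAGE = 'ACP' converts char/varchar/text data
-- from the ANSI/Windows code page to the SQL Server code page.
BULK INSERT Sales.dbo.CustomerStaging
FROM 'c:\loads\customers.txt'
WITH
(
    CODEPAGE = 'ACP',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
```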


DATAFILETYPE [= {'char' | 'native' | 'widechar' | 'widenative'}]

Specifies that BULK INSERT performs the copy operation using the specified data file type value.

DATAFILETYPE value   Description
char (default)       Performs the bulk copy operation from a data file containing character data.
native               Performs the bulk copy operation using native (database) data types. The data file to load is created by bulk copying data from SQL Server using the bcp utility.
widechar             Performs the bulk copy operation from a data file containing Unicode characters.
widenative           Performs the same bulk copy operation as native, except that char, varchar, and text columns are stored as Unicode in the data file. The data file to load is created by bulk copying data from SQL Server using the bcp utility. This option is a higher-performance alternative to widechar and is intended for transferring data from one computer running SQL Server to another by using a data file. Use it when transferring data that contains ANSI extended characters, to take advantage of native-mode performance.


FIELDTERMINATOR [= 'field_terminator']

Specifies the field terminator used for char and widechar data files. The default field terminator is \t (tab).

FIRSTROW [= first_row]

Specifies the row number of the first row to be copied. The default value is 1, indicating the first row of the specified data file.

FIRE_TRIGGERS

Specifies that any insert triggers defined on the destination table execute during the bulk copy operation. If FIRE_TRIGGERS is not specified, no insert triggers execute.

FORMATFILE [= 'format_file_path']

Specifies the full path of a format file. A format file describes the data file, using stored responses created with the bcp utility on the same table or view. A format file should be used in these cases:

• The data file contains more or fewer columns than the table or view.
• The columns are in a different order.
• The column delimiters vary.
• There are other changes in the data format. Format files are usually created with the bcp utility and modified with a text editor as needed. For more information, see the bcp utility.
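A minimal sketch of loading through a format file (both paths and the table name are hypothetical); the format file, typically generated with bcp and then edited, maps the file's fields onto the table's columns:

```sql
-- Hypothetical names; the .fmt file describes column order, types, and
-- terminators, so the data file need not match the table layout exactly.
BULK INSERT Sales.dbo.ProductStaging
FROM 'c:\loads\products.dat'
WITH
(
    FORMATFILE = 'c:\loads\products.fmt'
)
```
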
KEEPIDENTITY

Specifies that values for the identity column are present in the imported file. If KEEPIDENTITY is not specified, the identity values in the imported data file are ignored for this column, and SQL Server automatically assigns unique values based on the seed and increment values specified during table creation. If the data file does not contain values for the identity column in the table or view, use a format file to specify that the identity column should be skipped when importing data; SQL Server automatically assigns unique values for the column. For more information, see DBCC CHECKIDENT.
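For example (hypothetical table and file), KEEPIDENTITY keeps the identity values stored in the file instead of letting SQL Server regenerate them from the seed and increment:

```sql
-- Hypothetical names; without KEEPIDENTITY, SQL Server would assign
-- fresh identity values and the file's values would be ignored.
BULK INSERT Sales.dbo.InvoiceStaging
FROM 'c:\loads\invoices.dat'
WITH
(
    KEEPIDENTITY
)
```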

KEEPNULLS

Specifies that empty columns should retain a null value during the bulk copy operation, rather than having default values assigned to the inserted columns.

KILOBYTES_PER_BATCH [= kilobytes_per_batch]

Specifies the approximate number of kilobytes (KB) of data per batch. By default, KILOBYTES_PER_BATCH is unknown.

LASTROW [= last_row]

Specifies the row number of the last row to be copied. The default value is 0, indicating the last row in the specified data file.

MAXERRORS [= max_errors]

Specifies the maximum number of errors that can occur before the bulk copy operation is canceled. Each row that cannot be imported by the bulk copy operation is ignored and counted as one error. If max_errors is not specified, the default is 0.

ORDER ({column [ASC | DESC]} [,... n])

Specifies how the data in the data file is sorted. Bulk copy performance is improved if the data being loaded is sorted according to the clustered index on the table. If the data file is sorted in a different order, or if there is no clustered index on the table, the ORDER clause is ignored. The column names supplied must be valid columns in the destination table. By default, the bulk insert operation assumes the data file is unordered.

n

Is a placeholder indicating that multiple columns can be specified.
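If, say, the destination table had a clustered index on an OrderID column and the file were already sorted that way (both are assumptions for this sketch), the hint would look like:

```sql
-- Hypothetical names; the ORDER hint helps only if OrderID is the
-- clustered index key and the file really is sorted by it. Otherwise
-- the clause is silently ignored.
BULK INSERT Sales.dbo.OrderStaging
FROM 'c:\loads\orders.dat'
WITH
(
    ORDER (OrderID ASC)
)
```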

ROWS_PER_BATCH [= rows_per_batch]

Specifies the number of rows of data per batch (as rows_per_batch). Used when BATCHSIZE is not specified, resulting in the entire data file being sent to the server as a single transaction. The server optimizes the bulk load according to rows_per_batch. By default, ROWS_PER_BATCH is unknown.

ROWTERMINATOR [= 'row_terminator']

Specifies the row terminator used for char and widechar data files. The default is \n (newline).

TABLOCK

Specifies that a table-level lock is acquired for the duration of the bulk copy operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option "table lock on bulk load". Holding a lock only for the duration of the bulk copy operation reduces lock contention on the table, significantly improving performance.
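A sketch with hypothetical names; TABLOCK takes one table-level lock for the whole load rather than locking row by row, which is a common pattern for large loads into an unindexed staging table:

```sql
-- Hypothetical names; TABLOCK plus a batch size keeps lock overhead
-- low while still committing in chunks.
BULK INSERT Sales.dbo.EventStaging
FROM 'c:\loads\events.dat'
WITH
(
    TABLOCK,
    BATCHSIZE = 50000
)
```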

Note
The BULK INSERT statement can be executed within a user-defined transaction. Rolling back a user-defined transaction that uses a BULK INSERT statement and a BATCHSIZE clause to load data into a table or view in multiple batches rolls back all batches sent to SQL Server.
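For instance (hypothetical names), a load wrapped in a user-defined transaction can be rolled back as a whole, including every batch already sent to the server:

```sql
-- Hypothetical names; the ROLLBACK undoes all batches sent by the
-- BULK INSERT, even though BATCHSIZE split the load into several batches.
BEGIN TRANSACTION
BULK INSERT Sales.dbo.OrderStaging
FROM 'c:\loads\orders.dat'
WITH (BATCHSIZE = 10000)
-- validation queries could run here before deciding to commit
ROLLBACK TRANSACTION
```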

Permission
Only members of the sysadmin and bulkadmin fixed server roles can execute BULK INSERT.

Example
This example imports order detail information from the specified data file. The file uses the pipe (|) character as the field terminator and |\n as the row terminator.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH
(
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '|\n'
)

In this example, the FIRE_TRIGGERS parameter is specified.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH
(
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = ':\n',
    FIRE_TRIGGERS
)
 
=============================================================
BULK INSERT
[database_name.[schema_name]. | schema_name.] [table_name | view_name]
FROM 'data_file'
[WITH
(
[[,] BATCHSIZE = batch_size] -- sets the number of rows that can be inserted into the table in a single transaction
[[,] CHECK_CONSTRAINTS] -- specifies that all constraints on the target table or view must be checked during the bulk-import operation. Without the CHECK_CONSTRAINTS option, all CHECK and FOREIGN KEY constraints are ignored, and after the operation the constraints on the table are marked as not trusted.
[[,] CODEPAGE = {'ACP' | 'OEM' | 'RAW' | 'code_page'}] -- specifies the code page of the data in the data file
[[,] DATAFILETYPE = {'char' | 'native' | 'widechar' | 'widenative'}] -- specifies that BULK INSERT performs the import using the specified data file type value
[[,] FIELDTERMINATOR = 'field_terminator'] -- marks the field separator
[[,] FIRSTROW = first_row] -- specifies the number of the first row to load; the default is the first row of the specified data file
[[,] FIRE_TRIGGERS] -- whether to fire insert triggers
[[,] FORMATFILE = 'format_file_path']
[[,] KEEPIDENTITY] -- specifies that identity values in the imported data file are used for the identity column
[[,] KEEPNULLS] -- specifies that empty columns retain a null value during the bulk-import operation, instead of having default values inserted
[[,] KILOBYTES_PER_BATCH = kilobytes_per_batch]
[[,] LASTROW = last_row] -- specifies the number of the last row to load
[[,] MAXERRORS = max_errors] -- specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled
[[,] ORDER ({column [ASC | DESC]} [,...n])] -- specifies how the data in the data file is sorted
[[,] ROWS_PER_BATCH = rows_per_batch]
[[,] ROWTERMINATOR = 'row_terminator'] -- marks the row separator
[[,] TABLOCK] -- specifies that a table-level lock is held for the duration of the bulk-import operation
[[,] ERRORFILE = 'file_name'] -- specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset
)]
The following is a simple example:

bulk insert xsxt.dbo.tabletest from 'c:\data.txt'
with (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
