SQL Server BULK INSERT Command Explained

Source: Internet
Author: User
Tags: bulk insert, naming convention, first row

BULK INSERT copies a data file into a database table or view in a user-specified format.
Syntax:

BULK INSERT [['database_name'.]['owner'].]{'table_name'} FROM 'data_file'
WITH (
  [BATCHSIZE [= batch_size]],
  [CHECK_CONSTRAINTS],
  [CODEPAGE [= 'ACP' | 'OEM' | 'RAW' | 'code_page']],
  [DATAFILETYPE [= 'char' | 'native' | 'widechar' | 'widenative']],
  [FIELDTERMINATOR [= 'field_terminator']],
  [FIRSTROW [= first_row]],
  [FIRE_TRIGGERS],
  [FORMATFILE = 'format_file_path'],
  [KEEPIDENTITY],
  [KEEPNULLS],
  [KILOBYTES_PER_BATCH [= kilobytes_per_batch]],
  [LASTROW [= last_row]],
  [MAXERRORS [= max_errors]],
  [ORDER ({column [ASC | DESC]} [, ...n])],
  [ROWS_PER_BATCH [= rows_per_batch]],
  [ROWTERMINATOR [= 'row_terminator']],
  [TABLOCK]
)

Parameters:
' database_name '
is the name of the database that contains the specified table or view. If not specified, the system defaults to the current database.

' owner '
is the name of the owner of the table or view. owner is optional if the user performing the bulk copy operation owns the specified table or view. If owner is not specified and the user performing the bulk copy operation does not own the specified table or view, Microsoft® SQL Server returns an error message and the bulk copy operation is canceled.

' table_name '
is the name of the table or view into which the data is bulk copied. Only views in which all columns refer to the same base table can be used. For more information about the restrictions on copying data into views, see INSERT.

' data_file '
is the full path of the data file that contains the data to be copied to the specified table or view. BULK INSERT can copy data from a disk (including a network drive, floppy disk, hard disk, and so on). data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify a Universal Naming Convention (UNC) name.

batchsize [= batch_size]
Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. SQL Server commits or rolls back (on failure) the transaction for every batch. By default, all data in the specified data file is one batch.
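For instance, when loading a large file it can be useful to commit in fixed-size batches so that a failure near the end does not roll back everything already committed; a sketch (the table name and file path below are hypothetical):

```sql
BULK INSERT Sales.dbo.OrderStage
FROM 'd:\loads\orders.txt'
WITH (
  BATCHSIZE = 10000,        -- each 10,000-row batch is its own transaction
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```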

check_constraints
Specifies that any constraints on TABLE_NAME are checked during bulk copy operations. By default, constraints are ignored.

CODEPAGE [= ' ACP ' | ' OEM ' | ' RAW ' | ' Code_page ']
Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32. The CODEPAGE values are: ACP — columns of char, varchar, or text data type are converted from the ANSI/Microsoft Windows® code page (ISO 1252) to the SQL Server code page; OEM (the default) — columns of char, varchar, or text data type are converted from the system OEM code page to the SQL Server code page; RAW — no conversion from one code page to another occurs (this is the fastest option); code_page — a specific code page number, such as 850.
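As a sketch, a character data file saved in OEM code page 850 could be loaded with an explicit code page number so that extended characters convert correctly (the table name and path are hypothetical):

```sql
BULK INSERT dbo.CustomerStage
FROM 'd:\loads\customers_cp850.txt'
WITH (
  CODEPAGE = '850',         -- convert from code page 850 to the SQL Server code page
  FIELDTERMINATOR = ';',
  ROWTERMINATOR = '\n'
)
```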

DataFileType [= {' Char ' | ' Native ' | ' Widechar ' | ' Widenative '}]
Specifies that BULK INSERT performs the copy operation using the specified data-file type. The DATAFILETYPE values are: char (the default) — performs the bulk copy from a data file that contains character data; native — performs the bulk copy using native (database) data types, from a data file created by bulk copying data out of SQL Server with the bcp utility; widechar — performs the bulk copy from a data file that contains Unicode characters; widenative — performs the same bulk copy as native, except that char, varchar, and text columns are stored as Unicode in the data file, which is likewise created by bulk copying data out of SQL Server with the bcp utility. widenative is a higher-performance alternative to widechar and is intended for transferring data from one computer running SQL Server to another by using a data file; use it to take advantage of native-mode performance when the data contains ANSI extended characters.

fieldterminator [= ' field_terminator ']
Specifies the field terminator to be used for char and widechar data files. The default field terminator is \t (tab).

FirstRow [= First_row]
Specifies the line number of the first row to be copied. The default value is 1, which indicates the first row in the specified data file.
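A common use of FIRSTROW is skipping a header line: if the first line of the file contains column names rather than data, FIRSTROW = 2 starts the copy at the second line (the names below are hypothetical):

```sql
BULK INSERT dbo.ProductStage
FROM 'd:\loads\products.csv'
WITH (
  FIRSTROW = 2,             -- skip the header line
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```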

fire_triggers
Specifies that any insert triggers defined in the destination table will be executed during the bulk copy operation. If Fire_triggers is not specified, no insert triggers are executed.

formatfile [= ' Format_file_path ']
Specifies the full path of a format file. A format file describes a data file that was created with the bcp utility on the same table or view and that contains stored responses. A format file should be used when the data file contains more or fewer columns than the table or view, the columns are in a different order, the column delimiters differ, or there are other changes in the data format. Typically, a format file is created with the bcp utility and then modified with a text editor as needed. For more information, see the bcp utility.
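For example, if the data file's columns are in a different order than the table's, a format file generated beforehand with bcp can tell BULK INSERT how to map them; a sketch with hypothetical table and path names:

```sql
-- the format file would typically be generated first, e.g.:
--   bcp dbo.ProductStage format nul -c -f d:\loads\products.fmt -T
BULK INSERT dbo.ProductStage
FROM 'd:\loads\products.dat'
WITH (
  FORMATFILE = 'd:\loads\products.fmt'
)
```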

keepidentity
Specifies that identity values are present in the import file. If KEEPIDENTITY is not specified, the identity values for this column in the imported data file are ignored, and SQL Server automatically assigns unique values based on the seed and increment specified when the table was created. If the data file does not contain values for the identity column of the table or view, use a format file to specify that the identity column should be skipped during import; SQL Server then assigns unique values to the column automatically. For more information, see DBCC CHECKIDENT.
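For instance, to preserve the ID values recorded in an export file rather than letting the identity column regenerate them (the names below are hypothetical):

```sql
BULK INSERT dbo.OrderStage
FROM 'd:\loads\orders_with_ids.txt'
WITH (
  KEEPIDENTITY,             -- use the identity values from the file
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```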

Keepnulls
Specifies that empty columns in bulk copy operations should retain a null value instead of assigning a default value to the inserted column.

Kilobytes_per_batch [= Kilobytes_per_batch]
Specifies the approximate kilobytes (KB) of data in each batch. By default, Kilobytes_per_batch is unknown.

lastrow [= Last_row]
Specifies the line number of the last line to copy. The default value is 0, which indicates the last row in the specified data file.

maxerrors [= max_errors]
Specifies the maximum number of errors that may occur before the bulk copy operation is canceled. Each row that cannot be imported by a bulk copy operation is ignored and counted as an error. If max_errors is not specified, the default value is 0.

Order ({column [ASC | DESC]} [,... n])
Specifies how the data in the data file is sorted. If the loaded data is sorted based on the clustered index in the table, the performance of the bulk copy operation can be increased. If the data file is sorted based on a different order, or if there is no clustered index in the table, the ordering clause is ignored. The given column name must be a valid column in the destination table. By default, the bulk insert operation assumes that the data file is not sorted. N is a placeholder that indicates that multiple columns can be specified.
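As a sketch, if the target table has a clustered index on an OrderID column and the file is already sorted by that column, declaring the order lets the server take advantage of the presorted data (the names below are hypothetical):

```sql
BULK INSERT dbo.OrderStage
FROM 'd:\loads\orders_sorted.txt'
WITH (
  ORDER (OrderID ASC),      -- file is presorted to match the clustered index
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```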

Rows_per_batch [= Rows_per_batch]
Specifies the number of rows (that is, rows_per_batch) in each batch of data processed. Used when BATCHSIZE is not specified, in which case the entire data file is sent to the server as a single transaction. The server optimizes the bulk load according to rows_per_batch. By default, rows_per_batch is unknown.

rowterminator [= ' row_terminator ']
Specifies the row terminator to be used for char and widechar data files. The default value is \n (newline).

TABLOCK
Specifies that a table-level lock is acquired for the duration of the bulk copy operation. A table can be loaded concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking behavior is determined by the table option table lock on bulk load. Holding a lock only for the duration of the bulk copy operation reduces lock contention on the table and significantly improves performance.

Note: A BULK INSERT statement can be executed within a user-defined transaction. For a user-defined transaction that uses a BULK INSERT statement with a BATCHSIZE clause to load data into a table or view in multiple batches, rolling back the transaction rolls back all batches sent to SQL Server.

Permissions: Only members of the sysadmin and bulkadmin fixed server roles can execute BULK INSERT.
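A typical bulk load therefore combines TABLOCK with batching to minimize lock contention; a sketch with hypothetical names:

```sql
BULK INSERT dbo.OrderStage
FROM 'd:\loads\orders.txt'
WITH (
  TABLOCK,                  -- one table-level lock instead of many row locks
  BATCHSIZE = 50000,
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```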

Example: This example imports order details from the specified data file, using the vertical bar (|) character as the field terminator and |\n as the row terminator.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH (
  FIELDTERMINATOR = '|',
  ROWTERMINATOR = '|\n'
)

This example specifies the Fire_triggers parameter.

BULK INSERT Northwind.dbo.[Order Details]
FROM 'f:\orders\lineitem.tbl'
WITH (
  FIELDTERMINATOR = '|',
  ROWTERMINATOR = ':\n',
  FIRE_TRIGGERS
)

=========================================================

BULK INSERT [database_name.[schema_name]. | schema_name.] [table_name | view_name]
FROM 'data_file'
[WITH (
  [[,] BATCHSIZE = batch_size]          -- the number of rows inserted into the table in a single transaction
  [[,] CHECK_CONSTRAINTS]               -- specifies that all constraints on the target table or view must be checked during the bulk-import operation; without CHECK_CONSTRAINTS, all CHECK and FOREIGN KEY constraints are ignored, and the table's constraints are marked as not-trusted after the operation
  [[,] CODEPAGE = {'ACP' | 'OEM' | 'RAW' | 'code_page'}]  -- specifies the code page of the data in the data file
  [[,] DATAFILETYPE = {'char' | 'native' | 'widechar' | 'widenative'}]  -- specifies that BULK INSERT performs the import using the specified data-file type
  [[,] FIELDTERMINATOR = 'field_terminator']  -- identifies the symbol that separates fields
  [[,] FIRSTROW = first_row]            -- specifies the number of the first row to load; the default is the first row of the specified data file
  [[,] FIRE_TRIGGERS]                   -- whether to fire insert triggers
  [[,] FORMATFILE = 'format_file_path']
  [[,] KEEPIDENTITY]                    -- specifies that identity values in the import file are used for the identity column
  [[,] KEEPNULLS]                       -- specifies that empty columns retain a null value during the bulk-import operation instead of having default values inserted
  [[,] KILOBYTES_PER_BATCH = kilobytes_per_batch]
  [[,] LASTROW = last_row]              -- specifies the number of the last row to load
  [[,] MAXERRORS = max_errors]          -- specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled
  [[,] ORDER ({column [ASC | DESC]} [, ...n])]  -- specifies how the data in the data file is sorted
  [[,] ROWS_PER_BATCH = rows_per_batch]
  [[,] ROWTERMINATOR = 'row_terminator']  -- identifies the symbol that separates rows
  [[,] TABLOCK]                         -- specifies that a table-level lock is acquired for the duration of the bulk-import operation
  [[,] ERRORFILE = 'file_name']         -- specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset
)]
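Putting the error-handling options together, a load can be allowed to tolerate a handful of bad rows while capturing them for inspection; a sketch with hypothetical names:

```sql
BULK INSERT dbo.OrderStage
FROM 'd:\loads\orders.txt'
WITH (
  MAXERRORS = 10,                     -- cancel the load once more than 10 rows fail
  ERRORFILE = 'd:\loads\orders.bad',  -- rejected rows are written here
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)
```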

Here are a few simple examples:

BULK INSERT Xsxt.dbo.tabletest
FROM 'c:\data.txt'
WITH (
  FIELDTERMINATOR = ',',
  ROWTERMINATOR = '\n'
)