Performance of inserting or batch importing large amounts of data

Source: Internet
Author: User
Tags: log flushes
I often see customers testing the performance of SQL Server INSERT statements or bulk imports of large amounts of data. The T-SQL script for testing a large number of INSERT statements looks like this:

use myDB
go
create table t1 (id int primary key not null identity(1, 1), name varchar(200), memo char(500));
go
truncate table t1
go
declare @i int
set @i = 0
declare @beginTime datetime, @endTime datetime
set @beginTime = GETDATE()
while (@i < 200000)
begin
    insert into t1 (name, memo) values ('test' + str(RAND() * 100000000), 'memo' + str(RAND() * 100000000))
    set @i = @i + 1
end
set @endTime = GETDATE()
select @endTime - @beginTime

The above script tests the time required to insert 200000 rows of data. Is there a problem with this script?

The syntax is correct, but the script is not written with performance in mind. If you watch Performance Monitor while it runs, you will find that the Log Flushes/sec counter is very high during the insert; it reaches about 5,000 on my machine. Each log flush means SQL Server writes the transaction log to disk once. Because each INSERT runs as its own autocommitted transaction, every row forces a log flush, and 5,000 log writes per second produce a large number of disk write operations. These disk writes dominate the run time of the batch: the script above takes about 40 seconds on my machine.

How can we improve the performance of this script? How can we reduce Log Flushes/sec, and with it the number of disk writes? The answer is to commit the inserts in batches of a fixed size, as shown in the script below.

truncate table t1
go
declare @i int
set @i = 0
declare @beginTime datetime, @endTime datetime
set @beginTime = GETDATE()
declare @batchSize int
set @batchSize = 1000
while (@i < 200000)
begin
    if (@i % @batchSize = 0)
    begin
        if (@@TRANCOUNT > 0) COMMIT TRAN
        BEGIN TRAN
    end
    insert into t1 (name, memo) values ('test' + str(RAND() * 100000000), 'memo' + str(RAND() * 100000000))
    set @i = @i + 1
end
if (@@TRANCOUNT > 0) COMMIT TRAN
set @endTime = GETDATE()
select @endTime - @beginTime

The explicit transaction handling (BEGIN TRAN / COMMIT TRAN every @batchSize rows) is my improvement. I ran it on the same machine, and it took only a little more than seven seconds: more than five times faster. With a batch size of 1,000, the loop issues about 200 commits instead of 200,000, so if you watch Performance Monitor again, Log Flushes/sec drops sharply.
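The same principle can be demonstrated outside SQL Server. The sketch below is my own illustration (not from the original article), using Python's standard sqlite3 module: with a batch size of 1, every row is its own transaction and every commit forces a journal flush to disk, while a larger batch size lets many rows share one commit. The file name and row counts are arbitrary choices for the demo.

```python
# Hypothetical illustration (sqlite3, not SQL Server) of the batching effect:
# per-row commits force one synchronous write per row; batched commits share
# one write across many rows, just like the BEGIN TRAN / COMMIT TRAN script.
import os
import sqlite3
import tempfile
import time


def timed_insert(rows: int, batch_size: int) -> float:
    """Insert `rows` rows into a scratch table, committing every
    `batch_size` rows, and return the elapsed time in seconds."""
    db_path = os.path.join(tempfile.gettempdir(), "batch_demo.db")
    # isolation_level=None puts sqlite3 in autocommit mode, so BEGIN/COMMIT
    # are explicit, mirroring the explicit transactions in the T-SQL script.
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("PRAGMA synchronous = FULL")  # flush to disk on each commit
    conn.execute("DROP TABLE IF EXISTS t1")
    conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT, memo TEXT)")
    start = time.perf_counter()
    for i in range(rows):
        if i % batch_size == 0:
            if conn.in_transaction:
                conn.execute("COMMIT")
            conn.execute("BEGIN")
        conn.execute("INSERT INTO t1 (name, memo) VALUES (?, ?)",
                     ("test%d" % i, "memo%d" % i))
    if conn.in_transaction:
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed


if __name__ == "__main__":
    per_row = timed_insert(2000, 1)     # one commit (one flush) per row
    batched = timed_insert(2000, 1000)  # one commit per 1000 rows
    print("per-row commits: %.2fs, batched commits: %.2fs" % (per_row, batched))
```

This is only a sketch of the principle: the batched run typically finishes many times faster because the journal is flushed once per batch instead of once per row, which is the same reason the T-SQL version drops from about 40 seconds to about 7.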
