SQL Server Data Insertion Performance Notes


I haven't paid much attention to SQL performance for a while, because my recent projects have involved neither tens of millions of rows nor strict performance requirements, so there was no need to work hard on it. But the topic came up as soon as I was chatting with a friend: he was using Delphi to import data and found that inserting rows was very slow. Based on my own experience with SQL, I recommended batch inserts and explained that they should be much faster. Batch data imports like this are very common in real work, and a careless implementation can easily bring a server to its knees. So what can we do to keep the server responsive, so that users don't have to go watch a movie and come back to check the upload results? I ran a small experiment and will briefly explain my views.

Environment:

CPU: i7

Memory: 6 GB

Database: SQL Server 2008

Data volume: 100,000 rows

  

Experiment:

Create two functions, LoopInsert and BatchInsert. LoopInsert concatenates one statement per row, in the form insert into xxx values (...) insert into xxx values (...) ..., executes the resulting SQL, and returns the execution time. BatchInsert concatenates multi-row statements in the form insert into xxx values (...), (...), (...), executes them, and returns the execution time. A console program runs both functions several times and prints the results.
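As a sketch, the SQL generated by the two functions has the following shapes (table and values simplified here for illustration):

-- LoopInsert: one statement, and thus one implicit transaction, per row
INSERT INTO TQRCode (Name, Remark) VALUES ('row 0', 'row 0_remark');
INSERT INTO TQRCode (Name, Remark) VALUES ('row 1', 'row 1_remark');

-- BatchInsert: one statement carrying many rows
INSERT INTO TQRCode (Name, Remark) VALUES
    ('row 0', 'row 0_remark'),
    ('row 1', 'row 1_remark'),
    ('row 2', 'row 2_remark');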

 

Table Structure:

CREATE TABLE TQRCode
(
    ID INT PRIMARY KEY IDENTITY(1, 1),
    Name NVARCHAR(300),
    Remark NVARCHAR(300)
)

  

C# implementation code:

public class DataInertTest
{
    /// <summary>
    /// Loop insert: one INSERT statement per row
    /// </summary>
    /// <returns>execution time (milliseconds)</returns>
    public double LoopInsert(int count)
    {
        StringBuilder sql = new StringBuilder();
        for (int i = 0; i < count; i++)
        {
            sql.Append("INSERT INTO TQRCode (Name, Remark) VALUES ('This is data entry ")
               .Append(i)
               .Append("', 'This is data entry ")
               .Append(i)
               .Append("_remark');");
        }
        // time the execution only, not the string building
        var stopwatch = new Stopwatch();
        stopwatch.Start();
        new Helper().Execute(sql.ToString());
        return stopwatch.Elapsed.TotalMilliseconds;
    }

    /// <summary>
    /// Batch insert: multi-row VALUES, starting a new statement every 500 rows
    /// </summary>
    /// <returns>execution time (milliseconds)</returns>
    public double BatchInsert(int count)
    {
        StringBuilder sql = new StringBuilder();
        sql.Append("INSERT INTO TQRCode (Name, Remark) VALUES ");
        for (int i = 0; i < count; i++)
        {
            sql.Append("('This is data entry ")
               .Append(i)
               .Append("', 'This is data entry ")
               .Append(i)
               .Append("_remark')");
            if (i == count - 1)
            {
                sql.Append(";");
            }
            else if ((i + 1) % 500 == 0)
            {
                // start a new statement to stay under SQL Server's 1000-row limit
                sql.Append("; INSERT INTO TQRCode (Name, Remark) VALUES ");
            }
            else
            {
                sql.Append(",");
            }
        }
        // time the execution only, not the string building
        var stopwatch = new Stopwatch();
        stopwatch.Start();
        new Helper().Execute(sql.ToString());
        return stopwatch.Elapsed.TotalMilliseconds;
    }
}

Note: SQL Server allows at most 1000 rows in a single multi-row INSERT statement. Exceeding that raises the error "The number of row value expressions in the INSERT statement exceeds the maximum allowed number of 1000 row values." This is why BatchInsert starts a new INSERT statement every 500 rows.
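To illustrate the limit: inserting, say, 2500 rows as multi-row VALUES has to be split into several statements of at most 1000 rows each (a sketch, with row values elided):

INSERT INTO TQRCode (Name, Remark) VALUES ('row 0', '...'), /* ... */ ('row 999', '...');
INSERT INTO TQRCode (Name, Remark) VALUES ('row 1000', '...'), /* ... */ ('row 1999', '...');
INSERT INTO TQRCode (Name, Remark) VALUES ('row 2000', '...'), /* ... */ ('row 2499', '...');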

 

Test code:

class Program
{
    static void Main(string[] args)
    {
        for (int i = 0; i < 3; i++)
        {
            var obj = new DataInertTest();
            var t1 = obj.LoopInsert(100000);
            var t2 = obj.BatchInsert(100000);

            Console.WriteLine("LoopInsert: {0}", t1);
            Console.WriteLine("BatchInsert: {0}", t2);
            Console.WriteLine("--------------------------------------------------");
        }
        Console.ReadLine();
    }
}

  

Test results:

All three runs showed an obvious gap: batch insert was more than ten times faster than loop insert. The reason is that in SQL Server, each INSERT statement is by default its own independent transaction. Inserting 500 rows in a loop means 500 transactions, while a single multi-row insert of 500 rows is only one, so transaction overhead drops sharply. Frequent transaction commits also put considerable pressure on the database, which in turn drags down overall system performance (and yes, careless code really can hang the server).

It should be noted that because the data volume in this test is small, both methods send all the SQL to the database in a single call, which reduces the number of database round trips. The major drawback is memory consumption: concatenating SQL for 100,000 rows is fine, but for millions of rows it may not be. So if the data set is large, it is better to submit every few hundred or few thousand rows; the exact chunk size has to be tuned case by case to balance memory consumption against round trips.
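One way to sketch the chunked-commit approach in T-SQL (the chunk size here is an arbitrary example, not a recommendation) is to wrap each chunk in its own explicit transaction and commit before moving on:

-- One explicit transaction per chunk; repeat for each chunk of rows.
BEGIN TRANSACTION;
INSERT INTO TQRCode (Name, Remark) VALUES
    ('row 0', 'row 0_remark'),
    ('row 1', 'row 1_remark');  -- ... up to the chosen chunk size, e.g. 500 rows
COMMIT TRANSACTION;

Committing per chunk caps both the memory held by the client for string concatenation and the size of each transaction on the server.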

 
