When using MySQL we inevitably run into situations where a large number of rows must be inserted. The simplest approach is to write a single INSERT statement and assign values in a loop, executing it once per row:
```csharp
// Save Rddform rows one at a time
for (int i = 0; i < rddformlist.Count; i++)
{
    string cmdText = "INSERT INTO Rddform (Id, createdtime, modifiedtime, createdby, modifiedby, formtype) VALUES ('"
        + rddformlist[i].Rddformguid + "', '"
        + rddformlist[i].Createtime + "', '"
        + rddformlist[i].Modifytime + "', '"
        + rddformlist[i].CreateBy + "', '"
        + rddformlist[i].Modifyby + "', "
        + rddformlist[i].FormType + ")";
    MySqlCommand mysqlCom = new MySqlCommand(cmdText, mysqlCon);
    mysqlCom.Transaction = trans; // bind the transaction
    mysqlCom.ExecuteNonQuery();
}
```
After testing, this method performs poorly: locally it inserts about 20 rows per second, and on the cloud server only about 3 rows per second. Inserting 2,000 rows took roughly half a minute locally and more than ten minutes on the cloud, which is clearly too slow.
Now let's merge the rows into a single multi-row INSERT, i.e. insert all the data with one INSERT statement:
```csharp
// Save Rddform rows with a single multi-row INSERT
StringBuilder strRddform = new StringBuilder(
    "INSERT INTO Rddform (Id, createdtime, modifiedtime, createdby, modifiedby, formtype) VALUES ");
for (int i = 0; i < rddformlist.Count; i++)
{
    strRddform.Append("('" + rddformlist[i].Rddformguid + "', '"
        + rddformlist[i].Createtime + "', '"
        + rddformlist[i].Modifytime + "', '"
        + rddformlist[i].CreateBy + "', '"
        + rddformlist[i].Modifyby + "', '"
        + rddformlist[i].FormType + "')");
    if (i < rddformlist.Count - 1)
    {
        strRddform.Append(",");
    }
}
MySqlCommand rddformCom = new MySqlCommand(strRddform.ToString(), mysqlCon);
rddformCom.Transaction = trans; // bind the transaction
rddformCom.ExecuteNonQuery();
```
After testing, inserting 2,000 rows now completes in about one second locally, and in a little over a second on the cloud. That kind of speed leaves no reason to refuse, so I changed the code without hesitation. Of course, actual numbers will vary with your machine and network environment; performance comparisons posted by others online show the same pattern.
Precautions:
A SQL statement has a length limit, so the merged INSERT must stay within it. The limit is controlled by the max_allowed_packet configuration option, which defaults to 1 MB; it was raised to 8 MB for these tests.
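As a sketch of how to inspect and raise the limit (the SET GLOBAL form requires sufficient privileges and does not survive a server restart; for a permanent change, set the option under [mysqld] in my.cnf):

```sql
-- Check the current limit (in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it to 8 MB for the running server
SET GLOBAL max_allowed_packet = 8 * 1024 * 1024;
```

Note that a client connection may also need to be reopened to pick up the new value.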
Transactions need to be kept to a reasonable size; a transaction that is too large hurts execution efficiency. InnoDB has an innodb_log_buffer_size configuration option, and once a transaction's log exceeds that value InnoDB flushes the data to disk, at which point efficiency drops. So it's a good idea to commit the transaction before the accumulated data reaches that value.
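Putting both caveats together, one way to keep each statement and each transaction bounded is to split the list into fixed-size batches and commit after each one. A minimal sketch, assuming a hypothetical helper BuildBatchInsert that builds the multi-row INSERT string (as shown above) for a slice of the list:

```csharp
// Sketch: commit in batches so no single statement or transaction grows too large.
// batchSize should be tuned so each batch stays under max_allowed_packet
// and each transaction stays well under innodb_log_buffer_size.
int batchSize = 500;
for (int start = 0; start < rddformlist.Count; start += batchSize)
{
    int count = Math.Min(batchSize, rddformlist.Count - start);
    MySqlTransaction batchTrans = mysqlCon.BeginTransaction();
    try
    {
        // BuildBatchInsert is a hypothetical helper producing the multi-row
        // INSERT for rddformlist[start .. start + count - 1]
        string sql = BuildBatchInsert(rddformlist, start, count);
        MySqlCommand cmd = new MySqlCommand(sql, mysqlCon);
        cmd.Transaction = batchTrans;
        cmd.ExecuteNonQuery();
        batchTrans.Commit(); // commit before the transaction grows too large
    }
    catch
    {
        batchTrans.Rollback();
        throw;
    }
}
```

The batch size of 500 is an assumption for illustration; the right value depends on row width and the two server settings above.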
MySQL database INSERT performance optimization