SQLite is a lightweight database that can run from a single executable, with nothing to install. For local data I prefer it: its syntax is close to SQL Server's, it needs no installation, and it can be driven straight from the command line. As for how it compares to SQL Server on large volumes of data, I'm not going to get into a performance debate here.
The problem came up in a project that required importing a large amount of data. Because of the nature of the project (lightweight, local), I chose SQLite for storage.
First version of the code:
StringBuilder sql = new StringBuilder();
foreach (DataRow dr in dt.Rows)
{
    // Build one big batch of concatenated INSERT statements.
    sql.Append("INSERT INTO Info (name, code) VALUES ('" + dr[0] + "', '" + dr[1] + "');\r\n");
}
SqlHelper.SqliteHelper.ExecuteNonQuery(sql.ToString(), CommandType.Text);
As you can see, the code above does no optimization work at all, and inserting 10,000 rows this way took more than a minute. That alone shows how important optimization is (with millions of rows, I dare not imagine how long it would run).
Opening a transaction:
StringBuilder sql = new StringBuilder();
sql.Append("BEGIN;"); // open the transaction
foreach (DataRow dr in dt.Rows)
{
    sql.Append("INSERT INTO Info (name, code) VALUES ('" + dr[0] + "', '" + dr[1] + "');\r\n");
}
sql.Append("COMMIT;"); // commit the transaction
SqlHelper.SqliteHelper.ExecuteNonQuery(sql.ToString(), CommandType.Text);
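As an aside, the same batching can be expressed through ADO.NET's transaction API instead of concatenating BEGIN/COMMIT into the SQL text. A minimal sketch, assuming the System.Data.SQLite provider and a connectionString field like the one in the helper class (which this post doesn't show in full):

using (SQLiteConnection con = new SQLiteConnection(connectionString))
{
    con.Open();
    // One explicit transaction around the whole batch, committed once at the end.
    using (SQLiteTransaction tran = con.BeginTransaction())
    using (SQLiteCommand cmd = con.CreateCommand())
    {
        cmd.Transaction = tran;
        foreach (DataRow dr in dt.Rows)
        {
            cmd.CommandText = "INSERT INTO Info (name, code) VALUES ('" + dr[0] + "', '" + dr[1] + "');";
            cmd.ExecuteNonQuery();
        }
        tran.Commit();
    }
}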
SQLite stores the whole database in a single file, so every data operation ultimately comes down to I/O on that file. Without an explicit transaction, each INSERT runs in its own implicit transaction and is flushed to the file on its own, so frequent single-row inserts pay that file I/O cost over and over, which is ruinous for performance. Wrapping the batch in one transaction lets the changes accumulate in memory and be written to the data file once at COMMIT, so the file is opened and closed only once and we avoid holding the file I/O for the whole import. At this point the transaction has already improved performance a great deal, but the result is still not ideal. What else is hurting performance? Next, let's try turning on prepared statements ("preprocessing").
Enabling prepared statements:
public static int ExecuteNonQuery(string commandText, CommandType commandType)
{
    int result = 0;
    if (commandText == null || commandText.Length == 0)
        throw new ArgumentNullException("SQL statement is empty");
    using (SQLiteConnection con = new SQLiteConnection(connectionString))
    {
        con.Open(); // the connection must be open before Prepare()
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.Connection = con;
        cmd.CommandText = commandText;
        cmd.Prepare(); // turn on preprocessing: compile the statement up front
        result = cmd.ExecuteNonQuery();
    }
    return result;
}
The idea behind a prepared statement is that the database compiles the statement once up front; the next time the same statement is executed, it doesn't have to be compiled again, which saves a lot of time. With that in place, the code still doesn't look fully optimized: in the first version, every inserted row produces a different SQL string, so the database has to compile each INSERT separately, and all that compilation wastes a great deal of insert time. Is there a way to use a single statement text for the whole loop? Yes: with parameterized values, every INSERT that gets executed has exactly the same SQL text.
Parameterization (I won't write out the parameter-accepting overload of the database helper class here; a rough sketch of it follows the code below):
StringBuilder sql = new StringBuilder();
SQLiteParameter[] sps = new SQLiteParameter[2];
foreach (DataRow dr in dt.Rows)
{
    sql.Clear();
    // The SQL text is identical on every iteration; only the parameter values change.
    sql.Append("INSERT INTO Info (name, code) VALUES (@Name, @Code);");
    sps[0] = new SQLiteParameter("@Name", dr["Name"]);
    sps[1] = new SQLiteParameter("@Code", dr["Code"]);
    SqlHelper.SqliteHelper.ExecuteNonQuery(sql.ToString(), sps, CommandType.Text);
}
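For the curious, here is a minimal sketch of what that parameter-accepting overload might look like. The names (SqlHelper.SqliteHelper, connectionString) are assumptions carried over from the earlier snippets, not the actual code from my project:

public static int ExecuteNonQuery(string commandText, SQLiteParameter[] parameters, CommandType commandType)
{
    if (commandText == null || commandText.Length == 0)
        throw new ArgumentNullException("SQL statement is empty");
    using (SQLiteConnection con = new SQLiteConnection(connectionString))
    {
        con.Open();
        using (SQLiteCommand cmd = new SQLiteCommand(commandText, con))
        {
            cmd.CommandType = commandType;
            if (parameters != null)
                cmd.Parameters.AddRange(parameters); // bind the caller's parameters
            cmd.Prepare(); // identical SQL text means the compiled statement can be reused
            return cmd.ExecuteNonQuery();
        }
    }
}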
After this series of optimizations, inserting the same 10,000 rows takes less than 1 second, a huge improvement in performance.
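For reference, the three techniques combine naturally into a single block: one connection, one transaction, and one parameterized command that is prepared once and re-executed for every row. A minimal sketch, assuming the System.Data and System.Data.SQLite namespaces and the same hypothetical connectionString as above:

using (SQLiteConnection con = new SQLiteConnection(connectionString))
{
    con.Open();
    using (SQLiteTransaction tran = con.BeginTransaction())
    using (SQLiteCommand cmd = con.CreateCommand())
    {
        cmd.Transaction = tran;
        cmd.CommandText = "INSERT INTO Info (name, code) VALUES (@Name, @Code);";
        SQLiteParameter pName = cmd.Parameters.Add("@Name", DbType.String);
        SQLiteParameter pCode = cmd.Parameters.Add("@Code", DbType.String);
        cmd.Prepare(); // compiled once, executed many times

        foreach (DataRow dr in dt.Rows)
        {
            pName.Value = dr["Name"];
            pCode.Value = dr["Code"];
            cmd.ExecuteNonQuery();
        }
        tran.Commit(); // one flush to the data file for the whole batch
    }
}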
Finally, I'm still a novice at this; if anything here is wrong, feel free to point it out.