MySQL INSERT statement optimization summary

1) If you INSERT many rows from the same client at the same time, use a single INSERT statement with multiple VALUES lists. This is faster than using separate single-row INSERT statements (several times faster in some cases).

The code is as follows:

INSERT INTO test VALUES (1, 2), (1, 3), (1, 4), ...;

This inserts multiple data records in one SQL statement.

For comparison, the usual per-row INSERT statements look like this:

The code is as follows:


INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);

Rewritten as a single multi-row statement:

INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES
('0', 'userid_0', 'content_0', 0), ('1', 'userid_1', 'content_1', 1);


A Java (JDBC) implementation using a prepared statement and batching:

The code is as follows:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

Connection connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "123");
// Disable auto-commit. By default, each SQL statement is committed individually.
connection.setAutoCommit(false);
PreparedStatement statement = connection.prepareStatement("INSERT INTO insert_table VALUES (?, ?, ?, ?)");
// Record 1
statement.setString(1, "11:11:11");
statement.setString(2, "userid_0");
statement.setString(3, "content_0");
statement.setInt(4, 0);
statement.addBatch();
// Record 2
statement.setString(1, "12:12:12");
statement.setString(2, "userid_1");
statement.setString(3, "content_1");
statement.setInt(4, 1);
statement.addBatch();
// Record 3
statement.setString(1, "13:13:13");
statement.setString(2, "userid_2");
statement.setString(3, "content_2");
statement.setInt(4, 2);
statement.addBatch();
// Execute the three buffered inserts as one batch.
int[] counts = statement.executeBatch();
// Commit the transaction.
connection.commit();

The rewritten INSERT improves the program's insert efficiency for two main reasons: the server parses the SQL only once to insert all the rows, instead of once per statement, and the combined statement is shorter overall, which reduces network I/O.


2) If you INSERT many rows from different clients, you can use the INSERT DELAYED statement for higher speed. DELAYED means the statement returns to the client immediately, while the rows are queued in memory and not yet written to disk; this is much faster than executing each insert separately. LOW_PRIORITY does the opposite: the insert proceeds only after all other clients have finished reading the table. (Note that INSERT DELAYED was deprecated in MySQL 5.6 and removed in MySQL 5.7.)
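
For illustration, a minimal sketch of both modifiers, assuming a hypothetical MyISAM table log_t:

INSERT DELAYED INTO log_t (uid, content) VALUES ('userid_0', 'content_0');
-- LOW_PRIORITY instead waits until no other client is reading the table:
INSERT LOW_PRIORITY INTO log_t (uid, content) VALUES ('userid_1', 'content_1');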

3) Store the index file and the data file on different disks (using the DATA DIRECTORY and INDEX DIRECTORY options at table-creation time; see the sketch below).
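
A minimal sketch, assuming a MyISAM table and hypothetical mount points (the server must permit symlinks for these options to take effect):

CREATE TABLE t_split (id INT, content VARCHAR(30)) ENGINE = MyISAM
DATA DIRECTORY = '/disk1/mysql_data'
INDEX DIRECTORY = '/disk2/mysql_index';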

4) For batch inserts, you can increase speed by raising the value of the bulk_insert_buffer_size variable. However, this applies only to MyISAM tables.
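
For example, to raise the buffer from its 8 MB default to 64 MB for the current session (an illustrative size, not a recommendation):

SET SESSION bulk_insert_buffer_size = 64 * 1024 * 1024;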

5) When loading a table from a text file, use LOAD DATA INFILE. This is usually 20 times faster than using many INSERT statements.
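
A minimal sketch, assuming a tab-separated file at a hypothetical path:

LOAD DATA INFILE '/tmp/insert_table.txt'
INTO TABLE insert_table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';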

6) Use the REPLACE statement instead of INSERT when overwriting duplicate rows is what you need.

7) Use the IGNORE keyword to skip duplicate records when your application permits; both variants are sketched below.
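
Hedged sketches of both variants, assuming `uid` carries a unique index (an assumption; the original does not show the table's keys):

-- REPLACE deletes the conflicting row, then inserts the new one:
REPLACE INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('2', 'userid_0', 'content_new', 0);
-- INSERT IGNORE silently skips rows that would violate the unique key:
INSERT IGNORE INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('2', 'userid_0', 'content_new', 0);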

8) Locking a table can speed up INSERT work performed with multiple statements:

The code is as follows:

LOCK TABLES a WRITE;
INSERT INTO a VALUES (1, 23), (2, 34), (4, 33);
INSERT INTO a VALUES (8, 26), (6, 29);
UNLOCK TABLES;

This improves performance because the index buffer is flushed to disk only once, after all the INSERT statements have completed; normally there is one index-buffer flush per INSERT statement. If you can insert all rows with a single statement, no explicit locking is needed.

For transactional tables, use BEGIN and COMMIT instead of LOCK TABLES to speed up insertion, as sketched below.
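
A minimal sketch for a transactional (e.g. InnoDB) table, reusing the rows from the example above:

START TRANSACTION;
INSERT INTO a VALUES (1, 23), (2, 34), (4, 33);
INSERT INTO a VALUES (8, 26), (6, 29);
COMMIT;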


1. Analysis
Inserting a row consists of the following actions; the number in parentheses is the approximate relative cost:
Connecting (3)
Sending the query to the server (2)
Parsing the query (2)
Inserting the row (1 * size of row)
Inserting indexes (1 * number of indexes)
Closing (1)
In addition, index insertion slows down as the table grows.

2. Optimization methods
A. If a client needs to insert many rows at once, use multiple VALUES lists:
insert into t1 values (...), (...), (...);
When inserting into a non-empty table, you can tune bulk_insert_buffer_size (default 8388608 bytes = 8 MB).
B. If multiple clients insert many rows at the same time, use the INSERT DELAYED statement.
Benefit: the client returns immediately, and the queued rows are written together in tidy blocks rather than scattered one by one.
Drawbacks: if rows are read from or deleted in the table, insertion slows down; a handler thread must be started for the table to process the queued rows, which takes extra resources; and the queued rows are held only in memory, so they are lost if the server is terminated unexpectedly (e.g. kill -9).
This method applies only to tables using the MyISAM, MEMORY, ARCHIVE, or BLACKHOLE engines.
delayed_insert_limit (default 100) controls how many rows the handler writes at a time.
If no new INSERT DELAYED statement arrives within delayed_insert_timeout (default 300) seconds, the handler thread exits.
If the queue reaches delayed_queue_size rows (default 1000), clients issuing INSERT DELAYED block.
It is slower than method A.
For MyISAM, method B is not needed when method C is available. The queue variables can be tuned as sketched below.
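
A minimal tuning sketch, assuming a MySQL version that still supports INSERT DELAYED; the values are illustrative:

SET GLOBAL delayed_insert_limit = 200;    -- rows written per batch
SET GLOBAL delayed_insert_timeout = 600;  -- idle seconds before the handler thread exits
SET GLOBAL delayed_queue_size = 5000;     -- queue length at which clients block
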
C. For MyISAM tables, if the data file has no holes (no rows deleted from the middle), INSERT statements can append rows at the end of the file while SELECT statements run concurrently.
concurrent_insert must be 1 (the default); see the sketch below.
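
To check the setting and, if needed, restore the default:

SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';
SET GLOBAL concurrent_insert = 1;
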
D. LOAD DATA INFILE from a text file is generally 20 times faster than using INSERT statements.
If the table has indexes, you can disable them first and rebuild them after loading; creating the indexes after the load reduces disk seeks compared with updating them row by row during it (see the sketch below).
MyISAM performs this load-then-index optimization automatically when the table is empty.
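
A hedged sketch of the pattern for a non-empty MyISAM table (DISABLE KEYS affects only non-unique indexes; the file path is hypothetical):

ALTER TABLE t DISABLE KEYS;
LOAD DATA INFILE '/tmp/t.txt' INTO TABLE t;
ALTER TABLE t ENABLE KEYS;
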
E. When inserting with multiple statements, run LOCK TABLES t WRITE first, do the inserts, and then UNLOCK TABLES (the index buffer is flushed only once); with a single INSERT this is unnecessary.
F. To speed up LOAD DATA INFILE and INSERT on MyISAM tables, increase key_buffer_size (default 8 MB).
If the machine has more than 256 MB of memory, you can set key_buffer_size to 64 MB and also raise table_open_cache from its default of 64.
On machines with less memory, 16 MB is a reasonable key_buffer_size.
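
A minimal sketch, assuming a MySQL version where these variables are dynamic (otherwise set the equivalents in my.cnf); the table_open_cache value of 256 is illustrative:

SET GLOBAL key_buffer_size = 64 * 1024 * 1024;
SET GLOBAL table_open_cache = 256;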

3. Test results
Test table:
create table t (id int auto_increment primary key, content1 varchar(30), content2 int);
create index ind_of_t on t (content1);

A. Single-row INSERTs into an empty table:
1,000 records: 24 seconds
10,000 records: 277 seconds
(An intermediate data point was garbled beyond recovery.)

B. Multi-row INSERTs into an empty table, 1,000 rows per VALUES list:
1,000 records: 2 seconds
(record count garbled): 6 seconds
10,000 records: 11 seconds
50,000 records: 51 seconds
100,000 records: 99 seconds

C. 10 threads inserting concurrently into the empty table, 1,000-row VALUES lists:
(The interleaved progress log was garbled in extraction; the record counts are unreliable, but each thread's completion time survives.)
Thread 6: 552 seconds; thread 0: 558 seconds; thread 1: 573 seconds; thread 4: 615 seconds; thread 3: 615 seconds; thread 5: 623 seconds; thread 8: 625 seconds; thread 7: 643 seconds; thread 9: 648 seconds; thread 2: 654 seconds.

D. 10 threads inserting concurrently into a table that already holds 1 million records, 1,000-row VALUES lists, 900,000 additional records per thread (9 million in total):
All ten threads started within one second of each other (Unix time 1236937010-1236937011), and progress was logged every 100,000 records per thread. The surviving checkpoints show throughput degrading as the table grew: the first 100,000 records per thread took roughly 500-700 seconds, while the full 900,000 took over 4,000 seconds.
Surviving completion times: thread 3: 4101 seconds; thread 0: 4209 seconds; thread 8: 4227 seconds; thread 1: 4241 seconds; thread 5: 4288 seconds (the last to finish). The other threads' completion lines were garbled but fall in the same range.
