MySQL Batch INSERT Performance Optimization

Source: Internet
Author: User
Welcome to the Linux community forum, where you can interact with 2 million technical staff.

The following compares INSERT performance for random data versus ordered data, at 10 thousand, 100 thousand, and 1 million records respectively.

The test results show that inserting ordered data does improve performance, but the improvement is not dramatic.

Comprehensive performance test:

Here we test INSERT efficiency when the three optimizations above (merging multiple rows into one SQL statement, wrapping inserts in a transaction, and inserting data in primary-key order) are applied together.
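The combined approach can be sketched as follows. This is not the article's original benchmark code; the table and column names are hypothetical, and the helper only builds the SQL text (a real run would send each statement through a driver such as PyMySQL).

```python
# Sketch of the three optimizations combined: a merged multi-row INSERT,
# a single explicit transaction, and rows pre-sorted by primary key.
# Table/column names here are illustrative, not from the article.

def build_batch_insert(table, columns, rows):
    """Merge many rows into one multi-row INSERT statement."""
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values};"

def ordered_batch_statements(table, columns, rows, key_index=0):
    """Sort rows by primary key, then wrap the merged INSERT in one transaction."""
    rows = sorted(rows, key=lambda r: r[key_index])  # ordered data
    return [
        "START TRANSACTION;",
        build_batch_insert(table, columns, rows),    # merged statement
        "COMMIT;",                                   # single commit
    ]

stmts = ordered_batch_statements(
    "t_user", ("id", "name"), [(3, "c"), (1, "a"), (2, "b")]
)
```

Sorting before insertion is what keeps B+Tree index maintenance cheap: each new key lands at or near the rightmost leaf instead of a random page.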

The test results show that the combination of data merging and transactions gives a clear performance improvement when the data volume is small. When the data volume is large (more than 10 million rows), however, performance drops sharply, because the data set now exceeds the capacity of the InnoDB buffer pool (innodb_buffer_pool_size): each index lookup then involves a large number of disk read/write operations, and performance degrades rapidly. The combination of data merging, transactions, and ordered data still performs well beyond 10 million rows: with ordered data, index positioning stays cheap and does not require frequent disk I/O, so high performance is maintained.

Note:

1. SQL statements have a length limit. When merging rows into a single statement, the statement must stay within this limit, which is controlled by the max_allowed_packet setting (default 1 MB; raising it, for example to 8 MB, allows larger merged statements).
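One way to respect that limit is to split the rows into chunks whose rendered size stays under a byte budget. The sketch below is an assumption about how one might do this client-side; the 8 MB figure mirrors the note above, and the per-row size estimate via len() is deliberately rough.

```python
# Sketch: keep each merged INSERT under a byte budget so it never
# exceeds the server's max_allowed_packet. The budget and the
# row-rendering function are caller-supplied assumptions.

MAX_PACKET = 8 * 1024 * 1024  # bytes; should match the server setting

def chunk_rows(rows, render, budget=MAX_PACKET, overhead=1024):
    """Split rows into chunks whose rendered VALUES text fits the budget.

    `render(row)` returns the SQL text for one row, e.g. "(1, 'a')".
    `overhead` reserves space for the INSERT preamble.
    """
    chunks, current, size = [], [], overhead
    for row in rows:
        piece = len(render(row)) + 2  # +2 for the ", " separator
        if current and size + piece > budget:
            chunks.append(current)
            current, size = [], overhead
        current.append(row)
        size += piece
    if current:
        chunks.append(current)
    return chunks

rows = [(i, "x" * 10) for i in range(10)]
chunks = chunk_rows(rows, lambda r: f"({r[0]}, '{r[1]}')",
                    budget=60, overhead=0)
```

Each chunk would then become one merged INSERT statement.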

2. Transaction size needs to be controlled; a transaction that is too large can hurt execution efficiency. MySQL has an innodb_log_buffer_size setting: once a transaction's changes exceed this value, InnoDB starts flushing log data to disk and efficiency drops. It is therefore better to commit the transaction before the accumulated data reaches this value.
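In practice this means committing every N rows rather than once at the very end. A minimal sketch, assuming a DB-API-style `execute`/`commit` pair (stubbed here so the logic is visible without a live server):

```python
# Sketch: bound transaction size by committing after every `batch`
# rows, so a single transaction stays well under innodb_log_buffer_size.
# `execute` and `commit` stand in for a real cursor/connection.

def insert_in_batches(rows, execute, commit, batch=1000):
    """Insert rows, committing after every `batch` rows."""
    for i, row in enumerate(rows, start=1):
        execute(row)
        if i % batch == 0:
            commit()
    if len(rows) % batch != 0:
        commit()  # flush the final partial batch

commits = []
insert_in_batches(
    list(range(2500)),
    execute=lambda r: None,
    commit=lambda: commits.append("commit"),
    batch=1000,
)
```

The right batch size depends on row width: the goal is to keep each transaction's redo-log footprint below the log buffer, not to hit a magic row count.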

