Tuning MySQL database insertion speed


The MySQL database insertion speed had to be tuned because of an unusual project requirement: the database had to grow from zero to roughly 400 million records within two or three days. The optimization process was difficult throughout, but in the end the MySQL insert rate rose from about 1,000 rows per second at the beginning to about 10,000 rows per second. The whole optimization process is described below.

The central ideas for improving database insertion performance were:

1. Make the database write to its data files in one pass (large sequential writes) wherever possible;
2. Reduce database checkpoint operations;
3. Buffer data in the application as much as possible, and insert and commit it in batches;
4. Reduce system I/O contention.

Following these four points, I made the adjustments below to the MySQL service as an amateur DBA.

First, the configuration of the MySQL server responsible for recording the data was modified to improve its overall write speed. Three variables were changed: innodb_autoextend_increment, innodb_log_buffer_size, and innodb_log_file_size. Their default values were 5M, 8M, and 8M respectively; based on the server's memory size and usage, they were raised to 128M, 16M, and 128M. At the same time, the original two log files were changed to eight. These changes mainly address points 1 and 2 above; for example, raising innodb_autoextend_increment avoids the checkpoint activity that MySQL would otherwise incur by frequently auto-extending its data files. (A configuration sketch follows the results below.)

Second, the large table was converted to independent (file-per-table) tablespaces and partitioned, and the different partitions were then attached to multiple hard-disk arrays. (A partitioning sketch also follows below.)

After completing the modifications above, the test results were a happy sight:

    Query OK, 2500000 rows affected (4 min 4.85 sec)
    Records: 2500000  Duplicates: 0  Warnings: 0

    Query OK, 2500000 rows affected (4 min 58.89 sec)
    Records: 2500000  Duplicates: 0  Warnings: 0

    Query OK, 2500000 rows affected (5 min 25.91 sec)
    Records: 2500000  Duplicates: 0  Warnings: 0

    Query OK, 2500000 rows affected (5 min 22.32 sec)
    Records: 2500000  Duplicates: 0  Warnings: 0

The final size of the table:

    +----------+
    | count(*) |
    +----------+
    | 10000000 |
    +----------+

These results show that growing data volume does affect insertion performance, but the overall rate is still considerable: the 400 million records can be loaded in less than one day. Just when the database bottleneck seemed cleverly solved, however, the application feeding the data turned out to be the new bottleneck.
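For concreteness, here is a minimal my.cnf sketch of the server-side changes described above. The three raised values are the ones given in the article; innodb_log_files_in_group is my assumption for how "two log files became eight", and on MySQL versions of that era the old ib_logfile* files had to be removed (after a clean shutdown) before a new redo log size would take effect.

    [mysqld]
    # Grow data files in 128 MB steps (the value is in megabytes), so that
    # frequent auto-extension does not keep triggering checkpoint activity.
    innodb_autoextend_increment = 128

    # Buffer more redo log in memory before it is flushed to disk.
    innodb_log_buffer_size = 16M

    # Larger and more redo log files mean fewer, less frequent checkpoints.
    innodb_log_file_size = 128M
    # Assumed setting for the article's "two log files changed to eight".
    innodb_log_files_in_group = 8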
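Point 3, buffering rows in the application and committing in batches, might look like the SQL sketch below; the table and column names are hypothetical, since the article does not give the schema. Packing many rows into one INSERT, and many INSERTs into one transaction, means the redo log is flushed once per batch rather than once per row.

    -- Hypothetical table and data; batch sizes in the thousands are typical.
    SET autocommit = 0;

    -- One multi-row INSERT per buffered batch.
    INSERT INTO records (id, payload) VALUES
        (1, 'aaa'),
        (2, 'bbb'),
        (3, 'ccc');

    -- One log flush for the whole batch instead of one per row.
    COMMIT;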
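The partitioning step could be sketched as follows. This assumes innodb_file_per_table is enabled and a MySQL version (5.6 or later) in which InnoDB honors DATA DIRECTORY for individual partitions; the table definition, range boundaries, and array mount points are all illustrative, not the article's actual setup.

    -- Hypothetical partitioned table spread across several disk arrays.
    CREATE TABLE records (
        id      BIGINT NOT NULL,
        payload VARCHAR(255),
        PRIMARY KEY (id)
    ) ENGINE = InnoDB
    PARTITION BY RANGE (id) (
        PARTITION p0 VALUES LESS THAN (100000000) DATA DIRECTORY = '/array1/mysql',
        PARTITION p1 VALUES LESS THAN (200000000) DATA DIRECTORY = '/array2/mysql',
        PARTITION p2 VALUES LESS THAN (300000000) DATA DIRECTORY = '/array3/mysql',
        PARTITION p3 VALUES LESS THAN MAXVALUE DATA DIRECTORY = '/array4/mysql'
    );

Author: Mark.Liang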
