160304, MySQL database insert and read speed tuning record


Requirements: Because of a change in the project, a very large number of records had to be loaded into the database in a short period of time (two or three days, growing from 0 to roughly 500 million rows). Tuning during this whole process was quite difficult.

Ideas:

(1) Improve database insert performance. The central idea: have the database write to the data files in as few passes as possible, and reduce database checkpoint operations. The following four configuration items were modified this time (a combined my.cnf sketch follows the list):
1) Set innodb_flush_log_at_trx_commit to 0. From previous experience, setting it to 0 greatly improves insert speed. The possible values are:

0: the log buffer is written to the log file and flushed to disk once per second, but nothing is done at transaction commit.
1: the log buffer is written to the log file at each transaction commit, and the flush-to-disk operation is performed on the log file.
2: the log buffer is written to the log file at each commit, but the flush-to-disk operation is not performed on it (the file is flushed roughly once per second).
2) Increase innodb_autoextend_increment from the default 8M to 128M.

The point of this setting is to control how much space MySQL automatically adds when a tablespace fills up. Every tablespace extension puts the running SQL statements into a wait state, so increasing the auto-extend increment reduces the number of automatic tablespace extensions.

3) Increase innodb_log_buffer_size from the default 1M to 16M.

This setting controls the size of the InnoDB log buffer; enlarging this buffer reduces the number of times the database must write the buffered log out to the log files.

4) Increase innodb_log_file_size from the default 8M to 128M.

This setting controls the size of the InnoDB redo log files; larger log files reduce the frequency of database checkpoint operations.
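Taken together, the four changes would look like this in my.cnf (a minimal sketch using the values from this article; defaults vary by MySQL version, and on older versions changing innodb_log_file_size requires a clean shutdown, with the old ib_logfile* files moved aside, before restart):

    [mysqld]
    # 0: write and flush the redo log about once per second, not at every commit
    innodb_flush_log_at_trx_commit = 0
    # grow a full tablespace in 128M steps (value is in MB) to cut auto-extend waits
    innodb_autoextend_increment    = 128
    # larger log buffer: fewer writes from the log buffer to the log files
    innodb_log_buffer_size         = 16M
    # larger redo log files: fewer checkpoint operations
    innodb_log_file_size           = 128M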

After the above adjustments, insert speed went from tens of thousands of rows per 10 minutes to roughly 10,000 rows per second. Note: the parameters above should be adjusted to fit the actual machine. In particular, innodb_flush_log_at_trx_commit, innodb_log_buffer_size, and innodb_log_file_size must be tuned carefully, because they are involved in MySQL's own crash-recovery process.

(2) Improve database read speed. At the database level, read speed mainly comes down to a few things: simplifying the SQL, indexing, and partitioning. A check showed our SQL was already as simple as it gets, and the query conditions were already indexed, so only one weapon was left: table partitioning.

Preparation for MySQL partitioning: in MySQL, a tablespace is the data file that stores data and indexes.
Because the S11 database was using a shared tablespace, modify it to support multiple (independent, per-table) tablespaces;

Move the wb_user_info_sina and wb_user_info_tx tables into their own independent tablespaces (Sina: 17 million rows in a 2.6G data file; Tencent: 14 million rows in a 2.3G data file);
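As a sketch, the preparation might look like the following (innodb_file_per_table only affects tables created or rebuilt after it is enabled, so the two existing tables must be rebuilt to move them out of the shared tablespace):

    -- in my.cnf: give new and rebuilt tables their own .ibd files
    -- innodb_file_per_table = 1

    -- rebuild the two big tables so each lands in its own tablespace
    ALTER TABLE wb_user_info_sina ENGINE = InnoDB;
    ALTER TABLE wb_user_info_tx   ENGINE = InnoDB;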
Partitioning operations (a SQL sketch follows this list):
First remove the existing primary key and indexes;
Recreate a composite primary key on (id, uid);
Then partition with uid as the key. Looking under /var/data/mysql at this point, you can see that the independent tablespace of each of the two large tables has been split into several smaller independent partition files. (At this stage a query with uid as the search condition gets no faster, because the partition key only determines which partition a row is stored in and does not create a partition index. I was quite depressed; compared with Oracle this falls short by more than a little.)
An index is then built on the uid field. Back in the /var/data/mysql folder, I was depressed to find that each partition file had become unexpectedly large: MySQL evidently still stores the index and the data in the same tablespace. Separating index from data would make them easier to manage.
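In SQL, the steps above might look like the following sketch (assuming an integer uid column; HASH partitioning and the partition count of 8 are illustrative assumptions, not values given in the article):

    -- 1) drop the existing primary key and index
    ALTER TABLE wb_user_info_sina DROP PRIMARY KEY;

    -- 2) recreate a composite primary key on (id, uid); MySQL requires the
    --    partition key to be part of every unique key of the table
    ALTER TABLE wb_user_info_sina ADD PRIMARY KEY (id, uid);

    -- 3) split the table by uid; this only routes rows to partitions,
    --    it does not by itself create any index on uid
    ALTER TABLE wb_user_info_sina PARTITION BY HASH (uid) PARTITIONS 8;

    -- 4) index uid so queries filtering on uid can use an index
    ALTER TABLE wb_user_info_sina ADD INDEX idx_uid (uid);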

After these adjustments, no improvement in read speed has shown up yet; updating 5K rows still basically completes in 2-3 seconds.

Supplemental notes on MySQL database insert speed tuning:

MySQL went from roughly 1,000 inserts per minute at the start to 10,000 rows per second in the end. I believe everyone has been waiting for the details, so here is the whole tuning process. The central ideas for improving database insert performance:
1. Make the database write to the data files in as few passes as possible;
2. Reduce database checkpoint operations;
3. Buffer data in the program as much as possible, inserting and committing in batches (see the sketch after this list);
4. Reduce system I/O contention.
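Point 3 lives in the application rather than in the MySQL configuration: accumulate rows in a buffer and send them as multi-row INSERT statements inside a single transaction, roughly like this (table and column names are illustrative):

    SET autocommit = 0;

    -- one statement carrying a whole buffered batch, instead of one INSERT per row
    INSERT INTO wb_user_info_sina (id, uid, nickname) VALUES
      (1, 1001, 'a'),
      (2, 1002, 'b'),
      (3, 1003, 'c');

    COMMIT;  -- one commit per batch, not per row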

Based on these four points, as an amateur DBA I made the following adjustments to the MySQL service:
Modify the configuration of the MySQL server responsible for data loading to improve its overall write speed, via the following three database variables: innodb_autoextend_increment, innodb_log_buffer_size, and innodb_log_file_size. Their default values are 5M, 8M, and 8M respectively; based on the server's memory size and usage, these three were changed to 128M, 16M, and 128M. At the same time, the original 2 log files were increased to 8 log files (the innodb_log_files_in_group variable). These modifications mainly address points 1 and 2; for example, the larger innodb_autoextend_increment avoids the MySQL checkpoint operations triggered by frequent automatic data-file extension;
Convert the large tables to independent tablespaces and partition them, then place the different partitions on several different hard disk arrays, as sketched below.
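Placing partitions on different arrays can be expressed with per-partition DATA DIRECTORY clauses, for example (a sketch; the paths and ranges are illustrative, and for InnoDB this requires innodb_file_per_table and a reasonably recent server version, while older versions honor DATA DIRECTORY only for MyISAM):

    ALTER TABLE wb_user_info_sina
      PARTITION BY RANGE (uid) (
        PARTITION p0 VALUES LESS THAN (10000000) DATA DIRECTORY = '/data/array1',
        PARTITION p1 VALUES LESS THAN (20000000) DATA DIRECTORY = '/data/array2',
        PARTITION p2 VALUES LESS THAN MAXVALUE   DATA DIRECTORY = '/data/array3'
      );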

After completing the above modifications, I saw the following pleasing test results:
Query OK, 2500000 rows affected (4 min 4.85 sec)
Records: 2500000  Duplicates: 0  Warnings: 0
Query OK, 2500000 rows affected (4 min 58.89 sec)
Records: 2500000  Duplicates: 0  Warnings: 0
Query OK, 2500000 rows affected (5 min 25.91 sec)
Records: 2500000  Duplicates: 0  Warnings: 0
Query OK, 2500000 rows affected (5 min 22.32 sec)
Records: 2500000  Duplicates: 0  Warnings: 0
Final row count in the table:
+----------+
| COUNT(*) |
+----------+
| 10000000 |
+----------+
These results show that growing data volume does have some effect on insert performance, but the overall speed is still quite respectable: 400 million rows of normal processing can be completed in less than a day. It looks like the database bottleneck has been neatly resolved!

