A detailed look at MySQL database performance tuning on Linux

The following environment is representative of the typical configuration and workload of a small or medium-sized enterprise. I hope this article helps you understand MySQL database performance tuning on Linux.

Hardware preparation environment:

Hard disks: 16 × 15K RPM SAS drives in RAID 5, with 512 MB cache

CPU: two 4-core AMD processors

Memory: 16 GB

Software environment:

Operating system: Red Hat AS 4.6 (kernel 2.6.9-67.ELsmp)

MySQL: 5.0

Performance tuning for different scenarios is described below.

Scenario:

Write operations (inserts, deletes, and queries) are intensive, and the data volume far exceeds the memory size (at minimum well beyond the GB scale, possibly larger than a TB). The system must sustain thousands of transactions per second.

Data reliability requirements are modest: losing a few records in the event of a hardware failure is acceptable.

Optimization Method:

In this case the database cache hit rate is very low and most operations go straight to disk, so optimization should focus on disk I/O.

First, consider the storage engine. Because the workload involves inserts, deletes, and queries and must support transactions, InnoDB is the appropriate choice. MyISAM is ruled out because it uses table-level locking and is very slow at deleting data.

Next, set appropriate MySQL parameters. With 16 GB of memory, the settings are as follows:

 
 
  innodb_buffer_pool_size = 8G
  innodb_additional_mem_pool_size = 800M
  innodb_log_buffer_size = 4M
  innodb_log_file_size = 500M
  innodb_log_files_in_group = 4
  log-bin

Because master-slave replication is used, log-bin must be enabled. This raises the question of sync_binlog. By default, MySQL does not force the binary log to disk: the log is written into the operating system's file-system cache, so if the machine loses power, part of the log can be lost. To bound that loss, we tested sync_binlog = 1, sync_binlog = 5, and sync_binlog = 20.

With sync_binlog = 1, at worst one transaction's data is lost, but performance is very poor. With sync_binlog = 5, performance roughly doubles; with sync_binlog = 20, it roughly doubles again relative to 5. Values above 20 make little further difference. Losing at most 20 transactions in the worst case is within our tolerance, so 20 is a reasonable value here; the right setting depends on how much transaction data your system can afford to lose in the worst case.
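Putting the settings discussed so far together, a my.cnf fragment for this scenario might look like the following sketch (the values are the ones from this article; the log file name is a placeholder, and everything should be adjusted to your own hardware):

```ini
[mysqld]
# InnoDB memory and log sizing for a 16 GB box (article's values)
innodb_buffer_pool_size         = 8G
innodb_additional_mem_pool_size = 800M
innodb_log_buffer_size          = 4M
innodb_log_file_size            = 500M
innodb_log_files_in_group       = 4

# Binary log for replication; sync every 20 commits
# (worst case: ~20 transactions lost on power failure)
log-bin     = mysql-bin
sync_binlog = 20
```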

If your system does not care about this kind of data loss, you can leave sync_binlog disabled. In that case, when data actually reaches the disk depends on the operating system's configuration. The relevant kernel parameters are as follows:

 
 
  /proc/sys/vm/dirty_ratio

This parameter controls how much of system memory may be filled with dirty file-system write-buffer data, expressed as a percentage of total memory, before a process performing writes is forced to write dirty data out to disk itself. Increasing it lets the system use more memory for write buffering, which can greatly improve write performance; if you need sustained, continuous writes, you should lower it instead. The default is 40. To change it:

echo 30 > /proc/sys/vm/dirty_ratio (or add vm.dirty_ratio = 30 to /etc/sysctl.conf so the setting survives a reboot)

 
 
  /proc/sys/vm/dirty_background_ratio

This parameter controls when the pdflush background process starts flushing to disk. It is also a percentage of total system memory: once dirty data buffered in memory exceeds this fraction, pdflush begins writing it to disk in the background. Increasing it allows more system memory to be used for write buffering, which can also greatly improve write performance; again, for sustained continuous writes you should lower it. The default is 10.

 
 
  /proc/sys/vm/dirty_writeback_centisecs

This parameter controls how often the pdflush background process wakes up to write dirty data to disk, in hundredths of a second. The default is 500, i.e. 5 seconds. If your system writes data continuously, it is better to lower the value, so that peak write bursts are split into several smaller flushes. The setting method is as follows:

 
 
  echo 200 > /proc/sys/vm/dirty_writeback_centisecs

  /proc/sys/vm/dirty_expire_centisecs

This parameter declares how long data may sit in the kernel's write buffer before it counts as "old" and pdflush starts considering it for writeback. The unit is hundredths of a second. The default is 3000, i.e. data older than 30 seconds is flushed to disk. For especially write-heavy workloads it also helps to lower this value, but not by too much, because an overly small value makes I/O spike. A reasonable setting is 1500, i.e. data counts as old after 15 seconds:

 
 
  echo 1500 > /proc/sys/vm/dirty_expire_centisecs
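To make these kernel settings persist across reboots, they can be collected in /etc/sysctl.conf and applied with sysctl -p. A sketch using the values suggested in this article (dirty_background_ratio is shown at its default; lower it only if you need sustained continuous writes):

```
vm.dirty_ratio               = 30
vm.dirty_background_ratio    = 10
vm.dirty_writeback_centisecs = 200
vm.dirty_expire_centisecs    = 1500
```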

So with these parameters at their defaults and sync_binlog disabled, the most data that can be lost is:

whatever is written within 5 seconds (the default dirty_writeback_centisecs of 500), bounded by 1.6 GB (the default dirty_background_ratio of 10%: 16 GB / 10 = 1.6 GB)

Of course, 1.6 GB is very unlikely to be written within five seconds, so in practice the worst case is about five seconds of data. Therefore, if you want to disable sync_binlog but not lose too much data, you can lower dirty_writeback_centisecs, for example to 200 (2 seconds); then at most about 2 seconds of data is lost, while write throughput still improves.
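The worst-case bound above is simple arithmetic; here is a quick shell sketch using the article's example box (16 GB of memory, default dirty_background_ratio of 10%):

```shell
#!/bin/sh
# Worst-case amount of dirty data sitting in memory before background
# flushing kicks in, for the article's example machine.
mem_gb=16                   # total system memory, in GB
dirty_background_ratio=10   # percent of memory the kernel lets go dirty

# Maximum unflushed data, in MB:
max_dirty_mb=$(( mem_gb * 1024 * dirty_background_ratio / 100 ))
echo "worst-case unflushed data: ${max_dirty_mb} MB"   # 1638 MB, i.e. ~1.6 GB
```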

Another MySQL parameter can be tuned to improve the database's write performance:

 
 
  innodb_flush_log_at_trx_commit

The default value of this parameter is 1: the log is flushed to disk at every transaction commit, so committed transactions are not lost. Because our system tolerates a small amount of data loss, we can set innodb_flush_log_at_trx_commit to 2, which writes the log at commit but only flushes it to disk about once per second, so roughly the last second of transactions can be lost. In our tests, the value 2 improved performance by about 25%.

In addition, mounting the file system with the noatime option avoids updating the access time on every read and recovers some file-system performance (dedicated database servers generally use noatime).
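As a sketch, the data partition's /etc/fstab entry might look like this (the device name, mount point, and file system are placeholders, not from the article; ext3 was typical on Red Hat AS 4-era systems):

```
# device      mount point        fs     options            dump pass
/dev/sdb1     /var/lib/mysql     ext3   defaults,noatime   0    0
```

After editing fstab, remount the partition (mount -o remount /var/lib/mysql) for the option to take effect without a reboot.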

After long periods of deletes and updates, tables tend to fragment: index space becomes less compact, more disk space is used, and queries slow down. The solution is to rebuild the tables periodically with the following statement:

 
 
  ALTER TABLE tbl_name ENGINE=INNODB

In addition, if your SQL statements include ORDER BY or GROUP BY, you may need to increase sort_buffer_size.

This buffer is allocated per connection: each client that runs a sort or group query gets its own sort_buffer_size allocation, so be careful when there are many connections. You can gauge whether the buffer is large enough by watching Sort_merge_passes and Created_tmp_tables in SHOW GLOBAL STATUS.
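The two counters mentioned above can be read from any MySQL client; a sketch:

```sql
-- Merge passes the sort algorithm needed (sorts that spilled to disk):
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';

-- Temporary tables created for queries (includes on-disk ones):
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
```

If Sort_merge_passes grows steadily under load, sort_buffer_size is likely too small for the workload.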
