MySQL tuning and use must-read



MySQL is probably the most popular backend database for web applications. Web development languages have evolved rapidly: PHP, Ruby, Python, and Java each have their strengths. Although NoSQL has received a lot of attention lately, I believe most architects will still choose MySQL for data storage.

MySQL is so convenient and stable that we seldom think about it while developing web applications. Even when we do think of optimization, it is usually at the application level, for example avoiding SQL statements that consume too many resources. Yet there are many more optimizations available across the whole stack.

1. Select an appropriate storage engine: InnoDB

Unless your tables are read-only or used for full-text search (and I believe few people still use MySQL for full-text search), choose InnoDB by default.

In your tests you may find MyISAM faster than InnoDB: MyISAM caches only indexes while InnoDB caches both data and indexes, and MyISAM does not support transactions. However, if you set innodb_flush_log_at_trx_commit = 2, InnoDB achieves comparable performance (the difference can otherwise be a hundredfold).

1.1 How to convert existing MyISAM tables to InnoDB:
  1. mysql -u [USER_NAME] -p -e "SHOW TABLES IN [DATABASE_NAME];" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql
  2. perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/\1MyISAM/g' alter_table.sql
  3. mysql -u [USER_NAME] -p [DATABASE_NAME] < alter_table.sql

(Step 2 reverts the full-text search_* tables back to MyISAM.)
1.2 Create a separate tablespace file for each InnoDB table:
  1. innodb_file_per_table = 1

This keeps the ibdata1 file from growing too large and getting out of control, especially when running mysqlcheck -o --all-databases.

2. Ensure that data is read from, and stays in, memory

2.1 Make innodb_buffer_pool_size large enough

We recommend sizing innodb_buffer_pool_size to hold your entire data set, so that data is read entirely from memory and disk operations are minimized.
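As a sketch of how to pick that size, one common approach is to total the data and index sizes of the InnoDB tables from information_schema (the figures are estimates based on table statistics) and set innodb_buffer_pool_size somewhat above the result:

```sql
-- Rough total of InnoDB data + indexes, in GB (an estimate, not exact).
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```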

2.1.1 How do I determine whether innodb_buffer_pool_size is large enough, so data is being read from memory rather than disk?

Method 1

  1. mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';
  2. +----------------------------------+--------+
  3. | Variable_name                    | Value  |
  4. +----------------------------------+--------+
  5. | Innodb_buffer_pool_pages_data    | 129037 |
  6. | Innodb_buffer_pool_pages_dirty   | 362    |
  7. | Innodb_buffer_pool_pages_flushed | 9998   |
  8. | Innodb_buffer_pool_pages_free    | 0      | !!!!!!!!
  9. | Innodb_buffer_pool_pages_misc    | 2035   |
  10. | Innodb_buffer_pool_pages_total   | 131072 |
  11. +----------------------------------+--------+
  12. 6 rows in set (0.00 sec)

Here Innodb_buffer_pool_pages_free is 0, which means the buffer pool has been exhausted and innodb_buffer_pool_size needs to be increased.
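Beyond the free-page count, the counters Innodb_buffer_pool_read_requests and Innodb_buffer_pool_reads (logical reads vs. reads that had to go to disk) give a cache hit ratio. A minimal sketch of the arithmetic, using the page counts above plus assumed read counters for illustration:

```python
# Sketch: interpreting the buffer pool counters from SHOW GLOBAL STATUS.
# Page counts come from the output above; the read counters are
# assumed sample values, not taken from this article.

def buffer_pool_health(pages_total, pages_free, read_requests, disk_reads):
    """Return (free page ratio, cache hit ratio) for the buffer pool."""
    free_ratio = pages_free / pages_total
    hit_ratio = 1 - disk_reads / read_requests
    return free_ratio, hit_ratio

free_ratio, hit_ratio = buffer_pool_health(
    pages_total=131072, pages_free=0,
    read_requests=1_000_000, disk_reads=5_000,
)
print(f"free pages: {free_ratio:.1%}, hit ratio: {hit_ratio:.1%}")
# A free ratio of 0%, or a hit ratio well below ~99%, suggests
# innodb_buffer_pool_size should be increased.
```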

Other relevant InnoDB parameters:

  1. innodb_additional_mem_pool_size = 1/200 of innodb_buffer_pool_size
  2. innodb_max_dirty_pages_pct = 80%

Method 2

Alternatively, use the iostat -d -x -k 1 command to watch disk activity.

2.1.2 Check whether the server has enough memory when planning

Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system's file cache and see the actual memory usage.

2.2 Warm up the data

By default, a piece of data is loaded into innodb_buffer_pool only when it is first read. So when the database has just started, the cache must be warmed up by preloading the data on disk into memory. Warming up the cache improves read speed.

For InnoDB databases, warm up the data as follows:

1. Save the following script as MakeSelectQueriesToLoad.sql:

  1. SELECT DISTINCT
  2. CONCAT('SELECT ',ndxcollist,' FROM ',db,'.',tb,
  3. ' ORDER BY ',ndxcollist,';') SelectQueryToLoadCache
  4. FROM
  5. (
  6. SELECT
  7. engine,table_schema db,table_name tb,
  8. index_name,GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
  9. FROM
  10. (
  11. SELECT
  12. B.engine,A.table_schema,A.table_name,
  13. A.index_name,A.column_name,A.seq_in_index
  14. FROM
  15. information_schema.statistics A INNER JOIN
  16. (
  17. SELECT engine,table_schema,table_name
  18. FROM information_schema.tables WHERE
  19. engine='InnoDB'
  20. ) B USING (table_schema,table_name)
  21. WHERE B.table_schema NOT IN ('information_schema','mysql')
  22. ORDER BY table_schema,table_name,index_name,seq_in_index
  23. ) A
  24. GROUP BY table_schema,table_name,index_name
  25. ) AA
  26. ORDER BY db,tb
  27. ;

2. Run

  1. mysql -uroot -AN </root/MakeSelectQueriesToLoad.sql >/root/SelectQueriesToLoad.sql

3. Run the following command every time you restart the database, or before a full database backup, to warm up the data:

  1. mysql -uroot </root/SelectQueriesToLoad.sql > /dev/null 2>&1

2.3 Keep data out of swap

If it is a dedicated MySQL server, you can disable swap. If it is a shared server, make sure innodb_buffer_pool_size is large enough, or lock a fixed region of memory for the cache with the memlock option.
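A minimal my.cnf sketch of the memlock approach (assuming a dedicated server; the mysqld process also needs a sufficient ulimit -l, or the CAP_IPC_LOCK capability, to lock memory):

```ini
[mysqld]
# Lock the mysqld process address space into RAM so the
# buffer pool can never be swapped out.
memlock
```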

3. Regularly optimize and rebuild tables

Running mysqlcheck -o --all-databases will grow ibdata1. The only true optimization is to rebuild the table structure:

  1. CREATE TABLE mydb.mytablenew LIKE mydb.mytable;
  2. INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;
  3. ALTER TABLE mydb.mytable RENAME mydb.mytablezap;
  4. ALTER TABLE mydb.mytablenew RENAME mydb.mytable;
  5. DROP TABLE mydb.mytablezap;

4. Reduce disk writes

4.1 Use a large enough write buffer: innodb_log_file_size

Note, however, that with a large innodb_log_file_size, recovery after a server crash may take ten minutes or more.

We recommend setting innodb_log_file_size to 0.25 * innodb_buffer_pool_size.
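The rule of thumb above is simple arithmetic; a quick sketch (sizes in bytes, values chosen for illustration only):

```python
# Sketch of the 0.25 * buffer pool rule of thumb from this article.

def recommended_log_file_size(buffer_pool_bytes):
    """Return an innodb_log_file_size per the article's recommendation."""
    return int(0.25 * buffer_pool_bytes)

GB = 1024 ** 3
# An 8 GB buffer pool -> a 2 GB redo log file.
print(recommended_log_file_size(8 * GB) // GB)
```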

4.2 innodb_flush_log_at_trx_commit

This option is closely related to disk write operations:

innodb_flush_log_at_trx_commit = 1: every modification is flushed to disk at commit.
innodb_flush_log_at_trx_commit = 0 or 2: the log is flushed to disk once per second.

If your application is not highly security-critical (e.g., not a financial system), or your infrastructure is reliable enough, or your transactions are very small, you can use 0 or 2 to reduce disk operations.
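A minimal my.cnf sketch of the trade-off (keep one line active at a time):

```ini
[mysqld]
# Full durability: flush the redo log to disk at every commit (default).
#innodb_flush_log_at_trx_commit = 1

# Fewer disk syncs: flush roughly once per second; up to ~1 s of
# committed transactions can be lost on a crash.
innodb_flush_log_at_trx_commit = 2
```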

4.3 Avoid double buffering
  1. innodb_flush_method = O_DIRECT
5. Improve disk read/write speed

Use RAID 0; software RAID 0 is especially worthwhile when using virtual disks such as EC2's EBS.

6. Make full use of indexes

6.1 View the existing table structure and indexes
  1. SHOW CREATE TABLE db1.tb1\G

6.2 add necessary Indexes

Indexes are the only way to increase query speed; search engines rely on the same principle with inverted indexes.

Add indexes based on the queries you actually run: use the slow query log or the general query log, together with the EXPLAIN command, to analyze queries and decide which indexes to add.

  1. ADD UNIQUE INDEX
  2. ADD INDEX
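As an illustration of that workflow (the orders table and its column are hypothetical), run EXPLAIN on a query found in the slow query log and check whether the key column shows an index:

```sql
-- If "type" is ALL and "key" is NULL, the query does a full table scan.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Add an index on the filtered column, then re-run EXPLAIN to confirm.
ALTER TABLE orders ADD INDEX customer_id_ndx (customer_id);
```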
6.2.1 Example: optimizing a user authentication table

Add indexes:

  1. ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);
  2. ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username,password);

Warm up this data every time the server restarts:

  1. echo "select username,password from users;" > /var/lib/mysql/upcache.sql

Add a startup script entry to my.cnf:

  1. [mysqld]
  2. init-file=/var/lib/mysql/upcache.sql

6.2.2 Use a framework that adds indexes or shards tables automatically

Frameworks such as Rails add indexes automatically, and frameworks such as Drupal split table structures automatically. They point you in the right direction from the start of development, so starting completely from scratch is not a good practice for less experienced developers.

7. Analyze query logs and slow query logs

Record all queries; this is useful in ORM systems or systems that generate query statements.

  1. log=/var/log/mysql.log

Be sure not to enable this in the production environment, or it will fill up your disk space.

Log queries that take longer than one second to execute:

  1. long_query_time=1
  2. log-slow-queries=/var/log/mysql/log-slow-queries.log

8. Use memory disks aggressively

Infrastructure reliability is now very high; on EC2, for example, you hardly need to worry about server hardware. Memory is also really cheap, and servers with dozens of GB of RAM are easy to buy, so you can run the database on a memory disk and back it up to physical disk regularly.

Migrate the MySQL data directory to a 4 GB memory disk:

  1. mkdir -p /mnt/ramdisk
  2. sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/
  3. mv /var/lib/mysql /mnt/ramdisk/mysql
  4. ln -s /mnt/ramdisk/mysql /var/lib/mysql
  5. chown mysql:mysql /mnt/ramdisk/mysql

9. Use MySQL as NoSQL

B-TREE is still one of the most efficient index structures, so MySQL is by no means outdated.

Use HandlerSocket to skip MySQL's SQL parsing layer, and MySQL becomes a NoSQL store.

10. Others
  • Add LIMIT 1 to queries that need only one row, so the scan stops at the first match.
  • Store non-indexed bulk data, such as large article bodies, in separate tables so it does not slow down other queries.
  • Avoid MySQL built-in functions in queries where possible, because queries containing nondeterministic functions are not served from the query cache.
  • PHP connects very quickly, so you can do without a connection pool; otherwise the connection limit may be exceeded. Of course, a pooled PHP application may also run out of connections.
  • When connections are exhausted, use @ignore_user_abort(TRUE);
  • Use an IP address rather than a domain name for the database host, to avoid DNS resolution problems.
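For instance, the LIMIT 1 tip above, sketched with a hypothetical users table whose email column has no index:

```sql
-- Without LIMIT 1, MySQL scans the whole table even after finding a match;
-- with it, the scan stops at the first matching row.
SELECT id FROM users WHERE email = 'a@example.com' LIMIT 1;
```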
11. End

You will find that after optimization the database performance improves by several times to several hundred times, so MySQL suits most scenarios. Optimizing an existing system costs far less than restructuring it or migrating to NoSQL.
