10 MySQL Performance Tuning Methods

MySQL is probably the most popular backend database for the web. Web development languages have evolved rapidly in recent years, and PHP, Ruby, Python, and Java each have their own strengths. Although NoSQL has drawn more attention lately, I believe most architects will still choose MySQL for data storage.

MySQL is so convenient and stable that we seldom think about it when developing web applications. Even when we do think about optimization, it is usually at the application level, for example avoiding SQL statements that consume too many resources. Yet there are many more optimizations available across the whole system.

1. Select an appropriate storage engine: InnoDB

Unless your tables are read-only or used for full-text search (and I believe few people still use MySQL for full-text search), you should choose InnoDB by default.

During testing you may find MyISAM faster than InnoDB, because MyISAM caches only indexes while InnoDB caches both data and indexes, and MyISAM does not support transactions. However, if you set innodb_flush_log_at_trx_commit = 2, InnoDB can come close (without that setting, the difference can be around a hundredfold).

1.1 How to convert an existing MyISAM database to InnoDB:
mysql -u [USER_NAME] -p -e "SHOW TABLES IN [DATABASE_NAME];" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql
perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/${1}MyISAM/g' alter_table.sql
mysql -u [USER_NAME] -p [DATABASE_NAME] < alter_table.sql
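Since the pipeline above only generates text, the generation and patch steps can be tried without a live server. A minimal sketch; the table names users, orders, and search_index are hypothetical stand-ins for the output of SHOW TABLES:

```shell
# Generate an ALTER statement per table name, then switch
# search_* tables back to MyISAM, as the perl step above does.
printf '%s\n' users orders search_index \
  | sed 's/.*/ALTER TABLE & ENGINE=InnoDB;/' \
  | sed '/^ALTER TABLE search_/s/InnoDB/MyISAM/'
# prints:
# ALTER TABLE users ENGINE=InnoDB;
# ALTER TABLE orders ENGINE=InnoDB;
# ALTER TABLE search_index ENGINE=MyISAM;
```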
1.2 Create an InnoDB FILE for each table:
innodb_file_per_table=1

This keeps the ibdata1 file from growing out of control, especially when executing mysqlcheck -o --all-databases.

2. Ensure that data is read from memory: keep the data in memory

2.1 Make innodb_buffer_pool_size large enough

We recommend storing the data entirely within innodb_buffer_pool_size, that is, sizing innodb_buffer_pool_size according to the amount of data stored. That way data can be read entirely from memory, minimizing disk operations.

2.1.1 How do I determine whether innodb_buffer_pool_size is large enough to read data from memory rather than hard disks?

Method 1

mysql> SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages_%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| Innodb_buffer_pool_pages_data    | 129037 |
| Innodb_buffer_pool_pages_dirty   | 362    |
| Innodb_buffer_pool_pages_flushed | 9998   |
| Innodb_buffer_pool_pages_free    | 0      |  !!!!!!!!
| Innodb_buffer_pool_pages_misc    | 2035   |
| Innodb_buffer_pool_pages_total   | 131072 |
+----------------------------------+--------+
6 rows in set (0.00 sec)

If Innodb_buffer_pool_pages_free is 0, the buffer pool has been used up and innodb_buffer_pool_size needs to be increased.
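The two counters can also be compared with a short script instead of reading the table by eye. A sketch; the here-document carries the sample numbers above, and in practice would be replaced by a live `mysql -NBe "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%'"` call:

```shell
# Report the percentage of free pages in the buffer pool.
awk '$1=="Innodb_buffer_pool_pages_free"{free=$2}
     $1=="Innodb_buffer_pool_pages_total"{total=$2}
     END{printf "free pages: %.1f%%\n", 100*free/total}' <<'EOF'
Innodb_buffer_pool_pages_data 129037
Innodb_buffer_pool_pages_free 0
Innodb_buffer_pool_pages_total 131072
EOF
# prints: free pages: 0.0%
```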

Other parameters of InnoDB:

innodb_additional_mem_pool_size = 1/200 of innodb_buffer_pool_size
innodb_max_dirty_pages_pct = 80%

Method 2

You can also run iostat -d -x -k 1 to watch disk activity.

2.1.2 Does the server have enough memory for this plan?

Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system's file cache, so that the actual memory usage is revealed.

2.2 Warm up the data (preloading)

By default, a page is cached in innodb_buffer_pool only after it has been accessed once. Therefore, right after the database starts, the data needs to be warmed up, that is, pushed from disk into the memory cache. Warming up the data improves read speed.

For an InnoDB database, warm up the data as follows:

1. Save the following script as MakeSelectQueriesToLoad.sql:

SELECT DISTINCT
    CONCAT('SELECT ',ndxcollist,' FROM ',db,'.',tb,
    ' ORDER BY ',ndxcollist,';') SelectQueryToLoadCache
FROM
(
    SELECT
        engine,table_schema db,table_name tb,
        index_name,GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
    FROM
    (
        SELECT
            B.engine,A.table_schema,A.table_name,
            A.index_name,A.column_name,A.seq_in_index
        FROM
            information_schema.statistics A INNER JOIN
            (
                SELECT engine,table_schema,table_name
                FROM information_schema.tables WHERE
                engine='InnoDB'
            ) B USING (table_schema,table_name)
        WHERE B.table_schema NOT IN ('information_schema','mysql')
        ORDER BY table_schema,table_name,index_name,seq_in_index
    ) A
    GROUP BY table_schema,table_name,index_name
) AA
ORDER BY db,tb;

2. Run:

mysql -uroot -AN < /root/MakeSelectQueriesToLoad.sql > /root/SelectQueriesToLoad.sql

3. Execute the following command every time you restart the database, or before a full backup, to warm up the data:

mysql -uroot < /root/SelectQueriesToLoad.sql > /dev/null 2>&1

2.3 Do not store data in swap

For a dedicated MySQL server, you can disable swap altogether. For a shared server, make sure innodb_buffer_pool_size is large enough, or lock MySQL into a fixed memory region with the memlock option.
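If disabling swap outright is not an option, two settings help keep mysqld out of swap. A configuration sketch; the swappiness value is illustrative, and memlock requires a sufficient locked-memory ulimit:

```ini
# /etc/sysctl.conf -- strongly discourage the kernel from swapping
vm.swappiness = 0

# /etc/my.cnf -- lock the mysqld process in memory
[mysqld]
memlock
```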

3. Regular optimization and table rebuilding

Running mysqlcheck -o --all-databases makes ibdata1 grow; the only true optimization is to rebuild the table structure:

CREATE TABLE mydb.mytablenew LIKE mydb.mytable;
INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;
ALTER TABLE mydb.mytable RENAME mydb.mytablezap;
ALTER TABLE mydb.mytablenew RENAME mydb.mytable;
DROP TABLE mydb.mytablezap;
4. Reduce disk write operations

4.1 Use a large enough write cache: innodb_log_file_size

Note, however, that with innodb_log_file_size set to 1 GB, crash recovery can take around 10 minutes.

We recommend that you set innodb_log_file_size to 0.25 * innodb_buffer_pool_size.
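As a worked example of that rule of thumb, assuming a 4 GB buffer pool (4096 MB, an illustrative figure), 0.25 * 4096 MB gives a 1024 MB log file:

```shell
# 0.25 * innodb_buffer_pool_size; 4096 MB is an assumed pool size
awk 'BEGIN { pool_mb = 4096; printf "innodb_log_file_size = %dM\n", pool_mb * 0.25 }'
# prints: innodb_log_file_size = 1024M
```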

4.2 innodb_flush_log_at_trx_commit

This option is closely related to disk write operations:

innodb_flush_log_at_trx_commit = 1: write and flush to disk on every transaction commit.
innodb_flush_log_at_trx_commit = 0 or 2: flush to disk once per second.

If your application is not highly safety-critical (a financial system, say), or your infrastructure is reliable enough, or transactions are very small, you can use 0 or 2 to reduce disk operations.

4.3 avoid double write buffering
innodb_flush_method=O_DIRECT

5. Improve disk read/write speed

RAID 0 is important here, especially software RAID 0 when you are on virtual disks such as EC2's EBS.

6. Make full use of indexes

6.1 View the existing table structure and indexes

SHOW CREATE TABLE db1.tb1\G

6.2 add necessary indexes

Indexes are the most direct way to increase query speed; the inverted indexes used by search engines work on the same principle.

Indexes should be added based on the actual queries, for example by examining the slow query log or the general query log, or by analyzing queries with the EXPLAIN command.
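For instance, a full table scan shows up in EXPLAIN output as type = ALL, which can be spotted mechanically. A sketch over sample (space-separated, abbreviated) EXPLAIN output; in practice the input would come from a live `mysql -e "EXPLAIN SELECT ..."` call:

```shell
# Flag any table whose access type is ALL (a full table scan).
awk 'NR>1 && $4=="ALL" {print "full table scan on: " $3}' <<'EOF'
id select_type table type possible_keys key
1 SIMPLE users ALL NULL NULL
EOF
# prints: full table scan on: users
```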

ADD UNIQUE INDEX
ADD INDEX

6.2.1 For example, optimize the user authentication table:

Add index

ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);
ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username,password);

Warm up this data every time the server restarts:

echo "select username,password from users;" > /var/lib/mysql/upcache.sql

Add a startup script to my.cnf:

[mysqld]
init-file=/var/lib/mysql/upcache.sql

6.2.2 Use a framework that adds indexes or shards tables automatically

For example, a framework like Rails adds indexes automatically, and a framework like Drupal splits table structures automatically. Such frameworks point you in the right direction from the very start of development, so building everything from scratch is not a good practice for less experienced developers.

7. analyze query logs and slow query logs

Log all queries. This is especially useful with ORMs or other systems that generate query statements for you.

log=/var/log/mysql.log

Never enable this in a production environment, or it will eat up your disk space.

Query records that have been executed for more than 1 second:

long_query_time=1
log-slow-queries=/var/log/mysql/log-slow-queries.log
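Once the slow log exists, mysqldumpslow can summarize it; for a quick look, a few lines of awk over the # Query_time: headers also work. A sketch over a sample excerpt (the queries shown are hypothetical):

```shell
# Count slow-log entries and report the worst query time.
awk '/^# Query_time:/ { n++; if ($3+0 > max) max = $3 }
     END { printf "%d slow queries, worst %.1fs\n", n, max }' <<'EOF'
# Query_time: 2.5  Lock_time: 0.0  Rows_sent: 10
SELECT * FROM users;
# Query_time: 1.2  Lock_time: 0.0  Rows_sent: 1
SELECT COUNT(*) FROM orders;
EOF
# prints: 2 slow queries, worst 2.5s
```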

8. Use memory disks aggressively

Infrastructure is now very reliable; on EC2, for example, you hardly need to worry about server hardware failures. Memory is also genuinely cheap: servers with dozens of GB of RAM are easy to buy, so you can run MySQL on a memory disk and back it up to real disk regularly.

Migrate the MySQL directory to a 4 GB memory disk:

mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/
mv /var/lib/mysql /mnt/ramdisk/mysql
ln -s /mnt/ramdisk/mysql /var/lib/mysql
chown mysql:mysql /mnt/ramdisk/mysql
9. Use MySQL the NoSQL way

The B-tree is still one of the most efficient index structures, so MySQL is not outdated.

HandlerSocket skips MySQL's SQL parsing layer, effectively turning MySQL into a NoSQL store.

10. Miscellaneous tips

Add LIMIT 1 to the end of single-row queries so the scan stops after the first match.
Separate out non-indexed data, for example store large article bodies apart so they do not slow down other queries.
Avoid MySQL built-in functions in queries, because queries containing them are not stored in the query cache.
PHP connects very quickly, so you can skip connection pooling; otherwise the connection limit may be exceeded. Of course, pooled PHP programs can also exhaust the connection limit, for example when @ignore_user_abort(TRUE) is used.
Use an IP address rather than a domain name for the database host, to avoid DNS resolution problems.
