MySQL tuning and usage: a must-read

Source: Internet
Author: User
Tags: learn php

MySQL is probably the most popular back-end database for the web. Web development languages have grown rapidly in recent years, with PHP, Ruby, Python, and Java each having their own strengths, and although NoSQL has been mentioned more often lately, most architects will still choose MySQL for data storage.

MySQL is so convenient and stable that we seldom think about it when developing web applications. Even when we do think about optimization, it is usually at the program level, for example not writing overly resource-intensive SQL statements. Beyond that, though, there is still a lot that can be optimized across the whole system.

1. Choose the appropriate storage engine: InnoDB

Unless your table is read-only or needs full-text search (these days, few people would still use MySQL for full-text search), you should choose InnoDB by default.

You may find that MyISAM is faster than InnoDB in testing. This is because MyISAM caches only indexes while InnoDB caches both data and indexes, and because MyISAM does not support transactions. But if you set innodb_flush_log_at_trx_commit = 2, you can get comparable read performance.

1.1 How to convert an existing MyISAM database to InnoDB:

mysql -u [user_name] -p -e "SHOW TABLES IN [database_name];" | tail -n +2 | xargs -I {} echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql

perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/\1MyISAM/g' alter_table.sql

mysql -u [user_name] -p [database_name] < alter_table.sql

(The perl step reverts tables whose names begin with search_ back to MyISAM, e.g. full-text search tables.)

1.2 Create a separate file for each InnoDB table:

innodb_file_per_table = 1

This ensures that the ibdata1 file does not grow out of control, which matters especially when running mysqlcheck -o --all-databases.

2. Ensure that data is read from memory: keep the data in memory

2.1 A large enough innodb_buffer_pool_size

It is recommended that the data fit entirely in the buffer pool, i.e., plan innodb_buffer_pool_size from the total size of the stored data. Data can then be read entirely from memory, minimizing disk I/O.
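As a rough sizing sketch (the 10% headroom factor here is an assumption, not an official rule), the total InnoDB data-plus-index size can be turned into a candidate buffer pool size:

```shell
# Sketch: suggest an innodb_buffer_pool_size in MB from the total InnoDB
# data + index bytes, with ~10% headroom (the headroom factor is an assumption).
# On a live server the byte total would come from a query such as:
#   SELECT SUM(data_length + index_length) FROM information_schema.tables
#   WHERE engine = 'InnoDB';
suggest_pool_mb() {
    bytes=$1
    echo $(( bytes * 110 / 100 / 1024 / 1024 ))
}

suggest_pool_mb 3221225472   # e.g. 3 GiB of data and indexes
```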

2.1.1 How do you determine whether innodb_buffer_pool_size is large enough, i.e. that data is being read from memory rather than from the hard disk?

Method 1

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| Innodb_buffer_pool_pages_data    | 129037 |
| Innodb_buffer_pool_pages_dirty   | 362    |
| Innodb_buffer_pool_pages_flushed | 9998   |
| Innodb_buffer_pool_pages_free    | 0      |  <-- !!
| Innodb_buffer_pool_pages_misc    | 2035   |
| Innodb_buffer_pool_pages_total   | 131072 |
+----------------------------------+--------+
6 rows in set (0.00 sec)

When Innodb_buffer_pool_pages_free reaches 0, the buffer pool has been exhausted and innodb_buffer_pool_size needs to be increased.
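As a quick interpretation aid, the two counters can be turned into a free-space percentage (the values below are hard-coded from the example output above; on a live server they would come from the same SHOW GLOBAL STATUS query):

```shell
# Sketch: percentage of free buffer pool pages, using the example's numbers.
free_pages=0
total_pages=131072
awk -v f="$free_pages" -v t="$total_pages" \
    'BEGIN { printf "buffer pool free: %.1f%%\n", 100 * f / t }'
```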

A few other InnoDB parameters:

innodb_additional_mem_pool_size = 1/200 of innodb_buffer_pool_size
innodb_max_dirty_pages_pct = 80

Method 2

Alternatively, use the iostat -d -x -k 1 command to watch the disk activity.

2.1.2 Does the server have enough memory for this plan?

Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system's file cache, then check the real memory usage.

2.2 Data preheating

By default, a row is cached in innodb_buffer_pool only after it has been read once. So right after the database starts, the data needs to be preheated so that everything on disk is cached into memory. Preheating improves read speed.

For an InnoDB database, the data can be preheated as follows:

1. Save the following script as makeselectqueriestoload.sql

SELECT DISTINCT
    CONCAT('SELECT ', ndxcollist, ' FROM ', db, '.', tb,
           ' ORDER BY ', ndxcollist, ';') SelectQueryToLoadCache
FROM
(
    SELECT
        engine, table_schema db, table_name tb,
        index_name, GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
    FROM
    (
        SELECT
            B.engine, A.table_schema, A.table_name,
            A.index_name, A.column_name, A.seq_in_index
        FROM
            information_schema.statistics A INNER JOIN
            (
                SELECT engine, table_schema, table_name
                FROM information_schema.tables
                WHERE engine = 'InnoDB'
            ) B USING (table_schema, table_name)
        WHERE B.table_schema NOT IN ('information_schema', 'mysql')
        ORDER BY table_schema, table_name, index_name, seq_in_index
    ) A
    GROUP BY table_schema, table_name, index_name
) AA
ORDER BY db, tb;

2. Run it:

mysql -uroot -AN < /root/makeselectqueriestoload.sql > /root/selectqueriestoload.sql

3. Run this every time the database restarts, or before backing up the whole database if preheating is needed:

mysql -uroot < /root/selectqueriestoload.sql > /dev/null 2>&1

2.3 Do not let the data end up in SWAP

On a dedicated MySQL server, you can disable SWAP entirely. On a shared server, make sure innodb_buffer_pool_size is sized so the buffer pool stays in physical memory, or lock a fixed region of memory for the cache with the memlock directive.
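A minimal my.cnf sketch of the memlock approach (assumes Linux; locking the process in memory requires a sufficient ulimit -l, or the CAP_IPC_LOCK capability, for the user running mysqld):

```ini
# my.cnf: lock the mysqld process, buffer pool included, into physical RAM
# so the operating system cannot swap it out
[mysqld]
memlock
```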

3. Periodically optimize and rebuild the database

mysqlcheck -o --all-databases makes ibdata1 keep growing; real optimization comes only from rebuilding the table structure:

CREATE TABLE mydb.mytablenew LIKE mydb.mytable;

INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;

ALTER TABLE mydb.mytable RENAME mydb.mytablezap;

ALTER TABLE mydb.mytablenew RENAME mydb.mytable;

DROP TABLE mydb.mytablezap;
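Note that the copy above is not atomic: rows written to mydb.mytable after the INSERT ... SELECT starts are lost. Where that matters, the same defragmentation can be done in place with a single statement (a null ALTER rebuilds the table and its indexes; combined with innodb_file_per_table = 1 from section 1.2, it also shrinks the table's file):

```sql
-- In-place rebuild, equivalent in effect to the copy-and-rename sequence above
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```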

4. Reduce disk write operations

4.1 Use a large enough write cache: innodb_log_file_size

Note, however, that with a 1 GB innodb_log_file_size, crash recovery after a server failure can take about 10 minutes.

Recommended: innodb_log_file_size = 0.25 * innodb_buffer_pool_size

4.2 Innodb_flush_log_at_trx_commit

This option is closely related to the write disk operation:

innodb_flush_log_at_trx_commit = 1: flush the log to disk on every commit

innodb_flush_log_at_trx_commit = 0 or 2: flush the log to disk once per second

If your application is not safety-critical (e.g., not a financial system), or your infrastructure is reliable enough, or your transactions are very small, you can use 0 or 2 to reduce disk operations.

4.3 Avoid double buffering

innodb_flush_method = O_DIRECT

O_DIRECT bypasses the operating system's file cache, so data is not buffered twice (once by InnoDB and once by the OS).

5. Improve disk read and write speed

RAID 0: especially when using EC2 virtual disks (EBS), using software RAID 0 is very important.

6. Full use of the index

6.1 View Existing table structures and indexes

SHOW CREATE TABLE db1.tb1\G

6.2 Add the necessary indexes

Adding an index is the simplest and most effective way to speed up queries; the inverted index used by search engines works on the same principle.

Which indexes to add should be determined from the actual queries, for example via the slow query log or the general query log, or by analyzing queries with the EXPLAIN command.

ADD UNIQUE INDEX

ADD INDEX

6.2.1 For example, optimizing a user authentication table:

Add indexes:

ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);

ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username, password);

Preheat the data every time the server restarts:

echo "SELECT username, password FROM users;" > /var/lib/mysql/upcache.sql

Add a startup directive to my.cnf:

[mysqld]

init-file = /var/lib/mysql/upcache.sql
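To confirm the new indexes are actually used, check the queries with EXPLAIN (the literal 'alice' is a placeholder):

```sql
EXPLAIN SELECT username, password FROM users WHERE username = 'alice';
-- The "key" column shows which index was chosen (e.g. username_ndx),
-- and type = const indicates a unique-index lookup rather than a full scan.
```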

6.2.2 Use frameworks that index or split tables automatically

For example, a framework like Rails adds indexes automatically, and one like Drupal splits table structures automatically. They point you in the right direction from the very start of development, so less experienced developers insisting on building everything from scratch is actually bad practice.

7. Analyze query logs and slow query logs

Log all queries; this is especially useful for systems that use an ORM or that generate query statements.

log = /var/log/mysql.log

Be careful not to enable this in production, or it will fill up your disk.

Log queries that take more than 1 second to execute:

long_query_time = 1

log-slow-queries = /var/log/mysql/log-slow-queries.log
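Once the slow log has some entries, the worst offenders can be pulled out quickly. mysqldumpslow (shipped with MySQL) gives full summaries; the sketch below just lists the slowest recorded times, assuming the standard "# Query_time: ..." header lines that mysqld writes:

```shell
# Sketch: print the five largest Query_time values from a slow query log.
slowest_queries() {
    awk '/^# Query_time:/ { print $3 }' "$1" | sort -rn | head -5
}

# Usage: slowest_queries /var/log/mysql/log-slow-queries.log
```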

8. An aggressive method: use a RAM disk

Infrastructure is highly reliable these days; on EC2, for example, there is little need to worry about server hardware. Memory is also genuinely cheap, and it is easy to buy a server with dozens of GB of it, so you can run MySQL from a RAM disk and back it up to real disk regularly.

Migrate the MySQL directory to a 4 GB RAM disk:

mkdir -p /mnt/ramdisk

sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/

mv /var/lib/mysql /mnt/ramdisk/mysql

ln -s /mnt/ramdisk/mysql /var/lib/mysql

chown mysql:mysql /mnt/ramdisk/mysql
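Because tmpfs contents vanish on reboot or power loss, the "back it up regularly" part is essential. A hypothetical cron entry (path and interval are assumptions; for a consistent copy the data should be quiesced, e.g. via FLUSH TABLES WITH READ LOCK or a proper mysqldump):

```
# crontab sketch: copy the RAM-disk datadir to persistent storage every 10 minutes
*/10 * * * * rsync -a /mnt/ramdisk/mysql/ /var/backups/mysql/
```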

9. Using MySQL in a NoSQL way

The B-tree is still one of the most efficient index structures, so MySQL is far from obsolete.

By using HandlerSocket to skip MySQL's SQL parsing layer, MySQL really can serve as a NoSQL store.
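A sketch of what enabling HandlerSocket looks like (parameter names are from the HandlerSocket project's documentation; 9998 is conventionally the read port and 9999 the write port; verify against the version you install):

```ini
# my.cnf: load the HandlerSocket plugin and open its read/write ports
[mysqld]
plugin-load = handlersocket.so
loose_handlersocket_port = 9998       # read requests
loose_handlersocket_port_wr = 9999    # write requests
```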

Other tips

Append LIMIT 1 to queries that need only a single row, so the scan stops as soon as a match is found.

Separate out non-"indexed" data; for example, store large article bodies apart from the main table so they do not slow down other queries.

Avoid non-deterministic MySQL built-in functions such as NOW() in queries, because such queries cannot be served from the query cache.

PHP establishes connections very quickly, so a connection pool may be unnecessary and can even exhaust the connection limit. Of course, a PHP application without a pool can also use up all connections, for example when using @ignore_user_abort(TRUE);.

Use an IP address instead of a hostname for the database host, to avoid DNS resolution problems.


You will find that, after optimization, database performance can improve by several times to hundreds of times. MySQL therefore still suits the vast majority of application scenarios, and the cost of optimizing an existing system is much lower than refactoring it or migrating to NoSQL.


