Reprinted from: http://blog.eood.cn/mysql
MySQL is probably the most popular back-end database for the web. Web development languages such as PHP, Ruby, Python, and Java have all grown rapidly, and although NoSQL gets mentioned more and more these days, most architects will still choose MySQL for data storage.
MySQL is so convenient and stable that we rarely think about it while developing web applications. Even when we do think about optimization, it is usually at the application level, for example avoiding overly resource-intensive SQL statements. Beyond that, there is still a lot that can be optimized across the whole system.
1. Select the appropriate storage engine: InnoDB
Unless your tables are read-only or used for full-text search (and these days, when full-text search comes up, hardly anyone reaches for MySQL), you should choose InnoDB by default.
You may find in your own tests that MyISAM is faster than InnoDB, because MyISAM only caches indexes while InnoDB caches both data and indexes, and MyISAM does not support transactions. But if you set innodb_flush_log_at_trx_commit = 2, you can get performance close to MyISAM's.
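To see which tables are still on MyISAM before converting, a quick information_schema query is enough (nothing here is specific to your schema):

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('information_schema', 'mysql');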
1.1 How to convert an existing MyISAM database to InnoDB:
mysql -u [user_name] -p -e "SHOW TABLES IN [database_name];" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql
perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/\1MyISAM/g' alter_table.sql
mysql -u [user_name] -p [database_name] < alter_table.sql
1.2 Create a separate InnoDB file for each table:
innodb_file_per_table = 1
This keeps the ibdata1 file from growing too large and out of control, especially when running mysqlcheck -o --all-databases.
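Note that innodb_file_per_table only affects tables created after it is enabled. To move an existing table into its own .ibd file, rebuild it (mydb.mytable is just an example name):

ALTER TABLE mydb.mytable ENGINE=InnoDB;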
2. Make sure data is stored in, and read from, memory
2.1 A large enough innodb_buffer_pool_size
Ideally, all of the data fits inside innodb_buffer_pool_size; in other words, plan the capacity of innodb_buffer_pool_size according to the amount of data you store. That way data is read entirely from memory and disk operations are minimized.
2.1.1 How to tell whether innodb_buffer_pool_size is large enough and data is being read from memory rather than disk
Method 1
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| Innodb_buffer_pool_pages_data    | 129037 |
| Innodb_buffer_pool_pages_dirty   | 362    |
| Innodb_buffer_pool_pages_flushed | 9998   |
| Innodb_buffer_pool_pages_free    | 0      |  <-- !!!!!!!!
| Innodb_buffer_pool_pages_misc    | 2035   |
| Innodb_buffer_pool_pages_total   | 131072 |
+----------------------------------+--------+
6 rows in set (0.00 sec)
When Innodb_buffer_pool_pages_free is 0, the buffer pool is exhausted and innodb_buffer_pool_size needs to be increased.
A few other InnoDB parameters:
innodb_additional_mem_pool_size = 1/200 of the buffer pool size
innodb_max_dirty_pages_pct = 80
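Putting the memory-related settings together, here is a my.cnf sketch for a server dedicated to MySQL; the sizes are purely illustrative assumptions, not recommendations for your hardware:

[mysqld]
innodb_buffer_pool_size         = 12G   # assuming a dedicated 16G server
innodb_additional_mem_pool_size = 64M   # roughly 1/200 of the buffer pool
innodb_max_dirty_pages_pct      = 80    # allow up to 80% dirty pages before aggressive flushing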
Method 2
Alternatively, use the iostat -d -x -k 1 command to watch disk activity.
2.1.2 Does the server have enough memory for this plan?
Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system's file cache and see the real memory usage.
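For example (as root, and preferably not on a busy production box, since dropping the cache temporarily hurts performance):

sync                                 # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches    # drop the OS page cache
free -m                              # now shows the memory actually used by processes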
2.2 Warming up the data
By default, a piece of data is only cached in innodb_buffer_pool after it has been read once. So, right after the database starts, the data needs to be warmed up so that everything on disk gets cached in memory. Warming up improves read speed.
For InnoDB, the data can be warmed up as follows:
1. Save the following script as makeselectqueriestoload.sql:
SELECT DISTINCT
    CONCAT('SELECT ', ndxcollist, ' FROM ', db, '.', tb, ' ORDER BY ', ndxcollist, ';') SelectQueryToLoadCache
FROM (
    SELECT engine, table_schema db, table_name tb, index_name,
           GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
    FROM (
        SELECT B.engine, A.table_schema, A.table_name, A.index_name, A.column_name, A.seq_in_index
        FROM information_schema.statistics A
        INNER JOIN (
            SELECT engine, table_schema, table_name
            FROM information_schema.tables
            WHERE engine = 'InnoDB'
        ) B USING (table_schema, table_name)
        WHERE B.table_schema NOT IN ('information_schema', 'mysql')
        ORDER BY table_schema, table_name, index_name, seq_in_index
    ) A
    GROUP BY table_schema, table_name, index_name
) AA
ORDER BY db, tb;
2. Run it:
mysql -uroot -AN < /root/makeselectqueriestoload.sql > /root/selectqueriestoload.sql
3. Run this every time you restart the database, or whenever you need to warm up before a full backup:
mysql -uroot < /root/selectqueriestoload.sql > /dev/null 2>&1
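To run the warm-up automatically at every restart, you can reuse the init-file mechanism shown in section 6.2.1 below; just copy the generated SQL file somewhere the mysql user can read, e.g. /var/lib/mysql/, and point init-file at it:

[mysqld]
init-file=/var/lib/mysql/selectqueriestoload.sql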
2.3 Do not let the data end up in swap
If the server is dedicated to MySQL, you can disable swap. If it is a shared server, make sure innodb_buffer_pool_size is large enough, or pin the cache in a fixed region of memory with the memlock directive.
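A sketch of both options; whether you can disable swap or lock memory depends on your environment, so treat these as assumptions to verify:

swapoff -a                        # dedicated server: turn swap off entirely
echo 0 > /proc/sys/vm/swappiness  # or tell the kernel to avoid swapping

# my.cnf: lock mysqld's memory so it cannot be swapped out
[mysqld]
memlock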
3. Periodically optimize and rebuild the database
mysqlcheck -o --all-databases keeps making ibdata1 grow; real optimization means rebuilding the table structure:
CREATE TABLE mydb.mytablenew LIKE mydb.mytable;
INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;
ALTER TABLE mydb.mytable RENAME mydb.mytablezap;
ALTER TABLE mydb.mytablenew RENAME mydb.mytable;
DROP TABLE mydb.mytablezap;
4. Reduce disk write operations
4.1 Use a large enough write cache: innodb_log_file_size
Note, however, that if you use a 1G innodb_log_file_size and the server crashes, recovery can take around 10 minutes.
Recommended: innodb_log_file_size = 0.25 * innodb_buffer_pool_size
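For example, a (hypothetical) 2G buffer pool would call for roughly a 512M log file. Keep in mind that on older MySQL versions, changing innodb_log_file_size requires a clean shutdown and removing the old ib_logfile* files before restarting:

[mysqld]
innodb_log_file_size = 512M   # about 0.25 x an assumed 2G buffer pool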
4.2 innodb_flush_log_at_trx_commit
This option is closely related to disk write operations:
innodb_flush_log_at_trx_commit = 1: flush the log to disk on every commit
innodb_flush_log_at_trx_commit = 0 or 2: flush the log to disk roughly once per second
If your application does not require very high durability (as a financial system would), or your infrastructure is reliable enough, or the transactions are small, you can use 0 or 2 to reduce disk operations.
4.3 Avoid double buffering
innodb_flush_method = O_DIRECT
5. Improve disk read and write speed
Use RAID0. Software RAID0 is especially important when you are on virtualized disks such as EBS on EC2.
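A rough sketch of striping two EBS volumes with mdadm; the device names /dev/xvdf and /dev/xvdg are assumptions, use whatever EC2 assigns:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid0 && mount /dev/md0 /mnt/raid0
# then move the MySQL data directory onto /mnt/raid0 (with mysqld stopped)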
6. Make full use of indexes
6.1 View the existing table structure and indexes
SHOW CREATE TABLE db1.tb1\G
6.2 Adding the necessary indexes
Adding indexes is essentially the only way to improve query speed; a search engine's inverted index is based on the same principle.
Which indexes to add should be determined by the queries, for example from the slow query log or the general query log, or by analyzing individual queries with the EXPLAIN command.
ADD UNIQUE INDEX
ADD INDEX
6.2.1 For example, optimize the user authentication table:
Add index
ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);
ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username, password);
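To confirm that a query actually uses the new index, run EXPLAIN on the login query (the literal values here are placeholders):

EXPLAIN SELECT username FROM users
WHERE username = 'alice' AND password = 'hashed_pw'\G

The key column of the output should show username_password_ndx.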
Warm up the data every time the server restarts:
echo "Select Username,password from Users;" >/var/lib/mysql/upcache.sql
Add the startup script to my.cnf:
[mysqld]
init-file=/var/lib/mysql/upcache.sql
6.2.2 Use a framework that adds indexes or splits table structures automatically
For example, a framework like Rails adds indexes automatically, and a framework like Drupal splits table structures automatically. They point you in the right direction from the very start of development, so for less experienced developers, insisting on building everything from scratch is actually a bad practice.
7. Analyze query logs and slow query logs
Log all queries; this is especially useful for systems that use an ORM or otherwise generate their SQL statements.
log = /var/log/mysql.log
Be careful not to use it in a production environment, or it will fill your disk space.
Log queries that take more than 1 second to execute:
long_query_time = 1
log-slow-queries = /var/log/mysql/log-slow-queries.log
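The slow log can then be summarized with the mysqldumpslow tool that ships with MySQL, for example to show the ten slowest query patterns:

mysqldumpslow -s t -t 10 /var/log/mysql/log-slow-queries.log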
8. A more aggressive approach: use a RAM disk
Infrastructure reliability is now very high; on EC2, for example, there is almost no need to worry about server hardware. Memory is also really cheap, and servers with dozens of GB of RAM are easy to get, so you can keep the data on a RAM disk and back it up to real disk regularly.
Migrate the MySQL data directory to a 4G RAM disk:
mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/
mv /var/lib/mysql /mnt/ramdisk/mysql
ln -s /mnt/ramdisk/mysql /var/lib/mysql
chown mysql:mysql /mnt/ramdisk/mysql
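Since tmpfs contents disappear on reboot or power loss, the regular backup to disk mentioned above is essential. A minimal cron sketch (the schedule, backup path, and the assumption that credentials live in /root/.my.cnf are all illustrative):

# /etc/cron.d/mysql-ramdisk-backup: dump everything to real disk every hour
0 * * * * root mysqldump --all-databases | gzip > /backup/mysql-$(date +\%H).sql.gz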
9. Using MySQL in a NoSQL way
The B-Tree is still one of the most efficient index structures, so MySQL is not obsolete at all.
By using HandlerSocket to skip MySQL's SQL parsing layer, MySQL really can be used as a NoSQL store.
10. Other
- Add LIMIT 1 to the end of single-row queries so the scan stops as soon as a match is found.
- Separate out non-"indexed" data, for example store large article bodies separately, so they do not slow down other queries.
- Avoid MySQL built-in functions in queries, because queries containing such function calls are not stored in the query cache (see the example after this list).
- PHP establishes connections very quickly, so you can do without a connection pool; using one may actually push you past the connection limit. Of course, even without a pool a PHP program can still use up all the connections, for example when @ignore_user_abort(TRUE) is used.
- Use an IP address instead of a hostname for the database host, to avoid DNS resolution problems.
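For instance (the articles table is hypothetical), a query containing a non-deterministic function such as NOW() is never stored in the query cache, while the same query with the cutoff computed in the application and passed as a literal is cacheable:

-- not cacheable: contains the non-deterministic function NOW()
SELECT id FROM articles WHERE published_at > NOW() - INTERVAL 1 DAY;

-- cacheable: the cutoff is passed in as a literal
SELECT id FROM articles WHERE published_at > '2013-06-01 00:00:00';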
11. End
You will find that after these optimizations, database performance improves by anywhere from several times to hundreds of times, so MySQL can still handle most application scenarios. The cost of optimizing an existing system is far lower than refactoring it or migrating to NoSQL.