As a MySQL DBA, you need to check the indexing status of your servers regularly, especially when you arrive in a new environment and take over database maintenance: you have to understand how indexes are used on the production servers. Unreasonable index settings can hurt server performance badly, especially for complicated SQL statements such as multi-table joins. This article is simply a summary of my own work; if you have a better approach to index optimization, please share your advice.
The index optimization discussed here is divided into the following stages:
1. Remove duplicate indexes
1. Why remove duplicate indexes?
A. Excessive indexes occupy disk space and cause unnecessary disk I/O.
B. Excessive indexes slow down the optimizer's choice of index, especially when many indexes exist (related indexes interfere with index selection).
C. Duplicate indexes slow down table updates.
(A concrete example of a duplicate index is given after the installation steps below.)
2. How to find duplicate indexes? Here I would like to introduce the useful Maatkit toolkit. After installing Maatkit, the mk-duplicate-key-checker command becomes available; it detects duplicate indexes in a database and automatically generates the statements for deleting them, which is very convenient.
The following describes how to install the Maatkit tool and how to use the mk-duplicate-key-checker command.
Maatkit installation:
wget http://maatkit.googlecode.com/files/maatkit-7540.tar.gz
tar zxvf maatkit-7540.tar.gz
cd maatkit-7540
perl Makefile.PL
make install
mk-duplicate-key-checker usage:
mk-duplicate-key-checker --databases=databasename --user=root --password=passwd
You only need to specify the database name here; to see the other parameters, run mk-duplicate-key-checker --help. Note: after executing the generated statements to delete duplicate indexes, re-run the tool to check again, because removing one index can expose a new duplicate, especially with PHPCMS V9 databases.
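As an illustration of what the tool looks for (the table and index names below are hypothetical, not taken from any real database), a single-column index that matches the leftmost prefix of a composite index is redundant, and removing it amounts to the kind of statement the checker generates:

-- Hypothetical example: idx_user_id duplicates the leftmost prefix of
-- idx_user_created, so it adds write and storage overhead without helping reads.
CREATE TABLE orders (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    created_at DATETIME NOT NULL,
    KEY idx_user_id (user_id),
    KEY idx_user_created (user_id, created_at)
) ENGINE=InnoDB;

-- Cleanup equivalent to what the checker would suggest; always compare with
-- the tool's own output before running anything in production.
ALTER TABLE orders DROP INDEX idx_user_id;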
2. Remove unnecessary indexes. What is an unnecessary index? My understanding is an index on a field with low selectivity. For example, if the abolish field only takes the values 0 and 1, there is no need to index it, because an index scan is hardly faster than a full-table scan and may even be slower. If the query conditions really call for it, a composite index on that field together with other fields works better (a sketch is given at the end of this section).
1. The reasons for removing unnecessary indexes are similar to those for removing duplicate indexes, so I will not repeat them here.
2. You can use the STATISTICS table in information_schema to find unnecessary indexes and then filter them manually; there may be a better way. For example, the following query lists the indexed columns with low selectivity for a specified database:
SELECT TABLE_NAME, INDEX_NAME, COLUMN_NAME, CARDINALITY FROM information_schema.STATISTICS WHERE TABLE_SCHEMA = 'databasename' AND CARDINALITY <= 5;
You can then investigate further based on your actual situation.
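A variant of this query that I find convenient (a sketch; it assumes the statistics in information_schema are reasonably up to date) relates each indexed column's cardinality to the table's row count, so the least selective indexes float to the top:

-- Rough per-column selectivity: cardinality divided by table rows.
-- Values close to 0 mean the index filters out very few rows.
SELECT s.TABLE_NAME, s.INDEX_NAME, s.COLUMN_NAME,
       s.CARDINALITY, t.TABLE_ROWS,
       s.CARDINALITY / t.TABLE_ROWS AS selectivity
FROM information_schema.STATISTICS s
JOIN information_schema.TABLES t
  ON t.TABLE_SCHEMA = s.TABLE_SCHEMA AND t.TABLE_NAME = s.TABLE_NAME
WHERE s.TABLE_SCHEMA = 'databasename'
  AND t.TABLE_ROWS > 0
ORDER BY selectivity ASC;

Once such a column is confirmed, the fix for the abolish example mentioned above looks roughly like this (table, column, and index names are hypothetical):

-- Hypothetical table: 'abolish' only takes the values 0 and 1.
CREATE TABLE article (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    category_id INT UNSIGNED NOT NULL,
    abolish TINYINT NOT NULL DEFAULT 0,
    title VARCHAR(200) NOT NULL,
    KEY idx_abolish (abolish)
) ENGINE=InnoDB;
-- An index on the 0/1 flag alone is nearly useless, so drop it.
ALTER TABLE article DROP INDEX idx_abolish;
-- A composite index matching a real query condition such as
-- WHERE category_id = ? AND abolish = 0 is far more selective; putting the
-- higher-cardinality column first also lets the index serve queries on
-- category_id alone.
ALTER TABLE article ADD INDEX idx_category_abolish (category_id, abolish);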
3. Add required indexes. What is a required index? My understanding is simple: an index whose absence hurts query performance is required.
1. Why add required indexes? Because they directly affect the database's query performance, which is reason enough.
2. How can I find the required indexes? I think this question takes some skill. My steps are as follows:
A. Locate the SQL statements that perform poorly or use no index. This requires some settings: set long_query_time to something like 0.2 seconds (adjust to your own environment) and enable the log_queries_not_using_indexes parameter by running set global log_queries_not_using_indexes = on;
B. Use the mysqldumpslow command to analyze the slow query log, for example by extracting queries based on execution time or number of executions. Two very common combinations:
mysqldumpslow -s t -t 10 slow.log # the 10 SQL statements that consumed the most time
mysqldumpslow -s c -t 10 slow.log # the top 10 SQL statements by execution count
C. Analyze the poorly performing or unindexed SQL statements with explain and profiling, and decide which index to add (or rewrite the corresponding SQL, which is not covered in this article). For the usage of explain and profiling, see the relevant documentation. A minimal sketch of this workflow is given below.
Author: Fei Hong Wu Hen
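To make the steps above concrete, here is a minimal sketch; the payments table, its columns, and the query are hypothetical, and SET GLOBAL changes are lost on a server restart, so mirror them in my.cnf if you want to keep them:

-- Step A: log queries slower than 0.2 seconds and queries that use no index.
SET GLOBAL long_query_time = 0.2;
SET GLOBAL log_queries_not_using_indexes = ON;

-- Hypothetical table used for the illustration.
CREATE TABLE payments (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,
    paid_at DATETIME NOT NULL,
    amount DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

-- Step C: inspect a suspicious query; type=ALL with a large rows estimate and
-- empty possible_keys usually indicates a full table scan.
EXPLAIN SELECT * FROM payments WHERE customer_id = 42 AND paid_at >= '2012-01-01';

-- If EXPLAIN shows no usable index, add one that matches the WHERE clause,
-- then re-run EXPLAIN to confirm the optimizer now picks it.
ALTER TABLE payments ADD INDEX idx_customer_paid (customer_id, paid_at);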