Analysis of index transformation in MySQL management

As a MySQL DBA, you need to check the index status of your servers regularly, especially when you arrive in a new environment and take over maintenance of existing databases: you need to know how indexes are being used on the production servers. If indexes are set up poorly, server performance suffers noticeably, especially when SQL statements are complicated (such as multi-table join queries). This article is simply a summary of my own work; if you have a better index transformation approach, please share your advice.
The index transformation discussed here is divided into the following stages:
1. Remove duplicate indexes
1. Why remove duplicate indexes?
A. Excessive indexes occupy disk space and cause unnecessary disk I/O.
B. Excessive indexes slow down index selection, especially when many overlapping indexes exist and the optimizer has more candidates to weigh.
C. Duplicate indexes slow down table updates.
2. How to find duplicate indexes. Here I introduce a useful tool, Maatkit. After installing Maatkit you get the mk-duplicate-key-checker command, which detects duplicate indexes in a database and automatically generates the statements for deleting them, which is very convenient.
The following describes how to install the Maatkit tool and how to use the mk-duplicate-key-checker command.
Installing the Maatkit tool:
wget http://maatkit.googlecode.com/files/maatkit-7540.tar.gz
tar zxvf maatkit-7540.tar.gz
cd maatkit-7540
perl Makefile.PL
make install
The mk-duplicate-key-checker command:
mk-duplicate-key-checker --databases=databasename --user=root --password=passwd
You only need to specify the database name here; to see the other parameters, run mk-duplicate-key-checker --help. Note: after executing the generated statements to delete duplicate indexes, re-check with this tool, because dropping one duplicate index can expose a new one, especially on PHPCMS V9 databases.
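For illustration, here is what a duplicate key and the tool's suggested fix might look like; the table and index names below are hypothetical, and the exact output format of mk-duplicate-key-checker may differ by version:
-- idx_uid duplicates the leading column of idx_uid_ctime, so it is redundant
CREATE TABLE message (
  id INT PRIMARY KEY,
  uid INT,
  ctime DATETIME,
  KEY idx_uid (uid),
  KEY idx_uid_ctime (uid, ctime)
);
-- mk-duplicate-key-checker would flag idx_uid and emit a drop statement such as:
ALTER TABLE databasename.message DROP INDEX idx_uid;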
2. Remove unnecessary indexes. What is an unnecessary index? My understanding is an index on a field with low selectivity. For example, if the abolish field only takes the values 0 and 1, there is no need to index it by itself, because scanning such an index is hardly faster than a full table scan and may even be slower. If the field really does appear in query conditions, it is better to build a composite index on it together with the other fields in those conditions, as sketched below.
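A minimal sketch of that idea, assuming a hypothetical article table whose queries filter on abolish together with a ctime column:
-- a standalone index on the 0/1 flag is rarely worth its cost;
-- a composite index that matches the actual query condition is more useful:
ALTER TABLE article ADD INDEX idx_abolish_ctime (abolish, ctime);
-- it can serve queries such as:
SELECT id, title FROM article WHERE abolish = 0 AND ctime > '2013-01-01';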
1. The reasons for removing unnecessary indexes are similar to those for removing duplicate indexes, so they are not detailed here.
2. You can use the STATISTICS table in information_schema to find candidate unnecessary indexes and then filter them manually (there should be a better way). For example, the following query lists low-selectivity column indexes for a specified database:
SELECT TABLE_NAME, INDEX_NAME, COLUMN_NAME, CARDINALITY FROM information_schema.STATISTICS WHERE TABLE_SCHEMA = 'databasename' AND CARDINALITY <= 5;
Screen the results according to your actual situation.
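Note that CARDINALITY here is only an estimate maintained by MySQL. If you want to double-check a suspect column before dropping its index, one supplementary check (a sketch, using the hypothetical article table and abolish column from above) is to compute its selectivity directly:
-- selectivity = distinct values / total rows; values close to 0 suggest a poor standalone index
SELECT COUNT(DISTINCT abolish) / COUNT(*) AS selectivity FROM article;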
3. Add the required indexes. What counts as a required index? My understanding is simple: an index whose absence hurts query performance is required.
1. Why add the required indexes? Because they directly affect the database's query performance, which is a simple and sufficient reason.
2. How do I find the required indexes? This is the more technical part; my steps are as follows.
A. Locate the SQL statements that perform poorly or use no index. This requires some settings based on your own environment, for example setting long_query_time to 0.2 seconds and enabling the log_queries_not_using_indexes parameter by running set global log_queries_not_using_indexes = on;
B. Use the mysqldumpslow command to summarize the slow query log, for example extracting entries by execution time or by number of executions. Two very common combinations:
mysqldumpslow -s t -t 10 slow.log # the 10 slow SQL statements that consumed the most time
mysqldumpslow -s c -t 10 slow.log # the 10 SQL statements executed most often
C. Analyze the poorly performing or unindexed SQL with explain and profiling, and decide which index to add (or rewrite the corresponding SQL, which this article does not cover); a short example session is sketched below. For more information about the usage of explain and profiling, see the relevant documentation.
Author: Feihong without trace (bitsCN.com)
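To illustrate step C, a minimal session might look like the following; the table, index, and query are hypothetical, and profiling output varies with the MySQL version:
-- check how a slow query from the log is executed (hypothetical member table)
EXPLAIN SELECT id, username FROM member WHERE email = 'foo@example.com';
-- if the key column is NULL and rows is close to the table size, add a candidate index
ALTER TABLE member ADD INDEX idx_email (email);
-- measure again with profiling enabled
SET profiling = 1;
SELECT id, username FROM member WHERE email = 'foo@example.com';
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;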
