Common MySQL optimizations
Optimize GROUP BY statements
By default, MySQL sorts the result of GROUP BY col1, col2, ... as if you had also specified ORDER BY col1, col2, .... If the statement explicitly includes an ORDER BY clause on the same columns, MySQL can optimize it with no slowdown, even though the sort is still performed. If the query includes GROUP BY but you want to avoid the cost of sorting the result, you can specify ORDER BY NULL to suppress the sort.
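A quick illustration, on the older MySQL versions this article targets (the products table and category column are made up):

-- implicit sort: results come back ordered by category
SELECT category, COUNT(*) FROM products GROUP BY category;
-- ORDER BY NULL suppresses the sort when the order does not matter
SELECT category, COUNT(*) FROM products GROUP BY category ORDER BY NULL;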
Optimize ORDER BY statements
In some cases, MySQL can use an index to satisfy the ORDER BY clause without an additional sort. This requires that the WHERE condition and the ORDER BY clause use the same index, that the ORDER BY column order matches the index column order, and that the ORDER BY columns are either all ascending or all descending.
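A minimal sketch, assuming a hypothetical orders table with a composite index:

-- composite index on (customer_id, order_date)
CREATE INDEX idx_cust_date ON orders (customer_id, order_date);
-- satisfied by the index alone: same index, same column order, all ascending
SELECT * FROM orders WHERE customer_id = 42 ORDER BY order_date;
-- needs an extra filesort: mixed ascending/descending does not match the index
SELECT * FROM orders WHERE customer_id = 42 ORDER BY customer_id ASC, order_date DESC;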
Optimize INSERT statements
If you insert many rows for the same client at the same time, use a multi-row INSERT with several value lists. This is faster than using separate INSERT statements (several times faster in some cases), for example: INSERT INTO test VALUES (1, 2), (1, 3), (1, 4), .... If you insert many rows from different clients, you can use INSERT DELAYED for higher speed. DELAYED means the statement returns to the client immediately while the data is queued in memory rather than written to disk right away; this is much faster than running each insert separately. LOW_PRIORITY does the opposite: the insert happens only after all other clients have finished reading from and writing to the table. Store index files and data files on different disks (using the options available at table creation). For batch inserts, you can increase the bulk_insert_buffer_size variable to improve speed, but this applies only to MyISAM tables. When loading a table from a text file, use LOAD DATA INFILE; this is usually about 20 times faster than using many INSERT statements. Where the application allows, use REPLACE instead of INSERT, or use the IGNORE keyword to skip duplicate records.
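A sketch of these options side by side, assuming a hypothetical test table and a local data file:

-- multi-row insert: one statement, several value lists
INSERT INTO test VALUES (1, 2), (1, 3), (1, 4);
-- bulk load from a text file, usually much faster than many INSERTs
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test;
-- skip rows that would violate a unique key instead of failing
INSERT IGNORE INTO test VALUES (1, 2);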
Insert data in large batches
1. For tables of the MyISAM type, you can import a large amount of data quickly using the following method:
ALTER TABLE tblname DISABLE KEYS;
... load the data ...
ALTER TABLE tblname ENABLE KEYS;
These two commands disable and re-enable the updating of non-unique indexes on a MyISAM table. When importing a large amount of data into a non-empty MyISAM table, setting these two commands can improve import efficiency. When importing into an empty MyISAM table, the indexes are by default created only after the data is imported, so there is no need to set them. 2. For InnoDB tables, this method does not improve import efficiency. For InnoDB tables, the following methods help: A. Because InnoDB tables are stored in primary key order, arranging the imported data in primary key order can effectively improve import efficiency. If an InnoDB table has no primary key, an internal column is created by default to serve as the primary key, so if you can create a primary key for the table, you can exploit this to improve import efficiency.
B. Run SET UNIQUE_CHECKS = 0 before the import to disable uniqueness checks, and run SET UNIQUE_CHECKS = 1 after the import to restore them; this improves import efficiency. C. If the application uses autocommit, we recommend executing SET AUTOCOMMIT = 0 before the import to disable automatic commits, and SET AUTOCOMMIT = 1 after the import to re-enable them; this also improves import efficiency.
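Putting A, B, and C together, a minimal InnoDB import sketch might look like this (tblname and the data file path are placeholders):

SET UNIQUE_CHECKS = 0;   -- B: skip uniqueness checks during the load
SET AUTOCOMMIT = 0;      -- C: batch everything into one transaction
-- A: the input file should already be sorted by primary key
LOAD DATA INFILE '/tmp/tblname.txt' INTO TABLE tblname;
COMMIT;
SET UNIQUE_CHECKS = 1;   -- restore uniqueness checks
SET AUTOCOMMIT = 1;      -- restore autocommit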
Query optimization
For read-heavy workloads, you can set low_priority_updates = 1 to lower the priority of writes, telling MySQL to handle reads first, and shape your queries to take advantage of the query cache.
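For example (this can also be set in my.cnf; shown here as a runtime setting):

-- give SELECTs priority over table-level writes
SET GLOBAL low_priority_updates = 1;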
The query cache is enabled on most MySQL servers. It is one of the most effective ways to improve performance, and it is handled by the MySQL database engine. When the same query is executed many times, the result is stored in a cache, and subsequent identical queries read directly from the cache without touching the table.
The main problem is that this is easy for programmers to overlook, because some of the query statements we write prevent MySQL from using the cache. See the following example:
// The query cache does NOT work: CURDATE() is non-deterministic
$r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()");
// The query cache works: the date is passed as a constant string
$today = date("Y-m-d");
$r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'");
Split large DELETE or INSERT statements
If you need to execute a large DELETE or INSERT on a live website, you need to be very careful to avoid the operation bringing the entire site to a halt, because both operations lock the table, and once the table is locked no other operation can proceed.
Apache runs many child processes or threads, which is why it works so efficiently, but our server does not want too many child processes, threads, and database connections piling up, since they consume server resources, especially memory.
If you lock your table for a period of time, say 30 seconds, then on a high-traffic site the processes/threads, database connections, and open files that accumulate during those 30 seconds may not only crash your web service but also bring down your entire server.
Therefore, if you have a large batch to process, you should split it up. Using a LIMIT clause is a good approach. Here is an example:
while (1) {
    // delete only 1000 rows at a time
    mysql_query("DELETE FROM logs WHERE log_date <= '2017-11-01' LIMIT 1000");
    if (mysql_affected_rows() == 0) {
        // nothing left to delete, exit!
        break;
    }
    // take a short rest each time
    usleep(50000);
}
WHERE clause optimization
1. Avoid expression operations on fields in the WHERE clause whenever possible.
SELECT id FROM uinfo_jifen WHERE jifen/60 > 10000;
After optimization:
SELECT id FROM uinfo_jifen WHERE jifen > 600000;
2. Do not perform function operations on fields in the WHERE clause, as this causes MySQL to abandon the use of indexes.
SELECT uid FROM imid WHERE DATEDIFF(create_time, '2017-11-22') = 0;
After optimization:
SELECT uid FROM imid WHERE create_time >= '2017-11-22' AND create_time < '2017-11-23';
Index optimization
MySQL uses indexes only for the following operators: <, <=, =, >, >=, BETWEEN, IN, and sometimes LIKE.
Try not to write != or <> in SQL; use BETWEEN or > and < instead, otherwise the index may not be used.
For ORDER BY, GROUP BY, and DISTINCT, it is best to create an index on the columns involved to facilitate index-based sorting.
Use MySQL index sorting whenever possible; if necessary, you can hint the optimizer with FORCE INDEX (index_name), as in the sketch after this list.
Avoid using large fields as the primary key in InnoDB.
Indexes should be created for fields frequently used as query conditions;
Highly selective fields are suitable for creating indexes;
Indexes should be created on the fields used to join tables.
Frequently updated fields are not suitable for index creation;
Fields that do not appear in the WHERE clause should not be indexed.
Fields with low selectivity are not suitable for independent index creation.
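A small sketch tying several of these tips together (the orders table, status column, and index name are made up):

-- index a field frequently used as a query condition
CREATE INDEX idx_status ON orders (status);
-- rewrite != into two open ranges so the index can be used
SELECT id FROM orders WHERE status < 5 OR status > 5;   -- instead of status != 5
-- force a particular index when the optimizer picks the wrong one
SELECT id FROM orders FORCE INDEX (idx_status) WHERE status = 3;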
Try not to use subqueries; for example, compare the dependent subquery below with its join rewrite:
mysql> explain select uid_, count(*) from smember_6 where uid_ in (select uid_ from alluid) group by uid_;
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+
| id | select_type        | table     | type  | possible_keys | key     | key_len | ref  | rows     | Extra                    |
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+
|  1 | PRIMARY            | smember_6 | index | NULL          | PRIMARY | 8       | NULL | 53431264 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | alluid    | ALL   | NULL          | NULL    | NULL    | NULL |     2448 | Using where              |
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+

After optimization:

mysql> explain select a.uid_, count(*) from smember_6 a, alluid b where a.uid_ = b.uid_ group by uid_;
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key     | key_len | ref        | rows | Extra                           |
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
|  1 | SIMPLE      | b     | ALL  | NULL          | NULL    | NULL    | NULL       | 2671 | Using temporary; Using filesort |
|  1 | SIMPLE      | a     | ref  | PRIMARY       | PRIMARY | 4       | ssc.b.uid_ |    1 | Using index                     |
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
Optimization of Join
If your application has many JOIN queries, you should make sure that the join columns in both tables are indexed. That way, MySQL can use those indexes to optimize the join for you.
In addition, the columns used for the join should be of the same type. For example, if you join a DECIMAL field with an INT field, MySQL cannot use their indexes. For string types, the same character set is also required (the character sets of the two tables may differ).
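A minimal sketch, with hypothetical users and posts tables:

-- both join columns are INT, and both are indexed
CREATE TABLE users (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);
CREATE TABLE posts (
    id INT NOT NULL PRIMARY KEY,
    user_id INT NOT NULL,        -- same type as users.id
    title VARCHAR(100) NOT NULL,
    INDEX idx_user_id (user_id)  -- indexed join column
);
SELECT u.name, p.title FROM users u JOIN posts p ON u.id = p.user_id;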
Table optimization
Use NOT NULL as much as possible
Unless you have a specific reason to use NULL values, you should always keep your fields NOT NULL.
Do not assume that NULL requires no space; it requires extra space, and it makes your comparisons and program logic more complex.
Of course, this does not mean you can never use NULL. Reality is complicated, and in some cases you still need NULL values.
The following is an excerpt from MySQL's own document:
"NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte ."
Tables with a fixed length are faster.
If all the fields in a table are of fixed length, the whole table is considered "static" or "fixed-length". For example, the table contains no fields of the following types: VARCHAR, TEXT, or BLOB. As soon as you include even one such field, the table is no longer a fixed-length static table, and the MySQL engine will handle it differently.
A fixed-length table improves performance because MySQL can search it faster: with fixed-length rows it is easy to calculate the offset of the next row, so reads are naturally quick. If the fields are not fixed-length, the engine must do extra work to locate each row.
In addition, fixed-length tables are easier to cache and to rebuild. The only side effect is that fixed-length fields waste some space, because a fixed-length field is allocated its full space whether you use it or not.
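In practice this comes down to choosing CHAR over VARCHAR; a sketch with a hypothetical MyISAM lookup table:

-- every field fixed-length: MySQL treats the table as static/fixed-length
CREATE TABLE country_codes (
    code CHAR(2) NOT NULL,   -- fixed length
    name CHAR(64) NOT NULL,  -- fixed length, may waste trailing space
    PRIMARY KEY (code)
) ENGINE = MyISAM;
-- a single VARCHAR field would switch the table to the dynamic row format:
-- name VARCHAR(64) NOT NULL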
Vertical segmentation
Vertical segmentation is a way of splitting a table in the database into several tables by columns, which reduces the complexity of the table and the number of fields, thereby achieving optimization. (I used to work on a project at a bank and saw a table with more than 100 fields, which was terrible.)
Example 1: in the Users table there is a home address field, which is optional; besides, you rarely need to read or rewrite this field when handling personal information. So why not put it in another table? That gives your main table better performance. If you think about it, in a great many cases only the user ID, user name, password, and user role are used frequently in the user table. Smaller tables always perform better.
Example 2: you have a field named "last_login" that is updated on every user login. Every update clears the query cache for that table. Therefore, you can put this field in another table so that it does not interfere with your constant reads of user ID, user name, and user role, since the query cache adds a lot of performance there.
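A sketch of the split described in these two examples (table and column names are illustrative):

-- hot table: small, read constantly, rarely invalidated
CREATE TABLE users (
    id INT NOT NULL PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    password CHAR(41) NOT NULL,
    role VARCHAR(20) NOT NULL
);
-- cold/volatile columns moved out of the hot table
CREATE TABLE user_extra (
    user_id INT NOT NULL PRIMARY KEY,  -- same value as users.id
    home_address VARCHAR(255),
    last_login DATETIME
);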
One more thing to note: do not join the tables produced by the split too often, otherwise performance will be worse than before the split, and the drop can be steep.