1. Do not use SELECT * in queries; write out the full field names instead.
2. More indexes are not always better; keep each table to roughly 6 indexes. Also be aware that an index may go unused for a broad range condition such as WHERE value < 100.
3. In most cases a join is far more efficient than a subquery, but there are exceptions. When you are unhappy with the efficiency of a join query, it is worth testing the subquery form; most of the time it will disappoint you, but occasionally there are surprises.
4. Make frequent use of EXPLAIN and profiling to analyze query statements.
5. Sometimes one large SQL statement can be split into a few small statements executed in sequence; split up, it often runs much faster.
6. Periodically run ALTER TABLE table_name ENGINE=INNODB to rebuild tables and reclaim fragmented space.
7. When joining, follow the principle of joining the small table to the large table (let the small table drive the join).
8. Learn to use EXPLAIN and profiling to determine what is making your SQL slow.
9. Check the slow query log, find the long-running SQL, and try to optimize it.
Optimize GROUP BY statements
By default, MySQL sorts all GROUP BY col1, col2, ... results as if the query also specified ORDER BY col1, col2, .... If you explicitly include an ORDER BY clause naming the same columns, MySQL optimizes it away with no speed penalty, although the sort still happens. If the query includes GROUP BY but you want to avoid the cost of sorting the result, you can specify ORDER BY NULL to suppress the sort. (Note: MySQL 8.0 and later no longer sort GROUP BY results implicitly, so this tip applies to older versions.)
Optimize ORDER BY statements
In some cases, MySQL can use an index to satisfy an ORDER BY clause without an extra sort, provided the WHERE condition and the ORDER BY use the same index, the ORDER BY columns follow the index column order, and all columns are sorted in the same direction (all ascending or all descending).
Optimizing INSERT Statements
If you insert many rows from the same client at the same time, use INSERT statements with multiple VALUES lists. This is faster than issuing separate single-row INSERT statements (several times faster in some cases).
The code is as follows:

INSERT INTO test VALUES (1,2), (1,3), (1,4) ...;
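The effect of multi-row VALUES lists is easy to see from any client library that batches inserts. Below is a minimal sketch in Python using the standard-library sqlite3 module as a stand-in for a MySQL connection (the table and column names are made up for illustration); executemany sends the whole batch in one call rather than issuing one statement per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a INTEGER, b INTEGER)")

# One statement, many rows -- analogous to INSERT INTO test VALUES (1,2),(1,3),(1,4)
rows = [(1, 2), (1, 3), (1, 4)]
conn.executemany("INSERT INTO test VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]
print(count)  # 3
```
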
If you insert many rows from different clients, you can get higher speed by using the INSERT DELAYED statement. DELAYED means the statement returns immediately while the rows are queued in memory rather than written to disk right away, which is much faster than ordinary statements. (Note that INSERT DELAYED is deprecated in MySQL 5.6 and no longer supported in later versions.) LOW_PRIORITY is just the opposite: the insert waits until no other clients are reading from or writing to the table.
Store index files and data files on separate disks (using the options available at table creation).
If you do bulk inserts, you can increase the bulk_insert_buffer_size variable to speed them up, but this only applies to MyISAM tables.
When loading a table from a text file, use LOAD DATA INFILE. This is usually 20 times faster than using many INSERT statements.
Use the REPLACE statement instead of INSERT where the application allows it.
Use the IGNORE keyword to skip duplicate records, depending on the application.
Inserting data in bulk
1. For MyISAM tables, you can quickly import a large amount of data in the following way.
The code is as follows:

ALTER TABLE tblname DISABLE KEYS;
-- load the data
ALTER TABLE tblname ENABLE KEYS;
These two commands turn updating of non-unique indexes on a MyISAM table off and on. When importing a large amount of data into a non-empty MyISAM table, wrapping the import between these two commands improves efficiency. When importing into an empty MyISAM table, the data is imported before the indexes are built by default, so no such setup is needed.
2. For InnoDB tables, this approach does not improve import efficiency. For InnoDB there are several other ways to speed up the import:
A. Because InnoDB tables are stored in primary key order, arranging the imported data in primary key order effectively improves import efficiency. If an InnoDB table has no primary key, the system creates an internal column as the primary key by default, so if you can define an explicit primary key for the table, you can exploit this ordering to speed up the import.
B. Run SET UNIQUE_CHECKS=0 before importing to turn off uniqueness checks, and SET UNIQUE_CHECKS=1 after the import finishes to restore them; this improves import efficiency.
C. If the application uses autocommit, run SET AUTOCOMMIT=0 before the import to turn off automatic commits, and SET AUTOCOMMIT=1 after the import finishes to turn them back on; this also increases import efficiency.
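The autocommit advice in point C generalizes to any client: commit once per batch, not once per row. Here is a sketch in Python with the standard-library sqlite3 module standing in for MySQL (table and column names are illustrative); with a real MySQL connection you would bracket the batch with SET AUTOCOMMIT=0 and COMMIT instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

# Wrap the whole import in one transaction instead of committing per row.
with conn:  # opens a transaction and commits at the end of the block
    conn.executemany(
        "INSERT INTO items (id, val) VALUES (?, ?)",
        ((i, f"v{i}") for i in range(1000)),
    )

imported = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(imported)  # 1000
```
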
Optimization of Queries
A read-heavy server can set low_priority_updates=1 to lower the priority of writes, telling MySQL to handle read requests first whenever possible.
Optimize your queries for the query cache
Most MySQL servers have the query cache turned on. It is one of the most effective ways to improve performance, and it is handled by the MySQL database engine itself: when the same query is executed many times, the result is served from the cache without touching the table. (Note that the query cache was deprecated in MySQL 5.7 and removed in 8.0.)
The main problem is that this is easy for programmers to overlook, because some ways of writing a query prevent MySQL from using the cache. Look at the following example:
The code is as follows:

// Query cache NOT used: CURDATE() is non-deterministic, so the result cannot be cached
$r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()");

// Query cache works: the date is computed in the application, so the SQL text is stable
$today = date("Y-m-d");
$r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'");
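The same pattern in Python: compute the date in the application so the SQL text stays stable from call to call, and pass the value as a parameter. The table and column names follow the PHP example above; sqlite3 is only a stand-in for the MySQL client, used here to show the stable-SQL pattern itself.

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (username TEXT, signup_date TEXT)")
conn.execute("INSERT INTO user VALUES ('alice', '2099-01-01')")

# Cache-unfriendly (MySQL): SQL text containing CURDATE() changes meaning daily
# and is never served from the query cache.

# Cache-friendly: the date is computed in the application; the SQL text is
# identical on every call thanks to the placeholder.
today = datetime.date.today().isoformat()
sql = "SELECT username FROM user WHERE signup_date >= ?"
names = [r[0] for r in conn.execute(sql, (today,))]
print(names)  # ['alice']
```
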
Split a large DELETE or INSERT statement
If you need to perform a large DELETE or INSERT on a live web site, you must be very careful to keep the operation from bringing the whole site to a halt. Both operations lock the table, and while the table is locked no other operation can get in.
Apache runs many child processes or threads, so it works fairly efficiently, but our server does not want too many child processes, threads, and database connections piling up; they eat server resources, especially memory.
If you lock a table for, say, 30 seconds, then on a heavily visited site the accesses that accumulate in those 30 seconds (processes/threads, database connections, open file handles) may not only crash your web service but may bring the whole server down on the spot.
So if you have a big job to run, split it up; using a LIMIT clause is a good way to do it. Here is an example:
The code is as follows:

while (1) {
    // Delete only 1000 rows at a time
    mysql_query("DELETE FROM logs WHERE log_date <= '2009-11-01' LIMIT 1000");
    if (mysql_affected_rows() == 0) {
        // Nothing left to delete, exit
        break;
    }
    // Take a short break between batches
    usleep(50000);
}
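A batched delete can be driven from any language. The sketch below mirrors the PHP loop in Python, again using the standard-library sqlite3 module as a stand-in; since stock SQLite lacks DELETE ... LIMIT, the batch is selected by rowid, which plays the same role as the LIMIT clause in the MySQL version. Table and column names follow the example above.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (log_date TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?)",
    [("2009-10-01",)] * 2500 + [("2010-01-01",)] * 10,
)

deleted_total = 0
while True:
    # Delete at most 1000 matching rows per pass (stand-in for LIMIT 1000).
    cur = conn.execute(
        "DELETE FROM logs WHERE rowid IN "
        "(SELECT rowid FROM logs WHERE log_date <= '2009-11-01' LIMIT 1000)"
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to delete
    deleted_total += cur.rowcount
    time.sleep(0.05)  # give other clients a chance between batches

remaining = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(deleted_total, remaining)  # 2500 10
```
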
The optimization of the WHERE statement
1. Avoid applying arithmetic expressions to a field in the WHERE clause; doing so prevents MySQL from using the index on that field.
The code is as follows:

-- Before optimization:
SELECT id FROM uinfo_jifen WHERE jifen/60 > 10000;
-- After optimization:
SELECT id FROM uinfo_jifen WHERE jifen > 600000;
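The effect on index use is visible in any engine's plan output. Here is a sketch with sqlite3, whose EXPLAIN QUERY PLAN is a rough analogue of MySQL's EXPLAIN (the table mirrors the example above): the arithmetic form forces a table scan, while the rewritten form can search the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uinfo_jifen (id INTEGER, jifen INTEGER)")
conn.execute("CREATE INDEX idx_jifen ON uinfo_jifen (jifen)")

def plan(sql):
    # Concatenate the 'detail' column of the EXPLAIN QUERY PLAN rows.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT id FROM uinfo_jifen WHERE jifen/60 > 10000")
after = plan("SELECT id FROM uinfo_jifen WHERE jifen > 600000")
print(before)  # a SCAN: the expression hides the column from the index
print(after)   # a SEARCH using idx_jifen
```
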
2. Avoid applying functions to a field in the WHERE clause; this causes MySQL to give up using the index.
The code is as follows:

-- Before optimization:
SELECT uid FROM imid WHERE DATEDIFF(create_time, '2011-11-22') = 0;
-- After optimization (the range covers exactly 2011-11-22):
SELECT uid FROM imid WHERE create_time >= '2011-11-22' AND create_time < '2011-11-23';
Optimization of indexes
MySQL uses indexes only with the following operators: <, <=, =, >, >=, BETWEEN, IN, and sometimes LIKE (when the pattern does not begin with a wildcard).
Try not to write != or <> in SQL; replace them with BETWEEN or > and <, or the index may not be used.
Columns used in ORDER BY, GROUP BY, and DISTINCT are best indexed so the index can serve them.
Try to let MySQL sort via an index.
As a last resort, use a forced index: FORCE INDEX (index_name).
Avoid using very large fields as InnoDB primary keys (every secondary index stores a copy of the primary key value).
Fields frequently used as query criteria should be indexed;
High-selectivity fields are well suited to indexes;
Fields used to join tables should generally be indexed.
Fields that are updated very frequently are poor candidates for indexes;
Fields that never appear in the WHERE clause should not be indexed;
Fields with very low selectivity are not suitable for standalone indexes.
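Several of the points above (index the ORDER BY/GROUP BY columns, let the database sort via an index) can be observed directly in plan output. A sketch with sqlite3 as a stand-in for MySQL: without an index, the plan needs a temporary B-tree to sort; with one, the sort step disappears.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

def plan(sql):
    # Concatenate the 'detail' column of the EXPLAIN QUERY PLAN rows.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# No index on a: SQLite must sort with a temporary B-tree.
unindexed = plan("SELECT a FROM t ORDER BY a")

conn.execute("CREATE INDEX idx_a ON t (a)")
# With an index on a, rows come out of the index already sorted.
indexed = plan("SELECT a FROM t ORDER BY a")

print(unindexed)  # mentions a temp B-tree for ORDER BY
print(indexed)    # uses idx_a, no extra sort
```
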
Try not to use subqueries
The code is as follows:

-- Before optimization:
mysql> EXPLAIN SELECT uid_, COUNT(*) FROM smember_6 WHERE uid_ IN (SELECT uid_ FROM alluid) GROUP BY uid_;
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+
| id | select_type        | table     | type  | possible_keys | key     | key_len | ref  | rows     | Extra                    |
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+
|  1 | PRIMARY            | smember_6 | index | NULL          | PRIMARY | 8       | NULL | 53431264 | Using where; Using index |
|  2 | DEPENDENT SUBQUERY | alluid    | ALL   | NULL          | NULL    | NULL    | NULL |     2448 | Using where              |
+----+--------------------+-----------+-------+---------------+---------+---------+------+----------+--------------------------+

-- After optimization:
mysql> EXPLAIN SELECT a.uid_, COUNT(*) FROM smember_6 a, alluid b WHERE a.uid_ = b.uid_ GROUP BY uid_;
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key     | key_len | ref        | rows | Extra                           |
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
|  1 | SIMPLE      | b     | ALL  | NULL          | NULL    | NULL    | NULL       | 2671 | Using temporary; Using filesort |
|  1 | SIMPLE      | a     | ref  | PRIMARY       | PRIMARY | 4       | ssc.b.uid_ |    1 | Using index                     |
+----+-------------+-------+------+---------------+---------+---------+------------+------+---------------------------------+
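Whatever the plan looks like, the join rewrite must return the same result as the IN-subquery. Below is a quick equivalence check in Python with sqlite3 on tiny made-up data (table names follow the EXPLAIN example above). Note the two forms are equivalent only when alluid.uid_ contains no duplicates; otherwise the join would double-count.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE smember_6 (uid_ INTEGER)")
conn.execute("CREATE TABLE alluid (uid_ INTEGER)")
conn.executemany("INSERT INTO smember_6 VALUES (?)", [(1,), (1,), (2,), (3,)])
conn.executemany("INSERT INTO alluid VALUES (?)", [(1,), (2,)])

subquery = conn.execute(
    "SELECT uid_, COUNT(*) FROM smember_6 "
    "WHERE uid_ IN (SELECT uid_ FROM alluid) GROUP BY uid_"
).fetchall()

join = conn.execute(
    "SELECT a.uid_, COUNT(*) FROM smember_6 a, alluid b "
    "WHERE a.uid_ = b.uid_ GROUP BY a.uid_"
).fetchall()

print(sorted(subquery))  # [(1, 2), (2, 1)]
print(sorted(join))      # [(1, 2), (2, 1)]
```
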
Optimization of Join
If your application has many join queries, make sure the join fields in both tables are indexed; MySQL's internal mechanisms for optimizing joined SQL statements can then kick in.
Also, the fields used for joining should be of the same type. For example, if you join a DECIMAL field to an INT field, MySQL cannot use the indexes on them. For string types, the character sets must also match (the character sets of the two tables may differ).
Optimization of tables
Use NOT NULL whenever possible
Unless you have a very specific reason to allow NULL values, always keep your fields NOT NULL.
Do not assume that NULL costs nothing; it requires extra space, and your program becomes more complex whenever it makes comparisons.
Of course, this is not to say you can never use NULL; reality is complicated, and there will still be cases where you need NULL values.
The following is excerpted from MySQL's own documentation:
"NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte."
EXPLAIN analysis
explain select ...
Variants:
1. explain extended select ...
"Decompiles" the execution plan back into a SELECT statement; running SHOW WARNINGS afterwards shows the statement as rewritten by the MySQL optimizer.
2. explain partitions select ...
EXPLAIN for partitioned tables.
Meaning of the result columns:
type: ALL, index, range, ref, eq_ref, const, system, NULL; from left to right, worst to best;
ALL: full table scan; MySQL traverses the entire table to find matching rows;
index: index scan; differs from ALL in that only the index tree is traversed;
range: index range scan; the scan starts at some point in the index and returns rows matching a range of values; common with BETWEEN, <, > queries;
ref: non-unique index scan; returns all rows matching a single value; common with lookups on a non-unique index or a non-unique prefix of a unique index;
eq_ref: unique index scan; for each index key, at most one row in the table matches; common with primary key or unique index lookups;
const, system: used when MySQL has optimized part of the query into a constant; for example, if you place a primary key in the WHERE list, MySQL can convert that part of the query to a constant.
possible_keys:
Indicates which indexes MySQL could use to find the rows in this table; if a field involved in the query has an index, it is listed here, but it will not necessarily be used by the query.
key: the index MySQL actually used in the query; NULL if no index was used.
key_len: the number of bytes used in the index; you can use it to work out how much of a (composite) index the query uses. Note that key_len shows the maximum possible length of the index fields, not the actual length used; that is, key_len is computed from the table definition, not measured during retrieval.
ref: the matching criteria for the join against this table, i.e. which columns or constants are used to look up values in the index;
rows: the estimated number of rows that must be examined to find the required records, based on table statistics and index selection;
Extra: additional information that does not fit in the other columns but is important:
a. Using index: a covering index is used for the SELECT operation;
b. Using where: the MySQL server filters rows after the storage engine returns them;
c. Using temporary: MySQL needs a temporary table to hold the result set; common in sorting and grouping queries;
d. Using filesort: MySQL cannot produce the sort order from an index, so it performs what it calls a "filesort".
EXPLAIN limitations:
EXPLAIN tells you nothing about triggers or stored procedures, nor about how user-defined functions affect the query.
EXPLAIN does not take caches into account.