This article introduces some of the simplest and most practical optimization methods for MySQL databases, covering MySQL InnoDB performance and general MySQL performance tuning. The specific content follows; I hope it will help you in this area.
1. Regularly analyze and check tables
The ANALYZE TABLE syntax is as follows:
Reference
- ANALYZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name [, tbl_name] ...
This statement analyzes and stores the table's key distribution. The resulting statistics let the optimizer generate correct execution plans for SQL statements. If the actual execution plan is not the one you expect, running ANALYZE TABLE may solve the problem. During the analysis, the table is locked with a read lock. This applies to MyISAM, BDB, and InnoDB tables.
For example, to analyze a data table
Reference
- analyze table table_name
The CHECK TABLE syntax is as follows:
Reference
- CHECK TABLE tbl_name [, tbl_name] ... [option] ... option = {QUICK | FAST | MEDIUM | EXTENDED | CHANGED}
CHECK TABLE checks one or more tables for errors. It is useful for MyISAM and InnoDB tables. For MyISAM tables, the key statistics are also updated.
CHECK TABLE can also check a view for problems, such as a table referenced in the view definition that no longer exists.
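For example, a quick check of a single table (the table name is a placeholder):

```sql
-- QUICK skips scanning the rows for incorrect links; MEDIUM additionally
-- verifies deleted links and computes a key checksum for the rows.
CHECK TABLE table_name QUICK;
```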
2. Regularly optimize tables
The OPTIMIZE TABLE syntax is as follows:
Reference
- OPTIMIZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name [, tbl_name] ...
If you have deleted a large part of a table, or if you have made many changes to a table with variable-length rows (tables that contain VARCHAR, BLOB, or TEXT columns), you should use the OPTIMIZE TABLE command. It merges the space fragments in the table and reclaims the space wasted by deletes and updates. Note that OPTIMIZE TABLE only applies to MyISAM, BDB, and InnoDB tables.
Example: optimize table table_name
Note: ANALYZE, CHECK, and OPTIMIZE all lock the table while they run, so be sure to perform these operations when the MySQL database is not busy.
Common SQL Optimization
The SQL statements we use most often during development are INSERT, GROUP BY, and the like. How can we optimize these SQL statements?
1. Insert data in large batches
When importing data with the LOAD DATA INFILE command, appropriate settings can improve the import speed.
For tables of the MyISAM storage engine, you can import a large amount of data in the following way:
Reference
- ALTER TABLE tbl_name DISABLE KEYS;
- loading the data
- ALTER TABLE tbl_name ENABLE KEYS;
DISABLE KEYS and ENABLE KEYS turn updating of non-unique indexes in a MyISAM table off and on. When importing a large amount of data into a non-empty MyISAM table, these two commands can improve import efficiency.
When importing a large amount of data into an empty MyISAM table, the indexes are by default created only after the data has been imported, so this setting is not needed.
Reference
- load data infile '/home/mysql/text_txt' into table text
For InnoDB tables, this method does not improve import efficiency, but there are several other optimization methods for InnoDB tables:
1. Because InnoDB tables are stored in primary key order, sorting the imported data by primary key beforehand can effectively improve import efficiency.
2. Run SET UNIQUE_CHECKS = 0 before the import to disable uniqueness checks, and SET UNIQUE_CHECKS = 1 afterward to restore them; this improves import efficiency.
3. If the application uses autocommit mode, run SET AUTOCOMMIT = 0 before the import to disable autocommit, and SET AUTOCOMMIT = 1 afterward to re-enable it; this also improves import efficiency.
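The three tips above can be combined into a single import session. A sketch follows; the table and file names are placeholders:

```sql
-- Sketch of an InnoDB bulk-import session.
SET UNIQUE_CHECKS = 0;   -- skip uniqueness checks during the load
SET AUTOCOMMIT = 0;      -- batch all inserts into one transaction

-- the input file should already be sorted by primary key
LOAD DATA INFILE '/path/to/sorted_data.txt' INTO TABLE my_table;

COMMIT;
SET UNIQUE_CHECKS = 1;   -- restore uniqueness checks
SET AUTOCOMMIT = 1;      -- restore autocommit
```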
2. Optimize INSERT statements
When performing data INSERT, you can consider the following methods for optimization:
1. If you are inserting many rows from the same client, use an INSERT statement with multiple VALUES lists. This greatly reduces the overhead of connecting to and disconnecting from the MySQL server, and is faster than issuing separate single-row INSERT statements.
For example:
- insert into test values(1,2)
- insert into test values(3,4)
- insert into test values(5,6)
Change the preceding three statements into one: insert into test values(1,2),(3,4),(5,6)
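As a quick illustration of the multi-row form, the following sketch uses Python's built-in sqlite3 module (SQLite accepts the same multi-value INSERT syntax, so no MySQL server is needed; the table name `test` comes from the example above):

```python
import sqlite3

# In-memory database, just to demonstrate the multi-row INSERT form.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a INTEGER, b INTEGER)")

# One statement with three VALUES lists instead of three separate INSERTs.
conn.execute("INSERT INTO test VALUES (1,2), (3,4), (5,6)")
conn.commit()

rows = conn.execute("SELECT a, b FROM test ORDER BY a").fetchall()
print(rows)  # [(1, 2), (3, 4), (5, 6)]
```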
2. If you are inserting many rows from different clients, you can use the INSERT DELAYED statement to get higher speed.
DELAYED means the INSERT statement returns to the client immediately: the data is placed in a memory queue and not actually written to disk yet, which is much faster than waiting for each insert to complete. LOW_PRIORITY is the opposite: the insert is performed only after all other clients have finished reading from the table. (Note that INSERT DELAYED works only with MyISAM and MEMORY tables and is deprecated in modern MySQL versions.)
3. Store index files and data files on different disks.
4. For batch inserts, you can increase the speed by raising the value of the bulk_insert_buffer_size system variable. However, this applies only to MyISAM tables.
5. When loading a table from a text file, use LOAD DATA INFILE. This is usually about 20 times faster than using many INSERT statements.
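Tips 4 and 5 can be combined for a MyISAM bulk load. A sketch follows; the buffer size, file path, and table name are illustrative:

```sql
-- Enlarge the MyISAM bulk-insert buffer for this session (value is illustrative).
SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 64;  -- 64 MB

-- Load the table directly from a text file instead of many INSERT statements.
LOAD DATA INFILE '/path/to/data.txt' INTO TABLE my_table;
```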