Specific methods for optimizing MySQL databases

Source: Internet
Author: User
The following article describes simple and practical methods for optimizing a MySQL database, including how to regularly analyze and check tables and how to regularly optimize tables. The specific solutions are described below; I hope they will help you in your future study.

1. Regularly analyze and check tables

The ANALYZE TABLE syntax is as follows:

  ANALYZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name [, tbl_name] ...

The preceding statement analyzes and stores the table's key distribution. The analysis results give the system accurate statistics, so that the optimizer can generate correct execution plans for SQL statements. If the actual execution plan is not the one you expect, running ANALYZE TABLE may solve the problem. During the analysis, the table is locked with a read lock. This works for MyISAM, BDB, and InnoDB tables.

For example, to analyze a table:

  analyze table table_name
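
For reference, ANALYZE TABLE does not modify any data; it returns a small result set describing what was done. Against a hypothetical table table_name in a database named test, a run typically looks roughly like this:

  mysql> analyze table table_name;
  +-----------------+---------+----------+----------+
  | Table           | Op      | Msg_type | Msg_text |
  +-----------------+---------+----------+----------+
  | test.table_name | analyze | status   | OK       |
  +-----------------+---------+----------+----------+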

The CHECK TABLE syntax is as follows:

  CHECK TABLE tbl_name [, tbl_name] ... [option] ...
  option = {QUICK | FAST | MEDIUM | EXTENDED | CHANGED}

CHECK TABLE checks one or more tables for errors. It is useful for MyISAM and InnoDB tables; for MyISAM tables, the key statistics are updated as well.

CHECK TABLE can also check a view for errors, for example when a table referenced in the view definition no longer exists.
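
For illustration, a hedged example of checking a table and a view (the names payment_test and v_payment_test are hypothetical; the QUICK option skips scanning the rows for incorrect links):

  CHECK TABLE payment_test QUICK;
  CHECK TABLE v_payment_test;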

2. Regularly optimize tables

The OPTIMIZE TABLE syntax is as follows:

  OPTIMIZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name [, tbl_name] ...

If you have deleted a large part of a table, or if you have made many changes to a table with variable-length rows (tables that contain VARCHAR, BLOB, or TEXT columns), you should use the OPTIMIZE TABLE command. This command merges the space fragments in the table and eliminates the space wasted by deletes and updates. Note that OPTIMIZE TABLE only works on MyISAM, BDB, and InnoDB tables.

Example: optimize table table_name

Note: the table is locked while ANALYZE, CHECK, and OPTIMIZE are running, so be sure to perform these operations when the MySQL database is not busy.
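
The LOCAL / NO_WRITE_TO_BINLOG keyword shown in the syntax above keeps the statement out of the binary log, so it is not replicated to other servers. A minimal maintenance sketch (the table names orders and order_items are hypothetical) could look like this:

  -- run on each server separately during a quiet period
  ANALYZE NO_WRITE_TO_BINLOG TABLE orders, order_items;
  CHECK TABLE orders, order_items MEDIUM;
  OPTIMIZE NO_WRITE_TO_BINLOG TABLE orders, order_items;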

Common SQL Optimization

The SQL statements we use most often during development include INSERT and GROUP BY statements. How can we optimize them?

1. Insert data in large batches

When you import data with the LOAD DATA INFILE command, appropriate settings can improve the import speed.
For tables that use the MyISAM storage engine, you can import a large amount of data as follows:

  ALTER TABLE tbl_name DISABLE KEYS;
  -- load the data
  ALTER TABLE tbl_name ENABLE KEYS;

DISABLE KEYS and ENABLE KEYS turn the updating of non-unique indexes in a MyISAM table off and on. When importing a large amount of data into a non-empty MyISAM table, these two commands can improve the import efficiency.
When importing a large amount of data into an empty MyISAM table, the indexes are created only after the data has been loaded by default, so you do not need to set this.

  load data infile '/home/mysql/text_txt' into table text
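
Putting the two fragments together, a minimal sketch of a bulk import into a non-empty MyISAM table (reusing the file path and the table name text from the example above) would be:

  ALTER TABLE text DISABLE KEYS;
  LOAD DATA INFILE '/home/mysql/text_txt' INTO TABLE text;
  ALTER TABLE text ENABLE KEYS;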

For InnoDB tables, this method does not improve import efficiency, but there are several other ways to optimize imports into InnoDB tables; a combined sketch follows this list.

1. Because InnoDB tables are stored in primary key order, sorting the imported data by primary key beforehand can effectively improve the import efficiency.

2. Run SET UNIQUE_CHECKS = 0 before the import to disable uniqueness checks, and run SET UNIQUE_CHECKS = 1 after the import to restore them; this improves the import efficiency.

3. If the application uses autocommit, we recommend executing SET AUTOCOMMIT = 0 before the import to disable automatic commits and SET AUTOCOMMIT = 1 after the import to re-enable them; this also improves the import efficiency.
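
As a minimal sketch combining these settings, assuming the data file from the earlier example is already sorted by primary key and is loaded into a hypothetical InnoDB table named innodb_text:

  SET UNIQUE_CHECKS = 0;
  SET AUTOCOMMIT = 0;
  LOAD DATA INFILE '/home/mysql/text_txt' INTO TABLE innodb_text;
  COMMIT;
  SET UNIQUE_CHECKS = 1;
  SET AUTOCOMMIT = 1;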

Optimize INSERT statements

When inserting data with INSERT, you can consider the following optimizations:

1. If you insert many rows from the same client at the same time, use an INSERT statement with multiple VALUES lists. This greatly reduces the overhead of opening and closing connections between the client and the MySQL server, and is faster than issuing separate single-row INSERT statements.

For example:

  insert into test values(1,2);
  insert into test values(3,4);
  insert into test values(5,6);


Change the preceding three statements into a single multi-row INSERT: insert into test values(1,2),(3,4),(5,6);

2. If you insert many rows from different clients, you can get higher speed by using the INSERT DELAYED statement.

DELAYED means the statement returns to the client immediately, while the rows are queued in memory and not written to disk right away; this is much faster than performing each insert separately. LOW_PRIORITY is the opposite: the insert is performed only after all other clients have finished reading from the table.

3. Store index files and data files on different disks.

4. For bulk inserts, you can increase the speed by raising the value of the bulk_insert_buffer_size variable; however, this only applies to MyISAM tables (a short sketch of points 2 and 4 follows this list).

5. When loading a table from a text file, use load data infile. This is usually about 20 times faster than using many insert statements.
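
As a small sketch of points 2 and 4 above, assuming the test table from the earlier example uses the MyISAM engine and the server version still supports INSERT DELAYED (it was removed in MySQL 5.7):

  -- point 4: enlarge the MyISAM bulk insert buffer for this session (64 MB is an arbitrary example value)
  SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 64;

  -- point 2: return to the client immediately and let the server write the row later
  INSERT DELAYED INTO test VALUES (7, 8);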
