SQL Optimization in Specific Scenarios
1. Modify the data of a large table in batches.
When you need to delete or update a large number of rows in a table with tens of millions of records, work in batches: delete or update only about 5,000 rows at a time, and pause for a few seconds after each batch so that replication can catch up.
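The batching idea can be sketched as a stored procedure (MySQL syntax; the `orders` table and `created_at` column are hypothetical examples, not from this article):

```sql
-- Sketch: purge old rows in 5,000-row batches, pausing between batches.
DELIMITER //
CREATE PROCEDURE purge_old_orders()
BEGIN
    DECLARE affected INT DEFAULT 1;
    WHILE affected > 0 DO
        DELETE FROM orders
         WHERE created_at < '2020-01-01'
         LIMIT 5000;                  -- only one small batch per iteration
        SET affected = ROW_COUNT();   -- rows deleted by this batch
        DO SLEEP(5);                  -- give the replicas time to catch up
    END WHILE;
END //
DELIMITER ;
```

Each short DELETE holds locks briefly and produces a small replication event, instead of one huge transaction that blocks the table and stalls the replicas.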
2. Modify the table structure of a large table.
Changing a column's type or widening a field on a large table locks the table for the duration of the ALTER, and the long-running statement also causes master-slave replication delay.
Solution:
1. Create a new table with the desired structure.
2. Create triggers on the old table to keep the new table in sync with ongoing changes.
3. Copy the existing data from the old table to the new table.
4. Drop the old table.
5. Rename the new table to the old table's name.
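Step 2 amounts to triggers like the following sketch (MySQL syntax; `t` stands for the old table, `t_new` for the new one, and the column names are hypothetical — pt-online-schema-change generates these triggers, plus UPDATE and DELETE variants, automatically):

```sql
-- Keep t_new in sync while rows are still being inserted into t.
CREATE TRIGGER t_insert_sync
AFTER INSERT ON t
FOR EACH ROW
    REPLACE INTO t_new (id, c) VALUES (NEW.id, NEW.c);
```

REPLACE INTO (rather than plain INSERT) makes the trigger safe even if the background copy has already brought that row over.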
You can run the following command to complete the above work:
pt-online-schema-change --alter="MODIFY c VARCHAR(150) NOT NULL DEFAULT ''" --user=root --password=root D=sakila,t=table_name --charset=utf8 --execute
3. Optimize NOT IN and <> queries
Example:
SELECT customer_id, first_name, email
FROM customer
WHERE customer_id NOT IN (SELECT customer_id FROM payment);
With this form the payment subquery may be executed repeatedly. If the payment table has many records, the query is very inefficient.
The SQL statement after rewriting:
SELECT a.customer_id, a.first_name, a.email
FROM customer a
LEFT JOIN payment b ON a.customer_id = b.customer_id
WHERE b.customer_id IS NULL;
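An equivalent rewrite uses NOT EXISTS, which the optimizer can also execute as a single anti-join pass instead of re-running the subquery per row:

```sql
-- Anti-join via NOT EXISTS: customers with no payment rows.
SELECT c.customer_id, c.first_name, c.email
FROM customer c
WHERE NOT EXISTS (
    SELECT 1 FROM payment p
    WHERE p.customer_id = c.customer_id
);
```

Unlike NOT IN, NOT EXISTS also behaves predictably when payment.customer_id contains NULLs (NOT IN returns no rows at all in that case), so it is usually the safer choice.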
4. Optimize queries with a summary table
Suppose we frequently count the comments for a product:
SELECT COUNT(1) FROM product_comment WHERE product_id = 999;
Create a summary table that stores a pre-aggregated count per product:
CREATE TABLE product_comment_cnt (product_id INT, cnt INT);
SELECT SUM(cnt) FROM (
    SELECT cnt FROM product_comment_cnt WHERE product_id = 999
    UNION ALL
    SELECT COUNT(1) FROM product_comment
     WHERE product_id = 999 AND timestr > DATE(NOW())
) AS t;
The summary table is refreshed once a day with data up to yesterday; the query above adds the current day's live count on top of the pre-aggregated value.
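The daily refresh could be sketched as follows (assuming product_comment_cnt has a primary or unique key on product_id, which REPLACE INTO relies on):

```sql
-- Daily job (run after midnight): rebuild each product's summary row
-- from all comments up to and including yesterday.
REPLACE INTO product_comment_cnt (product_id, cnt)
SELECT product_id, COUNT(1)
FROM product_comment
WHERE timestr < DATE(NOW())
GROUP BY product_id;
```

Restricting the job to rows before today keeps the summary table and the "current day" branch of the UNION ALL query from double-counting the same comments.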