Database optimization cases and lessons
Brief: Every technology is inseparable from its database. The database is like the human brain: once a problem (fault) occurs, everything grinds to a halt, and the consequences are easy to imagine. I (fudomine) have devoted myself to database optimization for three months, and here I share my experience, along with my notes on learning and exploring PHP. If you have any questions or suggestions, let me know!
I. Back up data at a reasonable time
Backing up data is part of routine maintenance for every project. It makes it possible to restore data when the system is damaged or in other specific circumstances. The backup schedule is usually dictated by project requirements, e.g. hourly or at specified times.
Case: During a major promotion, a project's database fell into a queued (waiting) state because massive user traffic coincided with a scheduled backup; the resulting bandwidth peak and heavy load of operations caused the database to stop serving requests.
Method:
1. If the topology is a single structure (a single database), stop the scheduled backup task;
2. If the topology is not a single structure (master-slave or distributed), you can either stop the scheduled task, or move it off the master and run the backup on a slave (replica) database;
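As a rough sketch of point 2, a scheduled backup can be pointed at a replica and moved to off-peak hours. The hostnames, paths, and credentials below are placeholders, not from the original article; this is a crontab config fragment, not a complete backup strategy:

```cron
# Run the nightly backup at 03:30, off peak, against the replica
# instead of the master. --single-transaction takes a consistent
# InnoDB snapshot without locking tables for the whole dump.
30 3 * * * mysqldump --single-transaction -h replica-host -u backup_user -p'***' mydb > /backup/mydb_$(date +\%F).sql
```

If the backup must run on the master, the same idea applies: schedule it in the lowest-traffic window so it cannot collide with peak activity.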
II. Avoid operations on large tables
What is a large table? One with a very large number of records (a single table exceeding 10 million rows), or one whose data file is huge (exceeding 10 GB);
(1) Slow Query
Within a given time window it is hard to filter out the required rows, because low index selectivity leads to a large amount of disk I/O, which reduces disk efficiency and in turn produces large numbers of slow queries.
(2) DDL operations on large tables
- Creating an index takes a long time; the process may cause master-slave latency, table locks, or even deadlocks.
- Modifying the table structure locks the table for a long time; the process may cause master-slave latency, table locks, and blocking.
Method:
1. Database/table sharding
2. Data archiving: reduces the impact on frontend and backend services
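A minimal sketch of batched data archiving, using Python's built-in sqlite3 as a stand-in for the real database server. The `orders`/`orders_archive` tables, column names, and cutoff are illustrative assumptions; the point is that moving rows in small batches, committing between batches, keeps each lock short instead of running one huge statement over the whole table:

```python
import sqlite3

def archive_in_batches(conn, cutoff, batch_size=1000):
    """Move rows with created < cutoff from orders to orders_archive
    in small batches, committing after each batch so that locks are
    held only briefly and replication lag stays small."""
    moved = 0
    while True:
        cur = conn.execute(
            "SELECT id FROM orders WHERE created < ? LIMIT ?",
            (cutoff, batch_size))
        ids = [row[0] for row in cur.fetchall()]
        if not ids:
            break  # nothing left to archive
        marks = ",".join("?" * len(ids))
        conn.execute(
            f"INSERT INTO orders_archive SELECT * FROM orders WHERE id IN ({marks})",
            ids)
        conn.execute(f"DELETE FROM orders WHERE id IN ({marks})", ids)
        conn.commit()  # release locks between batches
        moved += len(ids)
    return moved

# Demo with an in-memory database standing in for the real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created INTEGER)")
conn.execute("CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i) for i in range(5000)])
conn.commit()

n = archive_in_batches(conn, cutoff=3000, batch_size=500)
print(n)  # 3000 rows moved
```

On a production MySQL server the same loop shape applies, but tools such as pt-archiver exist for exactly this job.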
III. Large transactions
A large transaction is one that runs for a long time and operates on a large amount of data.
(1) Risks
* Locking too much data, causing extensive blocking and lock waits
* Rollbacks take a long time
* Long execution times, which may cause master-slave latency
(2) Method
* Avoid processing too much data at a time (whether inserting, deleting, updating, or querying)
* Move unnecessary SELECT statements out of the transaction
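A minimal sketch of the second point, again with sqlite3 as a stand-in (the `accounts` table and amounts are made up for illustration): the read-only lookup runs before the transaction, so the transaction wraps only the two writes and holds its locks as briefly as possible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

# The read-only lookup runs OUTSIDE the transaction: it does not need
# to hold write locks while we decide what to do.
(balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE id = 1").fetchone()

if balance >= 30:
    # Keep the transaction short: only the two writes run inside it.
    # Using the connection as a context manager wraps the block in a
    # single transaction that commits on success, rolls back on error.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")

print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(70,), (80,)]
```

Note that under real concurrency the balance could change between the read and the write; a production version would re-check the condition inside the transaction (e.g. a conditional `UPDATE ... WHERE balance >= 30`), but the keep-the-transaction-short principle is the same.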
Author: immobile peak
Source: http://www.cnblogs.com/mylly/
Copyright belongs to the author. Reposts are welcome; please include a link to the original article :)