Background Process Summary

Source: Internet
Author: User

There are 15 default background threads:

1. Master thread (1)
2. IO threads:
   (1) Read/write threads (8; by default 4 read and 4 write)
   (2) Insert buffer thread (1)
   (3) Log IO thread (1)
3. Lock monitor thread (1)
4. Error monitor thread (1)
5. Purge thread (1)
6. Page cleaner (flushing) thread (1)

Since MySQL 5.6, the master thread's workload has been greatly reduced: purge, page cleaning, and other work have been split off into separate threads.

Master Thread

1. The master thread (main thread) has the highest priority.
2. It contains several loops: the main loop (loop), the background loop, the flush loop, and the suspend loop.
3. The thread switches between these loops according to the running state of the database.
4. The main loop is where most operations take place.
5. The loops are driven by thread sleeps, so the so-called "once per second" and "once per 10 seconds" operations are imprecise.
6. Under heavy load, these operations may be delayed.

What the master thread does every second:
(1) Flush dirty pages to disk.
(2) Perform insert buffer merges.
(3) Flush the redo log buffer to disk.
(4) Perform a checkpoint.
(5) Check the dict table cache and decide whether to delete table cache objects.

What it does every 10 seconds:
(1) Flush dirty pages to disk.
(2) Perform insert buffer merges.
(3) Flush the redo log buffer to disk.
(4) Purge undo.
(5) Perform a checkpoint.

What it does when the instance shuts down:
(1) Flush the redo log to disk.
(2) Perform insert buffer merges.
(3) Flush the redo log buffer to disk.
(4) Perform a checkpoint.

Master Thread optimization recommendations

1. Avoid dirty page buildup: adjust innodb_max_dirty_pages_pct appropriately (<= 50).
2. Avoid undo buildup: adjust innodb_max_purge_lag / innodb_max_purge_lag_delay / innodb_purge_batch_size.
3. Make checkpoints timely: adjust innodb_flush_log_at_trx_commit / innodb_adaptive_flushing / innodb_adaptive_flushing_lwm / innodb_flush_neighbors / innodb_flushing_avg_loops.
4. Keep the transaction flow steady and smooth; avoid both very large transactions and very high-frequency small transactions.

Checkpoint

1. Periodically confirms that the redo log has reached disk, avoiding data loss and improving crash recovery efficiency.
2. When the buffer pool holds too much dirty data, flushes dirty pages to disk to free memory.
3. When the redo log is about to run out of space, flushes dirty pages to disk.
4. A redo log switchover requires a checkpoint.
5. Sharp checkpoint:
   (1) Flushes all dirty pages back to disk.
   (2) The system hangs during the flush.
   (3) Fairly aggressive; only needed when innodb_fast_shutdown=0.
6. Fuzzy checkpoint:
   (1) Flushes only a portion of the dirty pages back to disk at a time; flushing everything is needed only for a clean restart.
   (2) Less impact on the system, but flushing may be slow, so there can be lag.
   (3) Controlled by innodb_max_dirty_pages_pct
   (4) and innodb_max_dirty_pages_pct_lwm = 0

Purge

1. Simply put, purge does GC (garbage collection).
2. What does purge do?
   (1) Removes secondary index records that no longer correspond to existing rows.
   (2) Removes records that carry the delete-marked flag.
   (3) Deletes undo logs that are no longer needed.
3. Starting with 5.6, purge runs in a separate thread:
   (1) innodb_purge_threads = 1
   (2) innodb_max_purge_lag = 0
   (3) innodb_purge_batch_size = 300
4. Case: after deleting large amounts of old data, a statistics query such as MIN(pkid) becomes slow.

Insert Buffer / Change Buffer

1. Improves I/O efficiency by turning the random I/O of IUD (insert/update/delete) operations on non-unique secondary indexes into sequential I/O.
2. Working mechanism:
   (1) First check whether the target non-clustered index page is in the buffer pool; if so, apply the change directly.
   (2) If not, put the change into a change buffer object first. (The change buffer is itself a kind of B+ tree; it caches at most 2 KB of records at a time.)
   (3) When the secondary index page is later read into the buffer pool, the records for that page in the insert buffer are merged into the secondary index page.
3. innodb_change_buffer_max_size
4. innodb_change_buffering
   Notes:
   (1) A fast shutdown does not merge the insert buffer.
   (2) TPS is affected while the insert buffer is being merged.
   (3) The insert buffer occupies part of the buffer pool; if there are few secondary indexes, consider shrinking or disabling it.

Double Write

1. Purpose: to guarantee the reliability of data writes (to prevent data page corruption and to repair it).
2. Needed because InnoDB suffers from the partial write problem:
   (1) A crash can occur when only part of a 16 KB page has been written.
   (2) The redo log records logical operations, not physical blocks, so such a page cannot be recovered from the redo log alone.
3. How the partial write problem is solved:
   (1) Double write.
   (2) Two 1 MB areas, 2 MB in total (both as disk files and as memory space).
   (3) When pages are flushed, they are first written sequentially to the doublewrite buffer.
   (4) They are then flushed back to their locations on disk.
4. On hardware or filesystems that guarantee atomic writes, doublewrite can be turned off.
5. It can also be turned off on slaves.
6. The doublewrite area is written sequentially, so the performance penalty is small (the loss is relatively larger on SSD devices).
7. Starting with MySQL 5.7, InnoDB automatically determines whether to turn off the doublewrite buffer on PCIe SSD devices.
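The partial-write recovery described above can be illustrated with a small, self-contained simulation. This is only a conceptual sketch in Python, not InnoDB's actual implementation: real InnoDB stores its own checksums inside each page's header and trailer, and the doublewrite area lives in the system tablespace; here md5 and plain byte strings are stand-ins.

```python
import hashlib
import os

PAGE_SIZE = 16 * 1024  # InnoDB default page size (16 KB)

def checksum(page: bytes) -> bytes:
    """Per-page checksum; md5 stands in for InnoDB's own page checksum."""
    return hashlib.md5(page).digest()

# A 16 KB page and its known-good checksum.
page = os.urandom(PAGE_SIZE)
good_sum = checksum(page)

# Doublewrite step 1: the page is first written sequentially to the
# doublewrite buffer, before being written to its final data-file location.
doublewrite_copy = bytes(page)

# Simulate a partial (torn) write: the server crashes after only the
# first 4 KB of the 16 KB page reached the data file.
torn = page[:4096] + os.urandom(PAGE_SIZE - 4096)

def recover(data_file_page: bytes, dw_copy: bytes, expected_sum: bytes) -> bytes:
    """On crash recovery: if the data-file page fails its checksum,
    restore it from the doublewrite copy; otherwise keep it as-is."""
    if checksum(data_file_page) != expected_sum:
        return dw_copy
    return data_file_page

recovered = recover(torn, doublewrite_copy, good_sum)
assert recovered == page  # the torn page was repaired from the doublewrite copy
```

The key point the sketch makes concrete: the redo log alone cannot repair the torn page, because redo records describe changes to a page that is assumed to be intact; the doublewrite copy supplies that intact page first, after which redo can be applied.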
