MySQL Note Two

Table of Contents
- 1. Preface
- 2. How the Master Thread Works
- 2.1. Main loop (loop)
- 2.2. Background loop
- 2.3. Flush loop
- 2.4. Suspend loop
- 3. InnoDB Key Features
- 3.1. Insert buffer
- 3.2. Doublewrite (double write)
- 3.3. Adaptive hash index (AHI), enabled by default
- 3.4. Asynchronous IO (AIO)
- 3.5. Flush neighbor page
- 4. MySQL Startup, Shutdown, and Recovery
- 4.1. innodb_fast_shutdown
- 4.2. innodb_force_recovery
1. Preface

2. How the Master Thread Works

The master thread is the highest-priority thread in InnoDB; it is responsible for managing the buffer pool and flushing data to disk.
2.1. Main loop (loop)

The main loop is where most of the work happens. It contains two kinds of operations: those performed every second and those performed every 10 seconds.
- Operations every second:
- Flush the log buffer to disk (always)
- Merge the insert buffer (possible)
- Flush up to 100 dirty pages in the InnoDB buffer pool to disk (possible)
- If there is currently no user activity, switch to the background loop
- Operations every 10 seconds:
- Flush 100 dirty pages to disk (possible)
- Merge up to 5 insert buffer pages (always)
- Flush the log buffer to disk (always)
- Delete useless undo pages (always)
- Flush 100 or 10 dirty pages to disk (always)
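The per-second logic above can be sketched roughly as follows. This is a minimal sketch: the function name, the action strings, and the thresholds are illustrative stand-ins for InnoDB's internal heuristics, not actual InnoDB code.

```python
def one_second_tick(dirty_ratio, recent_io_per_sec, max_dirty_pct=90):
    """Sketch of the master thread's per-second work (illustrative)."""
    actions = ["flush_log_buffer"]                 # always: log buffer -> disk
    if recent_io_per_sec < 5:                      # possible: I/O load is light
        actions.append("merge_insert_buffer")
    if dirty_ratio * 100 > max_dirty_pct:          # possible: too many dirty pages
        actions.append("flush_up_to_100_dirty_pages")
    if recent_io_per_sec == 0:                     # no user activity at all
        actions.append("switch_to_background_loop")
    return actions
```

The `< 5` cutoff and `max_dirty_pct` stand in for InnoDB's real heuristics (e.g. the `innodb_max_dirty_pages_pct` parameter); the point is only the always/possible structure of the loop.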
2.2. Background loop
- Delete useless undo pages (always)
- Merge 20 insert buffer pages (always)
- Jump back to the main loop (always)
- Keep flushing 100 pages at a time until the condition is met (possible)
2.3. Flush loop

2.4. Suspend loop

3. InnoDB Key Features

3.1. Insert buffer
- Two conditions must hold for the insert buffer to be used:
- The index is a secondary index
- The index is not unique
- Background: clustered index vs. nonclustered (secondary) index
- Inserts into nonclustered indexes are slow because the leaf pages are accessed discretely (random I/O). The insert buffer therefore works like this:
- First check whether the target nonclustered index leaf page is in the buffer pool; if it is, insert into it directly
- If it is not, put the change into the insert buffer first
- At a certain frequency, the insert buffer is merged with the secondary index leaf pages
This usually lets multiple inserts be merged into a single operation on one page, which greatly improves nonclustered insert performance. It can be observed with:
```
mysql> SHOW ENGINE INNODB STATUS\G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
141027 21:38:14 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 8 seconds
.....
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 276707, node heap has 0 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
```
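The insert-buffer decision described above can be sketched like this. `Index`, `on_insert`, and the returned strings are illustrative names for the purpose of the sketch, not InnoDB internals.

```python
from collections import namedtuple

# Hypothetical: a minimal stand-in for index metadata.
Index = namedtuple("Index", "name is_secondary is_unique")

def on_insert(index, page_in_buffer_pool, ibuf):
    """Decide whether an index insert hits the page or is buffered."""
    # The insert buffer applies only to non-unique secondary indexes.
    if not index.is_secondary or index.is_unique:
        return "insert_into_page"
    # If the leaf page is already in the buffer pool, insert directly.
    if page_in_buffer_pool:
        return "insert_into_page"
    # Otherwise record the change for a later merge with the leaf page.
    ibuf.append(index.name)
    return "buffered"
```

The later merge step (combining buffered changes into one write per leaf page) is what turns many random I/Os into fewer, larger ones.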
3.2. Doublewrite (double write)

- Why doublewrite: if a crash happens while a 16 KB page is being written, only part of it (say 8 KB) may reach disk; this is a partial page write. The redo log works at the page level, so if the page itself is corrupted, replaying the redo log against it is meaningless. With doublewrite we keep a copy of the page, restore the page from that copy first, and then replay the redo log.
- Doublewrite architecture: doublewrite has two parts, a 2 MB doublewrite buffer in memory and 2 contiguous extents (also 2 MB) in the shared tablespace on disk. When dirty pages in the buffer pool are flushed, they are not written to the data files directly; instead they are copied into the in-memory doublewrite buffer, written from there to the doublewrite area of the shared tablespace in two sequential 1 MB chunks, and the disk is synced immediately. Only then are the pages written to their real positions. Observe with:
```
mysql> SHOW GLOBAL STATUS LIKE 'innodb_dblwr%'\G
*************************** 1. row ***************************
Variable_name: Innodb_dblwr_pages_written
        Value: 0
*************************** 2. row ***************************
Variable_name: Innodb_dblwr_writes
        Value: 0
2 rows in set (0.00 sec)
```
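The doublewrite flow can be sketched as follows. This is illustrative only: Python lists stand in for the 2 MB in-memory buffer, the doublewrite area of the shared tablespace, and the data files, with 64 pages of 16 KB per 1 MB chunk.

```python
PAGES_PER_MB = 64  # 64 pages of 16 KB = 1 MB

def flush_with_doublewrite(dirty_pages, dw_buffer, dw_area, data_file):
    """Sketch of the doublewrite flow (lists stand in for disk areas)."""
    dw_buffer.extend(dirty_pages)            # 1. copy pages into the 2 MB buffer
    for start in range(0, len(dw_buffer), PAGES_PER_MB):
        # 2. sequential 1 MB writes to the shared-tablespace area (then fsync)
        dw_area.extend(dw_buffer[start:start + PAGES_PER_MB])
    for page in dw_buffer:                   # 3. write pages to their real positions
        data_file.append(page)
    dw_buffer.clear()
```

The key property is ordering: the sequential copy in the doublewrite area is durable before any in-place page write starts, so a torn in-place write can always be repaired from the copy.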
3.3. Adaptive hash index (AHI), enabled by default

- The InnoDB storage engine monitors queries against the index pages of a table and builds a hash index when it observes that one would speed lookups up. The AHI is built from B+ tree pages already in the buffer pool, so building it is fast.
- Requirements for building an AHI:
- The page is accessed with the same pattern continuously (the same query conditions)
- The page was accessed 100 times with that pattern
- The page was accessed N times through that pattern, where N = number of records on the page / 16
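The two numeric requirements can be sketched as a single predicate. The names here are illustrative and the real InnoDB heuristic is more involved; this only encodes the counts quoted above.

```python
def should_build_ahi(pattern_hits, page_hits_via_pattern, records_on_page):
    """Sketch of the AHI trigger: both numeric conditions must hold."""
    n = records_on_page / 16                 # N = records on the page / 16
    return pattern_hits >= 100 and page_hits_via_pattern >= n
```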
3.4. Asynchronous IO (AIO)

Not much to say here; InnoDB simply gets the usual benefits of asynchronous I/O.
3.5. Flush neighbor page

When a dirty page is flushed, InnoDB checks all pages in the same extent as that page; any that are also dirty are flushed together with it.
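The neighbor selection can be sketched like this (illustrative names; assumes the usual layout of 64 pages of 16 KB per 1 MB extent):

```python
PAGES_PER_EXTENT = 64  # 64 pages of 16 KB = one 1 MB extent

def neighbors_to_flush(page_no, all_pages, is_dirty):
    """Sketch: flush every dirty page in the same extent as page_no."""
    extent = page_no // PAGES_PER_EXTENT
    return [p for p in all_pages
            if p // PAGES_PER_EXTENT == extent and is_dirty(p)]
```

The payoff is that several dirty pages in one extent become a single, more sequential batch of writes instead of separate random ones.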
4. MySQL Startup, Shutdown, and Recovery

4.1. innodb_fast_shutdown

At shutdown, behavior is controlled by the parameter innodb_fast_shutdown. Its values are 0, 1, and 2; the default is 1.
- 0: at shutdown, MySQL completes a full purge, merges the insert buffer, and flushes all dirty pages. Before upgrading MySQL, this parameter must be set to 0.
- 1: the default. Full purge and insert buffer merging are skipped, but dirty pages in the buffer pool are still flushed back to disk.
- 2: neither full purge, nor insert buffer merging, nor buffer pool flushing is done; only the log is written to the log file, and recovery is performed at the next startup.
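For example, the full clean shutdown of value 0 can be requested at runtime before an upgrade, since innodb_fast_shutdown is a dynamic global variable (syntax as of the MySQL versions of this era; verify against your version):

```sql
-- Request a full clean shutdown (full purge, insert buffer merge,
-- flush all dirty pages) before stopping mysqld, e.g. ahead of an upgrade.
SET GLOBAL innodb_fast_shutdown = 0;
-- then shut mysqld down normally
```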
4.2. innodb_force_recovery

This parameter affects crash recovery; the default is 0.
- 1: ignore corrupt pages that are detected
- 2: prevent the master thread from running
- 3: do not roll back transactions
- 4: do not merge the insert buffer
- 5: do not look at undo logs; treat uncommitted transactions as committed
- 6: do not perform the roll-forward (redo) operation
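innodb_force_recovery is not dynamic, so it is set before starting the damaged instance, typically in my.cnf (a sketch):

```ini
# my.cnf (sketch): start a damaged instance at the lowest recovery level,
# raising it one step at a time only if startup still fails.
[mysqld]
innodb_force_recovery = 1
```

With a nonzero value, SELECT, CREATE, and DROP are still allowed but INSERT, UPDATE, and DELETE are not, so the usual plan is to dump the data and rebuild the instance.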
Date: <2014-10-28 Tuesday>
Author: eqyun
Created: 2014-10-28 Tuesday 13:25
Emacs 24.3.1 (Org mode 8.2.10)