MySQL 8.0 version Update · Performance Optimization Chapter

Tags: mutex, percona, percona server


This article summarizes some of the performance-related changes in MySQL 8.0. As new iterations are released, the article will continue to be updated until the formal GA.

MySQL 8.0.0

WL#9387: InnoDB: Group purging of rows by table ID

This problem was first reported as a bug by Domas, a Facebook engineer. InnoDB uses multiple threads for undo purge, but the policy for assigning undo records was not very reasonable: records were simply handed out round-robin. This means that if you delete a large amount of data from one table, the n purge threads can generate heavy index lock conflicts (for example, during index page merges and reorganizations).

In WL#9387, when undo log records are parsed they are grouped by table_id, so that records with the same table ID are assigned to the same purge thread (see trx_purge_attach_undo_recs).

Of course, this also means that a single-table workload, even one that would produce no conflicts, cannot benefit from multi-threaded purge, which is a drawback.

WL#8423: InnoDB: Remove the buffer pool mutex

This is a very welcome improvement, contributed by Percona as a patch (bug#75534). It splits the big InnoDB buffer pool mutex into several smaller locks to reduce lock contention:

Allocating a free block (buf_LRU_get_free_block):

    • Get one from the free list: buf_pool_t::free_list_mutex

    • Evict a free page from the unzip_LRU/LRU: buf_pool_t::lru_list_mutex

Batch scan of the LRU (buf_do_LRU_batch): buf_pool_t::lru_list_mutex

Batch scan of the flush list (buf_do_flush_list_batch): buf_pool_t::flush_list_mutex

Adding a dirty page to the flush list (buf_flush_insert_into_flush_list): buf_pool_t::flush_list_mutex

Removing a page from the flush list after it has been written back to disk (buf_flush_write_complete): buf_pool_t::flush_state_mutex / flush_list_mutex

Evicting a page from the LRU (buf_LRU_free_page): buf_pool_t::lru_list_mutex, plus buf_pool_t::free_list_mutex (buf_LRU_block_free_non_file_page)

buf_flush_LRU_list_batch acquires each block's lock with mutex_enter_nowait; if that fails, the block is being used by another session, and the block is simply skipped.
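The nowait pattern can be illustrated with std::mutex::try_lock. This is a toy analogue, not InnoDB code; the struct and function names are made up for the sketch:

```cpp
#include <mutex>
#include <vector>

// Sketch of buf_flush_LRU_list_batch's nowait pattern: try to take each
// block's lock; on failure, skip the block instead of stalling the
// whole LRU scan behind one busy page.
struct Block {
  std::mutex m;
  bool flushed = false;
};

size_t flush_lru_batch(std::vector<Block>& lru) {
  size_t n_flushed = 0;
  for (Block& b : lru) {
    if (!b.m.try_lock()) continue;  // held by another session: skip it
    b.flushed = true;               // pretend to flush the page
    ++n_flushed;
    b.m.unlock();
  }
  return n_flushed;
}
```

The trade-off is that a busy block may be visited again on a later batch, in exchange for never blocking the scanning thread.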

Modifications of some variables that used to be protected by buf_pool_t::mutex are now protected by memory barriers (os_rmb / os_wmb) instead, as in the following functions:

btr_search_enable()

buf_resize_thread()

buf_get_withdraw_depth()

buf_LRU_get_free_block()
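The barrier pattern resembles release/acquire publication. Below is a simplified C++ analogue using std::atomic rather than InnoDB's os_wmb/os_rmb macros (the real code uses plain variables plus explicit barriers; the names here are illustrative):

```cpp
#include <atomic>

// Analogue of the os_wmb/os_rmb usage: a writer publishes a state
// change with release (write-barrier) semantics; readers observe it
// with acquire (read-barrier) semantics, with no mutex involved.
std::atomic<bool> search_flag{false};

void enable_search() {
  // release: all writes before this store become visible to a reader
  // that observes the flag as true
  search_flag.store(true, std::memory_order_release);
}

bool search_enabled() {
  // acquire: pairs with the release store above
  return search_flag.load(std::memory_order_acquire);
}
```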

By splitting the lock, contention on the global mutex is reduced and the buffer pool scales better. This feature actually appeared in Percona Server many years ago, but was not merged until MySQL 8.0.

WL#7170: InnoDB buffer estimates for tables and indexes

It primarily provides the optimizer with more accurate information about whether data resides on disk or in memory, so that the optimizer can compute costs more accurately.

A global object (buf_stat_per_index_t) is added to manage the page counts of all indexes.

To avoid introducing new global lock overhead, a lock-free hash structure (include/ut0lock_free_hash.h) is implemented and used to store the index information; the key is the index ID. (The index ID is currently unique, but future changes are not precluded.)

Increment count:

1. A page has just been read from disk into memory (buf_page_io_complete → buf_page_monitor)

2. A new page is created (btr_page_create)

Decrement count: when a page is freed from the LRU (buf_LRU_block_remove_hashed)

A new INFORMATION_SCHEMA.INNODB_CACHED_INDEXES table is added to report the number of in-memory pages per index, with the following structure:

mysql> SHOW CREATE TABLE INNODB_CACHED_INDEXES\G
*************************** 1. row ***************************
       Table: INNODB_CACHED_INDEXES
Create Table: CREATE TEMPORARY TABLE `INNODB_CACHED_INDEXES` (
  `SPACE_ID` int(11) unsigned NOT NULL DEFAULT '0',
  `INDEX_ID` bigint(21) unsigned NOT NULL DEFAULT '0',
  `N_CACHED_PAGES` bigint(21) unsigned NOT NULL DEFAULT '0'
) ENGINE=MEMORY DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

-- Joining with table names / index names:

SELECT
  tables.name AS table_name,
  indexes.name AS index_name,
  cached.n_cached_pages AS n_cached_pages
FROM
  information_schema.innodb_cached_indexes AS cached,
  information_schema.innodb_sys_indexes AS indexes,
  information_schema.innodb_sys_tables AS tables
WHERE
  cached.index_id = indexes.index_id
  AND indexes.table_id = tables.table_id;

Related worklog: WL#7168: API for estimates for how much of table and index data is in a memory buffer

WL#9383: InnoDB: Add an option to turn off/on the deadlock checker

This adds an option (innodb_deadlock_detect) to dynamically turn off deadlock detection, which has a significant effect in scenarios such as hotspot updates. I have written about this in a previous blog post; interested readers can look it up.

bug#77094

This optimization derives from an AliSQL contribution. It mainly improves the scalability of InnoDB redo log writing: through a double-buffer mechanism, the log can be flushed to disk while mini-transactions (MTRs) continue to commit. For details, see the monthly report I wrote.

WL#7093: Optimizer provides InnoDB with a bigger buffer

To reduce how long B-tree locks are held, InnoDB already keeps a small cache buffer when reading data. For sequential record scans, under strict conditions InnoDB uses a row cache to read 8 consecutive records at a time (converting them into MySQL's record format), stored in the thread-private row_prebuilt_t::fetch_cache. This way, multiple records are obtained in one pass; after the server layer finishes processing one record, it fetches the next directly from the cache without descending the B-tree again, until the cached data is exhausted and the next round begins.
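The fetch_cache behavior can be modeled like this. This is a toy illustration only; the structure, the string "rows", and the counter are inventions of the sketch, and only the 8-record batch idea comes from the article:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy model of row_prebuilt_t::fetch_cache: up to 8 consecutive
// records are pulled in under one B-tree lookup; later fetches are
// served from the thread-private cache until it runs dry.
constexpr size_t FETCH_CACHE_SIZE = 8;

struct FetchCache {
  std::vector<std::string> rows;  // records already in MySQL format
  size_t next = 0;                // next cached row to hand out
  size_t btree_lookups = 0;       // counts trips into the "B-tree"

  // Pretend data source standing in for the real B-tree scan.
  std::string read_from_btree(size_t i) {
    return "row" + std::to_string(i);
  }

  std::string fetch(size_t i) {
    if (next >= rows.size()) {  // cache exhausted: refill in one pass
      ++btree_lookups;
      rows.clear();
      next = 0;
      for (size_t k = 0; k < FETCH_CACHE_SIZE; ++k)
        rows.push_back(read_from_btree(i + k));
    }
    return rows[next++];
  }
};
```

Scanning 16 rows costs only two "descents" in this model instead of 16, which is exactly the saving the row cache provides.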

WL#7093 introduces a new interface: since the optimizer can estimate the number of rows likely to be read, it can provide the storage engine with a row buffer of a more appropriate size to hold the required data. Large sequential data scans benefit from this bigger record buffer.

Whether to use the record buffer is determined automatically by the optimizer. A new class, Record_buffer, is added to manage it; the buffer size is capped at 128KB, which is currently hard-coded and not configurable.
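The sizing idea can be sketched as "optimizer row estimate times row length, capped at 128KB". This is a simplified guess at the policy, not the actual set_record_buffer code; the function name and signature here are hypothetical:

```cpp
#include <cstddef>

// Hypothetical sketch of record-buffer sizing: size the buffer for the
// optimizer's row estimate, but never exceed the hard-coded 128KB cap.
constexpr size_t MAX_RECORD_BUFFER = 128 * 1024;

size_t record_buffer_size(double estimated_rows, size_t row_length) {
  double wanted = estimated_rows * static_cast<double>(row_length);
  if (wanted > static_cast<double>(MAX_RECORD_BUFFER))
    return MAX_RECORD_BUFFER;  // cap for huge scans
  return static_cast<size_t>(wanted);
}
```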

The function set_record_buffer decides on and allocates the record buffer, and hands it to the engine layer through the new API (handler::ha_set_record_buffer).

The buffer itself is engine-independent: it is allocated at the server layer and passed to the engine layer through the handler member m_record_buffer.

A new interface is added to determine whether the record buffer is supported. Currently only InnoDB supports it, and the following conditions must be met (ref. set_record_buffer):

    • The access type is not ref, ref_or_null, index_merge, range, index, or ALL

    • Not a temporary table

    • Not a loose index scan

    • The InnoDB engine layer agrees (row_prebuilt_t::can_prefetch_records):

return select_lock_type == LOCK_NONE  /* read-only query */
    && !m_no_prefetch                 /* prefetch is allowed */
    && !templ_contains_blob           /* no BLOB, TEXT, JSON, or GEOMETRY columns */
    && !templ_contains_fixed_point    /* no spatial DATA_POINT type */
    && !clust_index_was_generated     /* user-defined primary key, or unique index implicitly used as PK */
    && !used_in_handler               /* not accessed through HANDLER */
    && !innodb_api                    /* not accessed through e.g. InnoDB memcached */
    && template_type != ROW_MYSQL_DUMMY_TEMPLATE  /* not CHECK TABLE */
    && !in_fts_query;                 /* not a full-text query */

Within InnoDB, when a record buffer has been configured, the record buffer provided by the server layer is used instead of row_prebuilt_t::fetch_cache.

The official blog has an introduction to this improvement: http://mysqlserverteam.com/mysql-8-0-faster-batch-record-retrieval/

WL#9250: Split LOCK_thd_list and LOCK_thd_remove mutexes

The purpose of this worklog is to improve performance in short-lived-connection scenarios, where operations on the THD list lead to high lock contention.

The solution is also a traditional one: partitioning. The linked list thd_list is split into multiple arrays (currently 8 partitions), and the corresponding LOCK_thd_remove and LOCK_thd_list locks are partitioned accordingly.
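The partitioning scheme can be sketched as follows. This is a simplified illustration, not server code: the THD is reduced to an integer id, and the choice of partition by id modulo 8 is an assumption:

```cpp
#include <array>
#include <list>
#include <mutex>

// Sketch of WL#9250: the global THD list is split into 8 partitions,
// each guarded by its own mutex, so connects/disconnects of different
// sessions rarely contend on the same lock.
constexpr size_t NUM_PARTITIONS = 8;

struct ThdPartition {
  std::mutex lock_thd_list;  // analogue of the partitioned LOCK_thd_list
  std::list<int> thds;       // THD* replaced by an int id for the sketch
};

std::array<ThdPartition, NUM_PARTITIONS> thd_partitions;

ThdPartition& partition_for(int thd_id) {
  return thd_partitions[static_cast<size_t>(thd_id) % NUM_PARTITIONS];
}

void add_thd(int thd_id) {
  ThdPartition& p = partition_for(thd_id);
  std::lock_guard<std::mutex> g(p.lock_thd_list);  // 1/8 of the traffic
  p.thds.push_back(thd_id);
}

void remove_thd(int thd_id) {
  ThdPartition& p = partition_for(thd_id);
  std::lock_guard<std::mutex> g(p.lock_thd_list);
  p.thds.remove(thd_id);
}
```

Operations that must see every THD (e.g. SHOW PROCESSLIST) would iterate all partitions, so the win is concentrated on the frequent add/remove path.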

Transferred from: zhaiwx_yinfeng, Yunqi (Cloud Habitat) Community
