Improvement and application of MySQL Online DDL


This paper briefly analyzes the implementation principle of Online DDL and the points to consider when using it. For any DDL operation, the executor needs to test in advance, or clearly understand, whether the operation will affect the database during business hours; for DDL on large tables in particular, the server's IO, memory and CPU usage must be watched closely (every DBA has a tearful history with DDL statements on big tables). If you reproduce this article, please credit the source blog: www.cnblogs.com/xinysu/; copyright belongs to the author. Thank you for your support!

1 Early DDL Implementation Principles (before 5.6.7)
InnoDB's early support for DDL statements was implemented through copy table and InPlace, which work as follows:
    • Copy table mode
      • Create a new temporary table with the same structure as the original table, and execute the DDL statement on the temporary table
      • Lock the original table: DML is not allowed, queries are allowed
      • Copy the data row by row from the original table to the temporary table (this process is not sorted)
      • After the copy finishes, reads on the original table are blocked as well, so at this point the original table serves neither reads nor writes
      • Perform a rename operation to complete the DDL process
    • InPlace mode (fast index creation, only for creating and dropping indexes)
      • Create a new temporary .frm file
      • Lock the original table: DML is not allowed, queries are allowed
      • Scan the data in clustered-index order, extract the required index column data, and insert it into the new index pages
      • Reads on the original table are then blocked as well, so the original table serves neither reads nor writes
      • Perform a rename operation, replace the .frm file, and complete the DDL process

InPlace is a big improvement over copy table: there is no need to copy the entire table, the required index pages are simply created inside the original .ibd file. This saves a great deal of IO compared with copy table, speeds up DDL execution considerably, and shortens the time during which the table cannot serve writes. However, InPlace only supports creating and dropping indexes; all other DDL operations are still performed via copy table.
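A minimal sketch of the pre-5.6.7 behaviour described above (table and column names are borrowed from the test table used later in this article; the comments describe the mechanism, not measured timings):

    -- handled by fast index creation: built inside the existing .ibd, table stays readable
    CREATE INDEX ix_proid ON tbddl (ProviderID);
    DROP INDEX ix_proid ON tbddl;

    -- any other DDL falls back to copy table: full row-by-row copy, writes blocked throughout
    ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR(50);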

For an online business database, both copy table and InPlace still share one glaring drawback: the table serves no writes during the operation! INSERT, UPDATE and DELETE against the table are blocked; only SELECT is supported.

2 Online DDL Implementation Principles
When a table undergoes DDL, it may become inaccessible for minutes or even hours, with abnormal performance and response times. To improve this situation effectively, MySQL introduced Online DDL in version 5.6.7 (this article is organized and tested against the official 5.7 documentation).
Online DDL still has the two modes copy and InPlace. DDL statements that do not support Online DDL use copy; statements that do support it use InPlace. InPlace is further divided into two categories according to whether the table needs to be rebuilt, and the criterion is whether the row record format is modified. If the row record format changes (for example, modifying a column type, or adding and dropping columns), the table must be rebuilt; if it does not change, only the table's metadata is modified (for example, dropping an index, setting a default value, or renaming a column), and no rebuild is needed. See Part 4 for the specific rules per statement.
So what is the implementation principle inside the new Online DDL? (See: http://blog.itpub.net/22664653/viewspace-2056953/) There are 3 phases: prepare, execute, commit.
    • PREPARE
      • Create a new temporary .frm file
      • Hold the EXCLUSIVE_MDL lock, blocking both reads and writes
      • Determine the execution mode according to the alter type (COPY, REBUILD, NO-REBUILD)
      • Update the in-memory data dictionary object
      • Allocate a row_log object to record incremental changes (only if a rebuild is required)
      • Generate a new temporary .ibd file (only if a rebuild is required)
    • EXECUTE
      • If only metadata is modified:
        • Nothing is done in this phase
      • Otherwise:
        • Downgrade the EXCLUSIVE_MDL lock to allow reads and writes (copy mode still does not allow writes)
        • Record the incremental changes generated during DDL execution into the row_log (only for the rebuild type)
        • Scan the clustered index of the old table record by record
        • Build the corresponding entries for the new table's clustered index and secondary indexes from each record
        • Insert the constructed index entries into sort_buffer blocks
        • When a sort_buffer block is full, insert its entries into the new index
        • Apply the operations in the row_log to the new temporary table, up to the last block
    • COMMIT
      • Upgrade to the EXCLUSIVE_MDL lock again, blocking reads and writes
      • Replay the last part of the row_log increment
      • Update the InnoDB data dictionary table
      • Commit the transaction and write the redo log
      • Update statistics
      • Rename the temporary .ibd and .frm files
      • DDL complete
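As a concrete illustration of the phases above — DML flows freely during the execute phase, while the brief prepare and commit phases hold an exclusive metadata lock — here is a two-session sketch under assumed conditions (the added column extra_col is purely illustrative):

    -- session 1: an online DDL that rebuilds the table
    ALTER TABLE tbddl ADD COLUMN extra_col INT, ALGORITHM=INPLACE, LOCK=NONE;

    -- session 2, while session 1 is in the execute phase: DML proceeds and is captured in the row_log
    INSERT INTO tbddl (ItemId) VALUES ('abc');

    -- session 2, if it hits the table exactly during prepare or commit:
    -- the statement briefly shows "Waiting for table metadata lock" in SHOW PROCESSLIST
    SHOW PROCESSLIST;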
Note the row_log here: it records the data changes made to the table while the DDL is executing, so that the table can keep serving writes concurrently during the execute phase without blocking; the row_log is then applied to the new table at the end. When testing on version 5.7.17 with a DDL that supports InPlace and requires a rebuild, it was observed that data modified during the DDL is also flushed to the original .ibd file: in the test environment, while a field was being dropped from the large table testddl, 1,000,000 rows were inserted during the process and the original .ibd file grew by about 1G. One question remains open: the row_log probably does not record the full modified row format, since that would be too inefficient; the initial speculation is that it records only the primary key, and the change is then applied to the new table by primary-key lookup. Online DDL can effectively reduce the impact on the database during DDL:
    • During Online DDL, queries and DML can in most cases proceed normally, and the time the table is locked is greatly reduced, preserving the scalability of the database;
    • DDL operations that can run in-place avoid the excessive disk IO and CPU consumed by rebuilding the table via a copy, reducing the overall load on the database, so that it can maintain high performance and high throughput during the DDL;
    • DDL operations that run in-place consume less buffer pool than copying to a temporary file, avoiding the temporary performance drop seen with the old DDL behaviour, where copying the data into a staging table evicted frequently accessed pages from memory.
3 Online DDL Parameters and Options
3.1 innodb_online_alter_log_max_size
When DML occurs during an online DDL, the data modifications are recorded in the row_log, whose maximum size is set by innodb_online_alter_log_max_size (default 128M). When the table is large and heavily written while the DDL runs, this parameter can be increased to avoid error 1799 (the row_log outgrowing the limit, which aborts the DDL).
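The parameter is a dynamic global variable, so it can be raised before a large DDL without restarting; a minimal sketch (512M is only an example value):

    -- default is 134217728 (128M); 536870912 = 512M
    SET GLOBAL innodb_online_alter_log_max_size = 536870912;
    SHOW GLOBAL VARIABLES LIKE 'innodb_online_alter_log_max_size';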
3.2 Online DDL syntax
    • ALTER TABLE ... , ALGORITHM [=] {DEFAULT | INPLACE | COPY}, LOCK [=] {DEFAULT | NONE | SHARED | EXCLUSIVE}
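A minimal usage sketch of the clause above, using the index from the later tests (whether a given operation actually accepts INPLACE and NONE is listed in Part 4):

    ALTER TABLE tbddl
        ADD INDEX ix_proid (ProviderID),
        ALGORITHM = INPLACE,
        LOCK = NONE;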
3.3 LOCK option
This option adjusts how the DDL locks the table; there are 4 choices in total.
    • LOCK=EXCLUSIVE
      • An exclusive lock (X lock) is taken on the entire table; neither queries nor modifications are allowed
    • LOCK=SHARED
      • A shared lock (S lock) is taken on the entire table; queries are allowed, but data changes are not
    • LOCK=NONE
      • No lock is taken; both queries and data changes are allowed, giving the best concurrency
    • LOCK=DEFAULT
      • This is the default when LOCK is not specified
      • It takes the least restrictive lock the DDL type allows, so that queries and DML are supported as far as possible
      • It first checks whether NONE can be used for the current operation; if not, whether SHARED can be used; if not, EXCLUSIVE is used
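If the requested LOCK level cannot be honoured for a given operation, the statement is rejected rather than silently taking a stronger lock. A hedged sketch (the index name is illustrative and the error text is paraphrased, not quoted from a specific version):

    -- full-text index creation does not allow concurrent DML, so LOCK=NONE is refused
    ALTER TABLE tbddl ADD FULLTEXT INDEX ft_createdbyname (createdbyname), LOCK = NONE;
    -- fails with an error along the lines of:
    --   LOCK=NONE is not supported. Reason: Fulltext index creation requires a lock. Try LOCK=SHARED.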
3.4 ALGORITHM option
The impact of a DDL on database performance is largely determined by how it is executed: whether it can run in place, whether it requires a copy operation, and whether the entire table must be rebuilt. For example, modifying or adding a default value does not touch the data in the table, so it can finish in about a second; adding an index takes tens of seconds, because the index pages must be built and the .frm file modified, but the table data is not rebuilt; and modifying a column's data type may take minutes or longer, because the whole table must be rebuilt, which places heavy demands on CPU, IO and the buffer pool during execution. Whether the DDL runs as inplace, copy or rebuild can be selected by specifying ALGORITHM (note that not all DDL statements support in-place; see Part 4), as the sketch after the option list below shows.
    • ALGORITHM=INPLACE
    • ALGORITHM=COPY
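Explicitly requesting ALGORITHM=INPLACE is also a cheap way to find out whether an operation can avoid a table copy: if it cannot, the statement errors out instead of silently falling back to COPY. A sketch using the su column added in the later tests (error text paraphrased):

    -- changing a column's data type cannot be done in place
    ALTER TABLE tbddl MODIFY COLUMN su BIGINT, ALGORITHM = INPLACE;
    -- rejected with something like:
    --   ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.
    ALTER TABLE tbddl MODIFY COLUMN su BIGINT, ALGORITHM = COPY;  -- runs, but copies and rebuilds the whole table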
Of these two options, INPLACE performs better than COPY, because inplace neither records undo log nor writes redo log for the copied data, and it provides DML operations during execution.
4 Online DDL Support by Statement
Online DDL has different execution rules for different DDL statements; the following table describes in detail the Online DDL support for each kind of statement. Column descriptions:
    • In-place?
      • Whether ALGORITHM=INPLACE is supported
    • Rebuilds Table?
      • Whether the table will be rebuilt
      • There are two ways to rebuild a table: in place, or by copying to a temporary file
      • If ALGORITHM=INPLACE is supported, the rebuild is done in place (light yellow mark in the table)
      • If ALGORITHM=INPLACE is not supported, the rebuild copies the data to a temporary file, and UPDATE, DELETE and INSERT are not supported during it (sepia mark in the table)
    • Permits Concurrent DML?
      • Whether DML against the table is supported concurrently during the DDL
      • DML is not supported while adding spatial indexes or full-text indexes
      • When DML is allowed, the LOCK option controls whether queries and/or changes are served
        • LOCK=NONE supports queries and UPDATE, INSERT and DELETE
        • LOCK=SHARED supports queries only
    • Only Modifies Metadata?
      • Whether only metadata is modified

5 Tests
5.1 Analysis of 4 typical DDL operations
Classifying DDL by whether it supports inplace, whether it requires a rebuild, and whether it only modifies metadata, one DDL statement of each kind was selected for testing; considering the effect of changing a varchar length, an extra case was added for that.
5.1.1 DDL test content
    • Test DB environment: table name tbddl, table size 1G, 5,000,000 rows
    • Test flow: open a transaction and query without committing → execute the DDL → commit the query transaction → execute DML → open a transaction and execute DML without committing → commit the DML
    • Test DDL SQL
      • ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR (20);
      • ALTER TABLE tbddl ADD xinysu int;
      • CREATE INDEX ix_proid on TBDDL (ProviderID);
      • Alter TABLE TBDDL ALTER COLUMN XINYSU SET DEFAULT 123456;
      • ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR (50); # UTF8 character set: 3 bytes per character, 50 characters is 150 bytes, less than 256 bytes
      • ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR (100); # UTF8 character set: 3 bytes per character, 100 characters is 300 bytes, greater than 256 bytes
    • Test focus points
      • Behaviour with old_alter_table enabled and disabled
      • What locks are held in the prepare and commit phases?
      • What locks are held in the execute phase?
      • Server performance during execution (Zabbix monitoring)
      • Database concurrency during execution (sysbench stress testing)
5.1.2 DDL test conclusions
The test process is not described here; the results are given directly, and interested readers can test for themselves. varchar is stored by character, and the bytes per character depend on the character set; UTF8 uses 3 bytes per character. When the varchar content is less than 256 bytes, only 1 byte is needed to store the actual length; when it is 256 bytes or more, 2 bytes are needed. For example, for varchar(10) under UTF8, storing n characters (0<=n<=10) occupies n*3+1 bytes; for varchar(100) under UTF8, storing n characters (0<=n<=100) occupies n*3+2 bytes. With this understanding, it is easy to see how the DDL handles growing or shortening a column. Assume column varchar(M) needs to grow or shrink to varchar(N), with character set UTF8 (a sketch follows the list below):
    • When 3M<256 and 3N<256, the length byte does not need to change (both need 1 byte), so the row records do not need to change and only the metadata is modified;
    • When 3M>=256 and 3N>=256, the length bytes do not need to change (both need 2 bytes), so the row records do not need to change and only the metadata is modified;
    • When 3M<256 and 3N>=256, the length bytes must change from 1 to 2, the row records must change, and the Online DDL uses the copy table method;
    • When 3M>=256 and 3N<256, the length bytes must change from 2 to 1, the row records must change, and the Online DDL uses the copy table method
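Per these rules, growing a varchar within the same length-byte class is a metadata-only change, while crossing the boundary forces a copy. A sketch for the ItemId column from the tests (assuming it is currently VARCHAR(20) under UTF8):

    -- 20 -> 50 characters is 60 -> 150 bytes: still 1 length byte, metadata-only, INPLACE succeeds
    ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR(50), ALGORITHM=INPLACE, LOCK=NONE;

    -- 50 -> 100 characters is 150 -> 300 bytes: now 2 length bytes are needed,
    -- INPLACE is rejected and the change has to run with ALGORITHM=COPY
    ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR(100), ALGORITHM=INPLACE, LOCK=NONE;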

5.2 Handling multiple DDL statements on the same table
Before Online DDL, the routine practice was to merge all DDL statements on the same table into one SQL statement, to avoid the drawbacks of repeated rebuilds, repeated locking, and a longer total time without DML. With the introduction of Online DDL, however, the following points change:
    • Except for the individual DDL statements that do not support inplace, DDL statements no longer hold an X lock throughout execution, that is, the table still serves DML
    • Lock granularity: within a single merged DDL statement, the highest lock level among its operations applies
    • Ease of maintenance
After testing, it is recommended to merge them into 3 classes (a personal recommendation, for reference only), as follows.
    • Why is copy table handled separately?
      • Because DML is not allowed during this class of operations, it is recommended to run them as a single DDL statement and not merge them with inplace DDL statements;
    • Why is inplace split into 2 classes?
      • Ease of maintenance
        • DDL that only modifies metadata executes quickly; to keep administration simple, instead of pasting all SQL into one heap, the metadata-only DDL statements form a class of their own
      • The rebuild-required statements form the other class and are merged together, to avoid repeated rebuilds that waste disk IO and CPU.
For example, suppose an online release needs the following changes to table tbddl: change 1 field from int to varchar, add 3 fields, 2 indexes and 2 default values, and grow the length of 2 columns. As separate statements, the SQL is:
    • ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR (20);
    • ALTER TABLE tbddl ADD su int;
    • ALTER TABLE tbddl ADD xin varchar (40);
    • ALTER TABLE tbddl ADD yu int;
    • CREATE INDEX ix_su ON tbddl (su);
    • CREATE INDEX ix_yu ON tbddl (yu);
    • ALTER TABLE tbddl ALTER COLUMN createdbyid SET DEFAULT 123456;
    • ALTER TABLE tbddl ALTER COLUMN ItemID SET DEFAULT 654321;
    • ALTER TABLE tbddl MODIFY COLUMN createdbyname VARCHAR (70);
    • ALTER TABLE tbddl MODIFY COLUMN modifiedbyname VARCHAR (...);
After testing, the recommended grouping is as follows:
    • ALTER TABLE tbddl MODIFY COLUMN ItemId VARCHAR (20);
    • ALTER TABLE tbddl ADD su int, ADD xin varchar (40), ADD yu int, MODIFY COLUMN modifiedbyname VARCHAR (...), ADD INDEX ix_su (su), ADD INDEX ix_yu (yu);
    • ALTER TABLE tbddl ALTER COLUMN createdbyid SET DEFAULT 123456, ALTER COLUMN ItemID SET DEFAULT 654321, MODIFY COLUMN createdbyname VARCHAR (70);
5.3 Handling database performance anomalies during DDL execution
During DDL execution, pay close attention to the database server's CPU and IO, and watch the connection pool and slow queries. If an anomaly occurs, what should be done? Suppose a column is being added to the large table tbddl and the operation turns out to affect the online business and must be stopped urgently; this can be done with the following steps (a fuller sketch follows this list):
    • SHOW PROCESSLIST;
    • KILL <process id>;  (the ID of the connection running the DDL)
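A slightly more targeted version of the two steps above, filtering the processlist for the running ALTER (a sketch; the KILL argument is whatever ID the query returns):

    SELECT id, time, state, info
    FROM information_schema.processlist
    WHERE info LIKE 'ALTER TABLE tbddl%';

    KILL <id returned above>;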
5.4 Database crash during DDL execution
If the server goes down during a DDL, what happens to the recovery and restart of the database? Do the temporary files still exist? Is the incomplete DDL operation resumed automatically during recovery? If so, how is it handled? If not, does it affect re-running it manually?
On release 5.7.17, the 4 classes of DDL statements were tested by bringing the database down while the DDL was executing. The DDL did not affect the recovery and restart of the database, and the incomplete DDL statement was not re-executed automatically. Because the crash left no time to clean up the temporary files, they still exist after the database recovers. Since the DDL never committed, the data dictionary and the table's metadata were not modified, so manually re-executing the DDL statement afterwards does not report a conflict. (This part draws on several blogs for the analysis; the tested version is 5.7.17.) The test process is not described in detail here; the conclusions are given directly, and interested readers are welcome to test for themselves and discuss.
5.5 Impact of DDL on master-slave replication
Suppose a DDL takes 10 hours to execute, of which waiting for the metadata lock plus the rebuild or inplace work takes 5 hours. Because the slave applies the relay log with a single-threaded SQL thread, the resulting replication lag must be taken into account.
While the DDL is executing on the master it has not committed, so it is not replicated to the slave, and the slave continues to apply other data changes normally; there is no problem at this stage. But once the DDL commits on the master, the binlog reaches the slave's relay log through the IO thread, and the slave's single-threaded SQL thread then needs at least 5 hours to apply it; during those hours it cannot keep up with the binlog the master keeps generating. The slave therefore lags severely, and whether this lag is acceptable must be included when assessing the impact of running the DDL.
If the lag on the slave cannot be accepted, what can be done? One idea is to enable parallel replication on the slave. When enabling parallel replication, note the following:
    • Usage notes
      • When the slave is lagging badly, the parameters can be enabled so that the relay log is applied by multiple threads in parallel
      • On a database with little write volume it is not recommended to enable it; the slave may actually perform worse
    • Configuration notes
      • Note that master_info_repository and relay_log_info_repository should be set to TABLE; by default they are written to the master.info and relay-log.info files, and flushing those two files frequently has a relatively large performance impact — according to teacher Kang Jiang's stress tests, the difference is 20-50%
      • slave_parallel_workers is recommended to be set to the slave's core count
      • slave_parallel_type
        • DATABASE: transactions on different databases are replayed in parallel on the slave
        • LOGICAL_CLOCK: transactions that group-committed on the master are replayed in parallel on the slave; to reduce the lag caused by a DDL, this is the setting to use (a configuration sketch follows this list)
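A configuration sketch for enabling parallel replication on the slave as described above (the worker count 8 is only an example and should match the slave's cores; to survive a restart the same settings also need to go into my.cnf):

    -- run on the slave; replication must be stopped to change these
    STOP SLAVE;
    SET GLOBAL master_info_repository    = 'TABLE';
    SET GLOBAL relay_log_info_repository = 'TABLE';
    SET GLOBAL slave_parallel_type       = 'LOGICAL_CLOCK';
    SET GLOBAL slave_parallel_workers    = 8;
    START SLAVE;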
6 Online DDL Considerations
  • Disk space
    • When rebuilding, is there enough space under datadir?
      • A full copy of the .ibd file is made, so make sure there is enough free space (a size-estimation sketch follows this list)
    • When rebuilding, is innodb_online_alter_log_max_size large enough?
      • The row-change log produced by DML during the rebuild must fit within it
    • When using inplace, consider whether tmpdir has enough space
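Before the rebuild, the space the temporary file will need can be estimated from the current table size; a minimal sketch using information_schema (the numbers come from cached statistics, so treat them as approximate):

    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
           ROUND(data_free / 1024 / 1024)                    AS free_mb
    FROM information_schema.tables
    WHERE table_schema = DATABASE() AND table_name = 'tbddl';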
  • Is the replication lag caused by the DDL acceptable?
    • While the online DDL runs on the master it has not committed, so other concurrent changes still replicate normally to the slave
    • After the master commits, the DDL is replicated to the slave
    • Since the slave applies it through a single-threaded sql_thread, a DDL that takes 1 hour to execute will make the slave lag by 1 hour or more
    • If such lag is not allowed, parallel replication can be used to mitigate it
  • Does the row_log check for duplicate values or modification conflicts?
    • Yes, it is checked against the primary key and unique constraints (a sketch follows this item).
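A hedged sketch of how that check surfaces (the unique index name is illustrative, and the exact error text may vary by version): a duplicate introduced by concurrent DML is only detected when the row_log is applied, so it is the ALTER, not the INSERT, that fails.

    -- session 1: build a unique index online
    ALTER TABLE tbddl ADD UNIQUE INDEX uk_itemid (ItemId), ALGORITHM=INPLACE, LOCK=NONE;

    -- session 2, while session 1 is still running: the duplicate rows insert successfully
    INSERT INTO tbddl (ItemId) VALUES ('dup'), ('dup');

    -- session 1 then aborts when the row_log is replayed, with a duplicate-key error (ERROR 1062)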
  • How to abort a DDL operation, whether copy table or inplace
    • SHOW FULL PROCESSLIST;
    • KILL <id>;  (the ID of the DDL statement's connection)
    • After doing this, the DDL can still be executed again later without any conflict, and the temporary .ibd and .frm files that were created are deleted automatically.
  • Server crash during copy table or inplace DDL
    • In both cases, a DDL statement that did not finish will not continue after the crash
    • The temporary .frm and .ibd files it generated are not deleted; they can be removed manually, and even if they are not, they do not prevent executing the DDL again
    • However, after the MySQL service recovers, it is recommended to delete the unused temporary files
  • For multiple DDL statements on the same table, do not execute them one at a time
    • Merge them into groups according to whether they support inplace and whether they require a rebuild
  • How to view DDL progress (unresolved)
    • With a rebuild, progress can be estimated from the growth of the .ibd file, but how can it be checked for inplace? Is there a better way to view it? Does performance_schema provide a suitable query? (One possible approach is sketched below.)
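From 5.7, performance_schema can report ALTER TABLE progress through InnoDB stage events; a sketch of the queries involved (the instrument pattern follows the 5.7 documentation; availability should be checked on the actual build):

    -- enable the InnoDB alter-table stage instruments and the stage consumers
    UPDATE performance_schema.setup_instruments
       SET ENABLED = 'YES', TIMED = 'YES'
     WHERE NAME LIKE 'stage/innodb/alter%';
    UPDATE performance_schema.setup_consumers
       SET ENABLED = 'YES'
     WHERE NAME LIKE 'events_stages_%';

    -- while the ALTER is running, poll the progress counters
    SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
      FROM performance_schema.events_stages_current;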
