MySQL Query Optimization

Source: Internet
Author: User
Tags: modifier, modifiers, mysql, query, optimization

In practice, the HIGH_PRIORITY modifier on SELECT is quite useful: in one case it cut average query time from 5-6 seconds to about 3 seconds.

The preceding sections focused mainly on making individual queries execute faster. MySQL also lets you change the scheduling priority of statements, which helps queries from multiple clients cooperate better so that no single client is stalled for a long time by locking. Changing priorities can also ensure that particular kinds of queries are processed faster. This section explains MySQL's default scheduling policy and the options you can use to influence it. It also covers concurrent insert operations and the effect of storage-engine locking levels on client concurrency.

For the purposes of this discussion, a client performing a retrieval (SELECT) is called a "reader," and a client performing a modification (DELETE, INSERT, REPLACE, or UPDATE) is called a "writer." MySQL's default scheduling policy can be summarized as follows:

• Write operations take precedence over read operations.
• Only one write operation on a table can be in progress at a time; write requests are processed in the order in which they arrive.
• Multiple read operations on a table can proceed simultaneously.

The MyISAM and MEMORY storage engines implement this scheduling policy with table locks. When a client accesses a table, it must first acquire the table's lock; when the client finishes with the table, the lock is released. You can acquire and release locks explicitly with the LOCK TABLES and UNLOCK TABLES statements, but in general the server's lock manager acquires locks automatically when they are needed and releases them when they are no longer needed. The type of lock acquired depends on whether the client is writing or reading.
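As a minimal sketch of the explicit locking just described, the following statements (using hypothetical table and column names) show how a session could acquire and release table locks itself rather than relying on the lock manager:

```sql
-- Acquire a read lock on t1 and a write lock on t2.
-- Other clients can still read t1, but no other client
-- can read or write t2 until the locks are released.
LOCK TABLES t1 READ, t2 WRITE;

SELECT COUNT(*) FROM t1;          -- reading t1 is allowed
UPDATE t2 SET status = 'done';    -- writing t2 is allowed

-- Release all table locks held by this session.
UNLOCK TABLES;
```

Note that while a session holds locks acquired with LOCK TABLES, it can access only the tables it locked.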
A client writing to a table must hold an exclusive lock. While the write is in progress, the table is in an inconsistent state, because deleting, adding, or modifying records may also require updating the table's indexes to match. Allowing other clients to access the table while it is being changed causes problems. It is clearly bad to let two clients write to the same table at once, since that would quickly turn the table's contents into useless garbage. But it is also bad to let a client read a table while it is being changed, because the data at the position being read may be in the middle of being modified, and the result of the read may not be accurate.

A client performing a read operation on a table must acquire a lock that prevents other clients from writing to or altering the table during the read. This lock, however, need not be exclusive: reading does not change the data, so there is no reason for one reader to block other readers. A read lock therefore allows other clients to read the table at the same time.

MySQL provides several statement modifiers that let you alter its scheduling policy:

• The LOW_PRIORITY keyword applies to DELETE, INSERT, LOAD DATA, REPLACE, and UPDATE statements.
• The HIGH_PRIORITY keyword applies to SELECT and INSERT statements.
• The DELAYED keyword applies to INSERT and REPLACE statements.

The LOW_PRIORITY and HIGH_PRIORITY modifiers affect storage engines that use table locks (such as MyISAM and MEMORY). The DELAYED modifier applies to MyISAM and MEMORY tables.

Changing the priority of statement scheduling: the LOW_PRIORITY keyword affects the execution scheduling of DELETE, INSERT, LOAD DATA, REPLACE, and UPDATE statements.
Normally, if a write request arrives while a table is being read, the writer waits for the reader to finish (a query cannot be interrupted once it has begun, so the reader is allowed to complete). If another read request arrives while the writer is waiting, the new read is blocked, because the default scheduling policy gives writers precedence over readers. When the first reader finishes, the writer runs, and only after the writer finishes does the second reader begin.

If the write is a LOW_PRIORITY request, the system does not treat it as having higher priority than reads. In that case, if a second reader arrives while the writer is waiting, the second reader is allowed to jump ahead of the writer. The writer starts only when there are no more readers. In theory, this scheduling change means a LOW_PRIORITY write could be blocked indefinitely: as long as reads already in progress are joined by new read requests, the new requests slot in ahead of the LOW_PRIORITY write.

The HIGH_PRIORITY keyword for SELECT queries works similarly. It allows a SELECT to jump ahead of a pending write, even though the write would normally take precedence. Another effect is that a high-priority SELECT executes before normal SELECT statements, because those statements remain blocked behind the write.

If you want all statements that support the LOW_PRIORITY option to be treated as low priority by default, start the server with the --low-priority-updates option. Using INSERT HIGH_PRIORITY raises an individual INSERT statement back to normal write priority, canceling the effect of this option for that single statement.
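A minimal sketch of the priority modifiers discussed above, using hypothetical table and column names:

```sql
-- Let readers jump ahead of this update on a busy MyISAM table.
UPDATE LOW_PRIORITY log_table SET processed = 1 WHERE processed = 0;

-- Let this SELECT jump ahead of a pending write.
SELECT HIGH_PRIORITY * FROM log_table WHERE id = 42;

-- If the server was started with --low-priority-updates,
-- restore normal write priority for one INSERT:
INSERT HIGH_PRIORITY INTO log_table (msg) VALUES ('urgent');
```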
Using deferred inserts: the DELAYED modifier applies to INSERT and REPLACE statements. When a delayed insert arrives, the server puts the rows in a queue and immediately returns a status to the client, so the client can continue working before the rows are actually inserted into the table. If readers are reading from the table, the queued rows are held until there are no readers; the server then begins inserting rows from the delayed-row queue. While inserting, the server also checks whether any new read request has arrived and is waiting. If so, it suspends the delayed-row queue and lets the reader proceed. When there are no readers left, it resumes inserting the delayed rows. This process continues until the queue is empty.

LOW_PRIORITY and DELAYED are similar in that both allow row insertion to be deferred, but their effect on the client differs greatly. LOW_PRIORITY forces the client to wait until the rows can be inserted into the table. DELAYED lets the client continue, and the server buffers the rows in memory until it has time to process them.

INSERT DELAYED is useful when other clients may be running long SELECT statements and you do not want to block waiting for the insert to complete. A client issuing INSERT DELAYED gets a reply very quickly, because the server merely queues the rows to be inserted. You should, however, be aware of some other differences between normal INSERT and INSERT DELAYED behavior. If an INSERT DELAYED statement contains a syntax error, the client gets an error, but other information that would normally be available is not. For example, you cannot rely on getting the AUTO_INCREMENT value when the statement returns.
Similarly, you cannot get a count of duplicates for unique indexes. This happens because the insert returns its status information before the insert has actually been performed. Another possibility is that rows may be lost if the server crashes or is terminated with kill -9, because the rows of an INSERT DELAYED statement are queued in memory. (A normal kill -TERM does not cause this, because the server inserts the queued rows into the table before exiting.)

Using concurrent inserts: the MyISAM storage engine has an exception to the rule that readers block writers. It applies when a MyISAM table has no "holes" in the middle (which can result from deleting or updating rows). When the table has no holes, any INSERT statement must append rows at the end rather than in the middle, and in that case MySQL allows other clients to add rows to the table while it is being read. These are called concurrent inserts because they occur at the same time as retrievals, without the retrievals being blocked.

If you want to use concurrent inserts, note the following:

• Do not use the LOW_PRIORITY modifier on INSERT statements. It causes the insert to wait behind readers repeatedly, which defeats concurrent inserts.
• A reader that needs to lock the table explicitly while still permitting concurrent inserts should use LOCK TABLES ... READ LOCAL rather than LOCK TABLES ... READ. The LOCAL keyword acquires a lock that allows concurrent inserts to continue, because it applies only to rows already in the table and does not block new rows from being appended at the end.
• LOAD DATA operations should use the CONCURRENT modifier, which allows SELECT statements on the table to execute at the same time.

A MyISAM table that contains holes in the middle cannot use concurrent inserts.
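The points above can be sketched as follows (table name and file path are hypothetical):

```sql
-- Hold an explicit read lock that still permits concurrent inserts:
LOCK TABLES log_table READ LOCAL;
SELECT COUNT(*) FROM log_table;   -- other clients may still append rows
UNLOCK TABLES;

-- Allow SELECTs to run while loading data into a MyISAM table:
LOAD DATA CONCURRENT INFILE '/tmp/rows.txt' INTO TABLE log_table;
```

As noted above, these concurrent inserts work only on a MyISAM table with no holes in the middle.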
However, you can defragment the table with the OPTIMIZE TABLE statement.

Lock levels and concurrency: the scheduling modifiers discussed earlier let you change the default scheduling policy. For the most part, these modifiers address contention under table-level locking, which is how the MyISAM and MEMORY storage engines manage table contention. The BDB and InnoDB storage engines implement locking at different levels, so their performance characteristics and contention behavior differ. The BDB engine uses page-level locks. The InnoDB engine uses row-level locks, and only when necessary (in many cases, such as when only reads are performed, InnoDB may use no locks at all).

The locking level used by a storage engine has a significant impact on client concurrency. Suppose two clients each want to update a row in a table. Because updates are involved, each client needs a write lock. For a MyISAM table, the engine grants the lock to the first client, and the second client blocks until the first completes its operation. A BDB table allows greater concurrency: the two updates can proceed simultaneously, unless both rows lie within the same page. With an InnoDB table, concurrency is higher still: both updates can occur at the same time as long as the two clients are not updating the same row.

The general rule is that finer-grained locking gives better concurrency, because more clients can use the table at once as long as each touches a different part of it. The practical implication is that different storage engines suit different statement mixes:

• MyISAM retrieval is very fast. However, table-level locking can be a problem in a mixed retrieval-and-update environment, especially when retrievals tend to run for a long time.
Under those conditions, updates may wait a long time before they can proceed. BDB and InnoDB tables can provide better performance when there are many updates: because locking is at the page or row level, the locked portion of the table is smaller. This reduces lock contention and improves concurrency.

Table-level locking does have an advantage over finer-grained locking when it comes to deadlock prevention. Deadlocks cannot occur with table-level locks: the server can determine which tables a statement needs by inspecting it, and lock them all in advance. Deadlocks can occur with InnoDB and BDB tables because these storage engines do not acquire all necessary locks at the start of a transaction; instead, locks are acquired as the need for them is detected during the transaction. As a result, two statements may each acquire some locks and then attempt to acquire more, only to find the additional locks held by the other side, with each waiting for the other to release. Each client then holds a lock that the other needs in order to continue. The result is a deadlock, and the server must abort one of the transactions.

How to use these modifiers:

High-priority operation: HIGH_PRIORITY can be used with SELECT and INSERT to tell MySQL that the operation takes precedence.

    SELECT HIGH_PRIORITY * FROM table1;

Low-priority operation: LOW_PRIORITY can be used with INSERT and UPDATE to tell MySQL that the operation should be deferred.

    UPDATE LOW_PRIORITY table1 SET field1 = ... WHERE field1 = ...;

Delayed insert: INSERT DELAYED.

    INSERT DELAYED INTO table1 SET field1 = ...;

With INSERT DELAYED, the client submits the data and MySQL returns an OK status to the client immediately. The rows are not yet inserted into the table; they are held in a queue in memory. When MySQL is free, it performs the inserts.
Another important benefit is that insertions from many clients are batched together and written in a single block, which is much faster than performing many separate inserts. The downside is that you cannot get back an auto-increment ID, and if the system crashes before MySQL has had time to insert the queued rows, that data is lost.
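The caveats above can be illustrated as follows (table and column names are hypothetical):

```sql
-- The server queues the row and returns OK immediately.
INSERT DELAYED INTO log_table (msg) VALUES ('queued row');

-- Unreliable here: the row may not have been inserted yet,
-- so this value does not necessarily reflect the delayed insert.
SELECT LAST_INSERT_ID();
```

If your application needs the generated AUTO_INCREMENT value, use a normal INSERT instead of INSERT DELAYED.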

 
