Analyzing the consistency of MySQL transactions and data



This article analyzes the consistency of MySQL transactions and data from the angles of safety, usage, and concurrent processing. The details follow:

In our work we often hit this problem: we query the available inventory and prepare to update it, but in the meantime another user may have modified the inventory data, so the value we queried is already stale. Let's look at the solutions below.

In MySQL's InnoDB, the default transaction isolation level is REPEATABLE READ.

If you want to UPDATE the same table after a SELECT, you had better use SELECT ... FOR UPDATE.

For example:

Assume the products table has a quantity column storing the stock of each commodity. Before an order is created, you must first check that the stock is sufficient (quantity > 0) and only then deduct from quantity. The code is as follows:

SELECT quantity FROM products WHERE id=3;
UPDATE products SET quantity = 1 WHERE id=3;

Why is it insecure?

With little traffic there may be no problem, but under heavy access the data can go wrong. Suppose we only deduct stock when quantity is greater than 0, and the SELECT in the first statement reads a quantity of 2. That looks correct, but by the time MySQL is about to run the UPDATE, someone else may already have deducted the stock to 0. The program does not know this and blindly writes the wrong value. Therefore a transaction mechanism must be used to ensure that the value read and the value committed are consistent.
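The race can be reproduced without a database at all. In the minimal sketch below, a plain Python dict stands in for the products row, and two requests are deliberately interleaved so that both read the stock before either writes it back:

```python
# A dict stands in for the row `products WHERE id=3`; one item left in stock.
stock = {"quantity": 1}
orders_placed = 0

# Request A and request B both "SELECT" before either one "UPDATEs".
seen_by_a = stock["quantity"]          # A reads 1: "stock available"
seen_by_b = stock["quantity"]          # B also reads 1: "stock available"

if seen_by_a > 0:                      # A passes the check...
    stock["quantity"] = seen_by_a - 1  # ...and writes 0
    orders_placed += 1
if seen_by_b > 0:                      # B passed the check on stale data...
    stock["quantity"] = seen_by_b - 1  # ...and writes 0 again
    orders_placed += 1

print(orders_placed, stock["quantity"])  # → 2 0: two orders accepted for one item
```

Two orders succeed even though only one item existed; with a real database and real threads the interleaving is nondeterministic, which is exactly why it slips through testing.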

So we can test the code in MySQL as follows:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT quantity FROM products WHERE id=3 FOR UPDATE;

At this point the row with id = 3 in products is locked (note 3); other transactions that want to lock the same row must wait until this transaction is committed before they can execute:

SELECT * FROM products WHERE id=3 FOR UPDATE;

This ensures that the quantity read by other transactions is correct.

UPDATE products SET quantity = 1 WHERE id=3;
COMMIT WORK;

The COMMIT writes the change to the database and releases the lock on products.

Note 1: BEGIN/COMMIT mark the start and end of the transaction. You can open two or more MySQL command windows to observe the locking behavior.

Note 2: Only SELECT ... FOR UPDATE or LOCK IN SHARE MODE on the same rows waits for the other transaction to finish. An ordinary SELECT is not affected.

Note 3: Because InnoDB defaults to row-level locking, only the selected rows are locked, not the whole table.

Note 4: Do not use the LOCK TABLES command on InnoDB tables. If you must, read the official InnoDB documentation first, or you may trigger frequent deadlocks.

More advanced usage

If we need to query the data first and then update it, we had better use a statement like this:

UPDATE products SET quantity = quantity - 1 WHERE id=3 AND quantity > 0;

This way the check and the deduction happen in one atomic statement, and you do not even need an explicit transaction.
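A minimal runnable sketch of this pattern, using Python's sqlite3 standing in for MySQL (the table and ids follow the article's example): the WHERE clause does the stock check, and the affected row count tells the caller whether the deduction actually happened.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO products VALUES (3, 2)")  # 2 items in stock
conn.commit()

def deduct(conn, product_id):
    # Atomic check-and-decrement: the WHERE clause guarantees quantity never
    # goes below 0, no matter how the callers interleave.
    cur = conn.execute(
        "UPDATE products SET quantity = quantity - 1 "
        "WHERE id = ? AND quantity > 0",
        (product_id,))
    conn.commit()
    return cur.rowcount == 1  # True only if a row was actually updated

results = [deduct(conn, 3) for _ in range(4)]  # 4 requests, 2 items
print(results)  # → [True, True, False, False]
qty = conn.execute("SELECT quantity FROM products WHERE id = 3").fetchone()[0]
print(qty)      # → 0, never negative
```

Checking the affected row count (in PHP, for example, `mysqli_affected_rows`) replaces the separate SELECT entirely.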

How MySQL handles high concurrency and prevents overselling inventory

I have read a very good article.

Today Mr. Wang gave us another lesson on how MySQL handles high concurrency and prevents inventory from being oversold. He mentioned this last year, and although everyone understood it at the time, we still had no feel for it in real development. Today I finally have some understanding of the problem, and I hope there will be more courses like this.

First, let us describe the overselling problem. E-commerce websites regularly run activities such as group buying, flash sales, and special offers. The common feature of these activities is a surge in traffic, with tens of thousands of people rushing to buy, while the promotional product's inventory is very limited. Controlling the inventory so that items are not oversold, and unnecessary losses avoided, is a headache for many e-commerce programmers, and it is also the most basic problem.

From a technical point of view, many people will immediately think of transactions. Transactions are necessary to control overselling, but they are not sufficient.

Example:

Total inventory: 4 items

Requests: user A buys 1 item, user B buys 2 items, user C buys 3 items.

The procedure is as follows:

beginTranse(); // start the transaction
try {
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount > 0) {
        // quantity is the number of items this request deducts
        $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    }
} catch (Exception $e) {
    rollBack(); // roll back the transaction
}
commit(); // commit the transaction

The code above is what we usually write to control inventory, and most people write it this way. It looks fine, but it hides a huge hole. Database access is ultimately access to files on disk: tables are files stored on disk, and one file may even hold several tables. Under high concurrency, suppose the three users A, B, and C all enter the transaction. Their SELECTs each see 4 items in stock. Note that InnoDB reads are version-controlled (MVCC): until another user's UPDATE is committed (that is, before a new version is generated), the current user still sees the pre-update value.

Then comes the UPDATE. If all three users reach the UPDATE at the same time, the UPDATE statements are serialized: the three concurrent users are ordered, the statements execute one by one, and each takes an exclusive lock. Until the current UPDATE commits, the other users wait; after it commits, a new version is generated. After all three run, the inventory is bound to be negative (4 - 1 - 2 - 3 = -2). Based on this analysis, the code can be modified so it no longer oversells. The code is as follows:

beginTranse(); // start the transaction
try {
    // quantity is the number of items this request deducts
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount < 0) {
        throw new Exception('insufficient inventory');
    }
} catch (Exception $e) {
    rollBack(); // roll back the transaction
}
commit(); // commit the transaction

An even simpler method is the following:

beginTranse(); // start the transaction
try {
    // quantity is the number of items this request deducts
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE amount >= quantity AND postID = 12345');
} catch (Exception $e) {
    rollBack(); // roll back the transaction
}
commit(); // commit the transaction
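The update-then-check variant above can be sketched runnably. This is a Python/sqlite3 translation of the article's PHP pseudocode, not the original implementation: the deduction runs first, and if the stock went negative the transaction is rolled back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage BEGIN/COMMIT/ROLLBACK explicitly
conn.execute("CREATE TABLE s_store (postID INTEGER PRIMARY KEY, amount INTEGER)")
conn.execute("INSERT INTO s_store VALUES (12345, 4)")  # 4 items in stock

def order(conn, post_id, quantity):
    conn.execute("BEGIN")
    try:
        # Deduct first...
        conn.execute("UPDATE s_store SET amount = amount - ? WHERE postID = ?",
                     (quantity, post_id))
        # ...then check: a negative amount means we oversold, so undo it.
        amount, = conn.execute("SELECT amount FROM s_store WHERE postID = ?",
                               (post_id,)).fetchone()
        if amount < 0:
            raise ValueError("insufficient inventory")
        conn.execute("COMMIT")
        return True
    except ValueError:
        conn.execute("ROLLBACK")
        return False

# Users A, B, C from the example: 1, 2, and 3 items against a stock of 4.
outcomes = [order(conn, 12345, q) for q in (1, 2, 3)]
print(outcomes)  # → [True, True, False]: C's order is rolled back
final, = conn.execute("SELECT amount FROM s_store WHERE postID = 12345").fetchone()
print(final)     # → 1, never negative
```

Because the UPDATE itself serializes concurrent writers, the check after it always sees the true post-deduction value, which is what makes this ordering safe.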

1. In a flash-sale ("seckill") scenario, reading and writing the database at such a high frequency is certainly not acceptable; it causes serious performance problems.

You must use a cache: put the flash-sale product into the cache and handle concurrency on it with a lock. When a user's flash-sale order is accepted, decrement the item count (lock/unlock) before any other processing; if that processing later fails, add 1 back (lock/unlock); otherwise the transaction succeeds.
When the item count reaches 0, the product is sold out and further user requests are rejected.

2. Do not operate on the database directly: routing every read and write to the database puts too much pressure on it, so a cache is required.

Put the 10 items you want to sell into the cache, and set up a counter in memcache to record the number of requests. Use the counter to throttle the sale: for example, if you want to sell 10 items, only admit the first 100 requests. Once the counter reaches 100, later visitors are simply shown a "sold out" page, which greatly reduces the pressure on your servers. Then, among those 100 admitted requests, whoever pays first gets the goods, and once stock runs out the rest are told the sale has ended.
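The counter gate described above can be sketched in a few lines. This is an in-process stand-in, not a memcached client: the `SaleGate` class is a made-up name, and a thread lock plays the role of memcached's atomic `incr`.

```python
import threading

class SaleGate:
    """Admit at most `limit` requests; later ones are turned away immediately."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.lock = threading.Lock()

    def try_enter(self):
        # The lock makes check-and-increment atomic, the way memcached's
        # atomic incr would across web servers.
        with self.lock:
            if self.count >= self.limit:
                return False
            self.count += 1
            return True

gate = SaleGate(limit=100)  # 10 items for sale, admit only 100 requests
admitted = sum(gate.try_enter() for _ in range(150))
print(admitted)  # → 100: requests 101-150 never touch the database
```

The point of the over-admission ratio (100 requests for 10 items) is that some admitted users will abandon payment; only the eventual payments hit the database.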

3. First, when multiple users modify the same record concurrently, the user who commits later overwrites the result committed by the earlier one.

This can be solved directly with a locking mechanism: an optimistic lock or a pessimistic lock.

Optimistic lock: design a version-number field in the table and increment it by 1 on every modification, so that before committing you can check the version number to see whether a concurrent commit has happened. The disadvantage is that this is only enforced inside the application: if another application modifies the same data without honoring the version column, there is nothing you can do. In that case, consider a pessimistic lock.
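A minimal sketch of the version-number scheme, again using sqlite3 in place of MySQL and the article's s_store table. The UPDATE only matches when the version is unchanged; a zero row count means someone committed first and the caller should re-read and retry.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_store (postID INTEGER PRIMARY KEY, "
             "amount INTEGER, version INTEGER)")
conn.execute("INSERT INTO s_store VALUES (12345, 4, 0)")
conn.commit()

def deduct_optimistic(conn, post_id, quantity):
    # Read the current amount together with its version number.
    amount, version = conn.execute(
        "SELECT amount, version FROM s_store WHERE postID = ?",
        (post_id,)).fetchone()
    if amount < quantity:
        return False
    # The UPDATE succeeds only if nobody bumped the version in the meantime.
    cur = conn.execute(
        "UPDATE s_store SET amount = ?, version = version + 1 "
        "WHERE postID = ? AND version = ?",
        (amount - quantity, post_id, version))
    conn.commit()
    return cur.rowcount == 1  # False: concurrent commit detected, caller retries

ok_first = deduct_optimistic(conn, 12345, 1)  # no contention: succeeds

# Simulate a competing writer committing between our SELECT and UPDATE:
amount, version = conn.execute(
    "SELECT amount, version FROM s_store WHERE postID = 12345").fetchone()
conn.execute("UPDATE s_store SET version = version + 1 WHERE postID = 12345")
conn.commit()
cur = conn.execute(
    "UPDATE s_store SET amount = ?, version = version + 1 "
    "WHERE postID = ? AND version = ?", (amount - 1, 12345, version))
conn.commit()
ok_second = cur.rowcount == 1  # stale version: the write is rejected
print(ok_first, ok_second)     # → True False
```

The rejected writer loses no data; it simply re-reads the row and tries again, which is cheap as long as conflicts are rare.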

Pessimistic lock: lock the data directly at the database level, similar to Oracle's SELECT ... FROM ... WHERE ... FOR UPDATE, so that other sessions cannot commit conflicting changes.

Besides real locks, you can also use a status-flag approach: design a status column in the table and mark the row as "being edited" before modifying it. When another user then tries to edit the record, the system sees that someone else is already editing it and rejects the edit request. This is similar to an operating system refusing to let you modify or delete a file that is currently in use.
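The status-flag idea reduces to a tiny claim/release protocol. The sketch below is purely illustrative: a dict stands in for a hypothetical table row with a status column, and the function names are made up.

```python
# A dict stands in for a table row with a hypothetical `status` flag column.
record = {"id": 7, "data": "draft", "status": "free"}

def begin_edit(rec):
    # Refuse the edit if someone else already holds the flag.
    if rec["status"] == "editing":
        return False
    rec["status"] = "editing"
    return True

def end_edit(rec, new_data):
    rec["data"] = new_data
    rec["status"] = "free"  # release the flag

first = begin_edit(record)    # user 1 claims the record
second = begin_edit(record)   # user 2 is rejected while user 1 edits
end_edit(record, "final")
third = begin_edit(record)    # after release, the record is editable again
print(first, second, third)   # → True False True
```

In a real database the claim itself must be atomic (for example, a conditional UPDATE on the status column), and you need a timeout or cleanup path for flags left behind by crashed editors.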

4. Locking inside the database is not always recommended; you can instead use a server-side memory lock (a lock on the primary key). When a user wants to modify the row with a given id, store that id in memcache. If another user tries to modify the same id and finds it already present in memcache, that user's modification is blocked.

5. In real applications, MySQL cannot face massive concurrent reads and writes on its own. "External forces" are used instead: caching, master/slave replication for read/write splitting, table sharding, and write queues, all of which reduce the concurrent load.

Pessimistic lock and optimistic lock

First, when multiple users modify the same record concurrently, the user who commits later overwrites the result committed by the earlier one. This can be solved directly with a locking mechanism, using either an optimistic lock or a pessimistic lock.

A pessimistic lock, as the name implies, is pessimistic: every time you read the data you assume someone else will modify it, so you lock it on every read, and anyone else who wants the data blocks until the lock is released. Traditional relational databases use many such locks: row locks, table locks, read locks, and write locks.

An optimistic lock, as the name implies, is optimistic: every time you read the data you assume nobody else will modify it, so you do not lock it. Instead, at update time you check whether anyone else updated the data in the meantime, typically with a version number or a similar mechanism. Optimistic locks suit read-heavy applications and can improve throughput; a database that offers something like a write_condition mechanism is in fact providing an optimistic lock.

Each kind of lock has its strengths and weaknesses; neither is simply better than the other. Optimistic locks suit workloads with few writes and frequent reads: the occasional conflict is cheaper than the lock overhead saved, so overall throughput improves. But if conflicts are frequent (write-heavy data), the upper layers end up retrying constantly, which actually lowers performance; a pessimistic lock is more appropriate there.

Practice

To watch the lock while modifying the table's amount column, open two command-line windows.

The first window A:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;

The second window B:

# Update the inventory quantity of order 124
UPDATE `order_tbl` SET amount = 1 WHERE order_id = 124;

We can see that window A has opened a transaction and locked this row: window B's UPDATE blocks, and if window A never commits it eventually fails with a lock wait timeout.

Complete commit in the first window:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;
UPDATE `order_tbl` SET amount = 10 WHERE order_id = 124;
COMMIT WORK;

That is how MySQL handles high concurrency and prevents overselling inventory, and it is all the content of this article. If anything is still unclear, leave a comment below to discuss it.
