Application of multi-version concurrency control (MVCC) in a real project

Source: Internet
Author: User
Tags: key-value store



A recent project ran into a concurrency control problem in a distributed system. The problem can be abstracted as follows: the system consists of a data center D and a number of business processing centers L1, L2, ..., Ln. D is essentially a key-value store that exposes an HTTP-based CRUD interface. The business logic of each L can be abstracted into the following 3 steps (a minimal code sketch follows the list):

    1. read: read the KeyValueSet {k1:v1, k2:v2, ...} needed for the business processing from D
    2. do: perform business processing based on the KeyValueSet and compute the dataset to be updated, KeyValueSet' {k1':v1', ..., km':vm'} (note: the set of keys read and the set of keys updated may differ)
    3. update: write KeyValueSet' back to D (note: D guarantees atomicity across multiple keys within a single call)
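To make the three steps concrete, here is a minimal sketch of one L's read/do/update round trip, assuming D exposes a plain HTTP interface. The endpoint path, payload shape and the "+1" business logic are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of one L's processing round trip against D's HTTP interface.
# The endpoint and payload format are assumptions for illustration only.
import requests

D_BASE_URL = "http://d.example.com"  # hypothetical address of data center D

def read(keys):
    """read: fetch the KeyValueSet the business logic needs from D."""
    resp = requests.get(f"{D_BASE_URL}/data", params={"keys": ",".join(keys)})
    resp.raise_for_status()
    return resp.json()  # e.g. {"key:123": 100}

def do(key_value_set):
    """do: business processing; returns KeyValueSet' (its keys may differ from the ones read)."""
    return {"key:123": key_value_set["key:123"] + 1}

def update(key_value_set_new):
    """update: write KeyValueSet' back to D in one call (D applies all keys atomically)."""
    resp = requests.put(f"{D_BASE_URL}/data", json=key_value_set_new)
    resp.raise_for_status()

def run_once(keys):
    kvs = read(keys)
    kvs_new = do(kvs)
    update(kvs_new)
```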

Without transaction support, concurrent processing by multiple L instances can lead to data consistency problems. For example, suppose key:123 initially holds the value 100, L1 adds 1 to it and L2 adds 2 to it, and the two run in the following order:

    1. L1 reads key:123 from D and gets 100
    2. L2 reads key:123 from D and gets 100
    3. L1 updates key:123 to 100 + 1 = 101
    4. L2 updates key:123 to 100 + 2 = 102

If L1 and L2 ran serially, key:123 would end up as 103. In the concurrent run, however, L1's update is completely overwritten by L2, and key:123 actually ends up as 102.
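The lost update is easy to replay with a toy in-memory dictionary standing in for D (a sketch only; the real D is a remote key-value store):

```python
# Toy in-memory stand-in for D, replaying the interleaving above.
d = {"key:123": 100}

v_l1 = d["key:123"]        # 1. L1 reads 100
v_l2 = d["key:123"]        # 2. L2 reads 100 (the same, soon-to-be-stale value)

d["key:123"] = v_l1 + 1    # 3. L1 writes 101
d["key:123"] = v_l2 + 2    # 4. L2 writes 102, silently overwriting L1's update

print(d["key:123"])        # 102, not the 103 a serial run would produce
```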

Workaround 1: Lock-based transactions

To make the processing of L serializable, the most straightforward solution is to add a simple lock-based transaction to D: L locks the entire D before doing its business processing and releases the lock when it is done. In addition, to prevent an L that holds the lock from leaving its transaction uncommitted for a long time (for whatever reason), D also needs a timeout mechanism; when L tries to commit a transaction that has timed out, it receives an error response.
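As a rough illustration, here is a single-threaded sketch of how D might track the lock and its timeout. The class, field names and the 30-second timeout are assumptions for illustration, not the project's actual implementation:

```python
# Sketch of the lock-plus-timeout idea on D's side (single-threaded, illustrative only).
import time

class LockedStore:
    LOCK_TIMEOUT = 30.0  # assumed: seconds an L may hold the lock before commits are rejected

    def __init__(self):
        self.data = {}
        self.owner = None        # which L currently holds the lock
        self.acquired_at = None  # when the lock was granted

    def lock(self, client_id):
        # Grant the lock if it is free or the previous holder has timed out.
        if self.owner is not None and time.time() - self.acquired_at < self.LOCK_TIMEOUT:
            raise RuntimeError("locked by another client")
        self.owner, self.acquired_at = client_id, time.time()

    def commit(self, client_id, key_value_set_new):
        # Reject commits from a client whose transaction has timed out.
        if self.owner != client_id or time.time() - self.acquired_at >= self.LOCK_TIMEOUT:
            raise RuntimeError("transaction timed out")
        self.data.update(key_value_set_new)  # apply KeyValueSet' atomically
        self.owner = None                    # release the lock
```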

The advantage of this scheme is that it is simple to implement. The disadvantage is that the whole data set is locked, so the granularity is too coarse, and the lock is held for the entire processing time of L, so the duration is too long. We could reduce the lock granularity to the data-item level and lock by key, but that introduces other problems: because the updated KeyValueSet' may not be known in advance, it may be impossible to lock all keys at the start of the transaction; if we instead acquire the needed keys in stages, deadlock becomes possible. Moreover, when there is contention on a key, locking by key still does not shorten the time the lock is held.

Locking by key therefore still has significant shortcomings.

Workaround 2: Multi-version concurrency control

To achieve serializability while avoiding the various problems of locking, we can adopt a lock-free transaction mechanism based on the idea of multi-version concurrency control (MVCC). Lock-based concurrency control is generally called a pessimistic mechanism, while MVCC is called an optimistic mechanism. This is because locking is preventive: reads block writes, and when the lock granularity is large or held for a long time, writes also block reads, so concurrency suffers. MVCC, by contrast, checks after the fact: reads do not block writes, writes do not block reads, and conflicts are only checked at commit time. Because there are no locks, reads and writes never block each other, which greatly improves concurrency. Source version control is a good way to understand MVCC: everyone can freely read and modify the code locally without blocking anyone else; only at commit time does the version control system check for conflicts and prompt for a merge.

Currently, Oracle, PostgreSQL, and MySQL all support MVCC-based concurrency control, although the concrete implementations differ.

A simple implementation of MVCC is conditional update, based on the CAS (compare-and-swap) idea. A normal update request contains only a KeyValueSet'; a conditional update additionally carries a set of update conditions, ConditionSet {..., data[keyx] = valuex, ...}, meaning that D applies KeyValueSet' only if all conditions are satisfied, and otherwise returns an error. L then falls into a try / conditional update / (try again) processing pattern.
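A minimal sketch of that pattern, where read, do and conditional_update stand for assumed interfaces to D rather than a real client library:

```python
# Sketch of the try / conditional update / (try again) loop.
# read, do and conditional_update are assumed callables wrapping D's interface.
def process(read, do, conditional_update, keys, max_retries=10):
    for _ in range(max_retries):
        kvs = read(keys)                 # try: read the current values
        kvs_new = do(kvs)                # compute KeyValueSet'
        condition = dict(kvs)            # require that what we read is still current
        if conditional_update(condition, kvs_new):
            return kvs_new               # D accepted the update
        # Another L changed the data first; try again from a fresh read.
    raise RuntimeError("too many conflicts, giving up")
```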

Although a single L is not guaranteed to update successfully on every attempt, from the system's point of view there is always some task making progress. This scheme uses conditional update to avoid coarse-grained, long-held locks, so concurrency is very good when contention between the businesses is low. However, a conditional update carries extra parameters: if the values in the condition are long, each request transfers a large amount of data over the network and performance degrades. It is especially uneconomical when the KeyValueSet' to be updated is very small but the condition is very large.

To avoid the performance problem caused by an overly large condition, we can add an int version field to each data item, have D maintain the version (incrementing it on every update), and use the version number in the conditional update instead of the concrete value.
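A minimal sketch of version-based conditional update; the {key: (value, version)} layout and the function name are assumptions for illustration:

```python
# Sketch of using a per-item version instead of the full value in the condition.
store = {"key:123": (100, 7)}   # (value, version), version maintained by D

def conditional_update_by_version(key, expected_version, new_value):
    value, version = store[key]
    if version != expected_version:
        return False                        # someone updated the item since we read it
    store[key] = (new_value, version + 1)   # D bumps the version on every update
    return True

# L reads value 100 at version 7, computes 101, and sends only the small
# version number as the condition instead of a potentially large value:
ok = conditional_update_by_version("key:123", expected_version=7, new_value=101)
```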

Another issue is that the approach above assumes D supports conditional update. What if D is a third-party key-value store that does not? In that case we can insert a proxy P between L and D, route all CRUD operations through P, let P perform the condition check, and leave the actual data operations to D. This separates condition checking from data manipulation, at some cost in performance; a cache can be added to P to compensate. Because P is D's only client, cache management in P is very simple and does not have to worry about invalidation the way multi-client scenarios do. That said, as far as I know both Redis and Amazon SimpleDB already support conditional update.
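A rough sketch of the proxy idea: BareStore stands in for a third-party store without conditional update, and the lock inside P is what makes its check-then-write atomic, which is safe precisely because P is D's only client. All names here are illustrative assumptions:

```python
# Sketch of a proxy P that adds conditional update on top of a plain store D.
import threading

class BareStore:                     # the "dumb" D: plain get/put only, no CAS
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put_many(self, kvs):
        self._data.update(kvs)

class Proxy:
    def __init__(self, store):
        self.store = store
        self._lock = threading.Lock()   # serializes check + write inside P
        self._cache = {}                # safe to cache: no other client mutates D

    def conditional_update(self, condition, key_value_set_new):
        with self._lock:
            for key, expected in condition.items():
                current = self._cache[key] if key in self._cache else self.store.get(key)
                if current != expected:
                    return False            # condition failed, caller must retry
            self.store.put_many(key_value_set_new)
            self._cache.update(key_value_set_new)
            return True
```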

Pessimistic locking vs. MVCC

The basic principles of pessimistic locking and MVCC have been described above, but many discussions do not make clear which situations each mechanism suits or how the two behave under different circumstances.

Here I give a simple analysis of some typical application scenarios. Note that the analysis below is not limited to distributed systems: pessimistic locking and MVCC exist at every level, from distributed systems to single-node databases and even in-memory variables.

### Scenario 1: Reads require fast responses

Some systems are updated particularly frequently and also require very fast read responses, such as stock trading systems.

Under a pessimistic locking mechanism, writes block reads, so read latency suffers whenever a write operation is in progress. MVCC has no read-write locks, reads are never blocked, and read latency is therefore faster and more stable.

### Scenario 2: Reads far outnumber writes

In many systems the proportion of read operations is far larger than that of writes, especially in systems serving massive concurrent reads. Under pessimistic locking, when a write operation holds the lock, large numbers of reads are blocked and concurrency suffers; MVCC, by contrast, maintains comparatively high and stable read concurrency.

### Scenario 3: Writes conflict frequently

If the proportion of write operations is high and conflicts are frequent, careful evaluation is needed. Suppose two conflicting businesses L1 and L2 take t1 and t2 respectively when run alone. Under pessimistic locking, their total time is roughly the serial time:

T = t1 + t2

Under MVCC, suppose L1 commits its update before L2, so L2 has to retry once. Their total time is then roughly two runs of L2 (assuming the two runs take about the same time; it may be less if part of the first run's results can be cached and the second run is faster):

T' = 2 * t2

The key is to evaluate the cost of a retry. If a retry is cheap, for example incrementing a counter, or if the second run can be much faster than the first, then MVCC is the better fit. Conversely, if a retry is very expensive, for example a report-statistics job that takes hours or even a day to compute, the locking mechanism should be used to avoid retries.

From the above analysis we can draw a simple conclusion: scenarios that demand fast, highly concurrent reads are better suited to MVCC, while scenarios where retries are expensive are better suited to pessimistic locking.

Summary

Based on the idea of multi-version concurrency control (MVCC), this article has introduced a conditional-update approach to the concurrency control problem of a distributed system. Compared with pessimistic locking, it avoids coarse-grained, long-held locks and better satisfies the demand for fast, highly concurrent reads.

References
  • Wikipedia – Serializability
  • Wikipedia – Compare-and-swap
  • Wikipedia – Multiversion concurrency control
  • Lock-free algorithms: the try/commit/(try again) pattern
  • Amazon SimpleDB FAQs – Does Amazon SimpleDB support transactions?
  • Redis – Transactions
  • A Quick Survey of Multiversion Concurrency Algorithms
  • The application of non-blocking algorithms in relational database development
