Distributed transaction theory (ACID, CAP, BASE)


1. Characteristics of distributed systems

A distributed system is one whose hardware or software components are spread across networked computers and coordinate with each other only by passing messages. It has the following characteristics:

  Distribution

The machines in a distributed system are arbitrarily distributed in space, and their distribution can change at any time.

  Equivalence

There is no master/slave relationship among the computers in a distributed system: no host controls the whole system and no slave is controlled by it; all the nodes that make up the system are equal. Replicas refer to the redundancy a distributed system keeps for its data and services. To provide highly available service to the outside, we tend to replicate both. A data replica persists the same data on different nodes, so that when the data stored on one node is lost it can still be read from a replica; this is the most effective way to deal with data loss in a distributed system. A service replica means that multiple nodes provide the same service, each capable of accepting outside requests and handling them accordingly.

  Concurrency

Multiple nodes in the same distributed system may concurrently operate on shared resources, such as a database or distributed storage. Coordinating distributed concurrent operations efficiently is one of the biggest challenges in distributed system architecture and design.

  Lack of a global clock

A typical distributed system consists of processes spread out in space, with obvious distribution, that communicate with each other by exchanging messages. Because the system lacks a global clock to order events, it is difficult to say which of two events happened first.

  Failures always occur

Every computer that makes up a distributed system can fail in any form, and any anomaly considered during the design phase will certainly occur at some point in the actual operation of the system.

2. Problems in a distributed environment

  2.1 Communication Anomalies

Moving from a centralized system to a distributed one inevitably introduces the network, and the unreliability of the network introduces extra problems. Even when network communication between the nodes of a distributed system succeeds, its latency is far greater than that of operations on a single machine, and message loss and message delay are very common during sending and receiving.

  2.2 Network Partitions

When the network misbehaves and the delay between some nodes of the distributed system keeps growing, eventually only some of the nodes can communicate with each other normally while others cannot. This phenomenon is called a network partition. When a partition appears, the distributed system splits into small local clusters; in extreme cases, these small clusters independently carry out functions, including data and transaction processing, that used to belong to the whole distributed system, which poses a very large challenge to distributed consistency.

  2.3 Tri-State

Because the network can fail in many ways, every request and response in a distributed system has a tri-state outcome peculiar to distributed computing: success, failure, or timeout. A timeout typically arises in one of two scenarios: 1. Due to network problems, the request was never successfully delivered to the receiver; the message was lost on the way. 2. The receiver got the request and processed it, but the response was lost on the way back to the sender.
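The tri-state outcome can be sketched with a toy sender. All names and the `simulate` switch are illustrative, not from the article; the key point is that the two loss scenarios above are indistinguishable to the sender:

```python
# Hypothetical illustration: every remote call in a distributed system
# has three possible outcomes, not two.
SUCCESS, FAILURE, TIMEOUT = "success", "failure", "timeout"

def send_request(simulate):
    """Simulate a remote call; 'simulate' picks the network behavior."""
    if simulate == "lost_request":
        return TIMEOUT   # request never reached the receiver
    if simulate == "lost_response":
        # The receiver processed the request, but the reply was lost:
        # the sender still observes a timeout and CANNOT distinguish
        # this case from the lost-request case above.
        return TIMEOUT
    if simulate == "rejected":
        return FAILURE   # receiver responded with an explicit error
    return SUCCESS

for scenario in ("ok", "rejected", "lost_request", "lost_response"):
    print(scenario, "->", send_request(scenario))
```

Note that both loss scenarios return the same `TIMEOUT`: after a timeout, the sender cannot know whether the operation was applied, which is why retries in distributed systems need to be idempotent or deduplicated.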

  2.4 Node Failure

Node failure refers to the outage or hang (zombie state) of a server node in the distributed system. Every node may fail, and failures happen frequently.

3. Theory of distributed transactions


  3.1 ACID

ACID refers to the four properties of a transaction in a database management system (DBMS): atomicity, consistency, isolation (also called independence), and durability.

In a database system, a transaction is a complete logical unit of work consisting of a series of database operations. A bank transfer, for example, deducts an amount from the source account and adds the same amount to the target account; together these two database operations form one complete logical process that cannot be split. Such a process is called a transaction and has the ACID properties.

Atomicity: all operations in a transaction either complete entirely or not at all; a transaction never ends partway through. If an error occurs during execution, the transaction is rolled back to the state before it began, as if it had never been executed.

Consistency: the integrity constraints of the database are not violated before the transaction begins or after it ends.

Isolation: governs what happens when two or more transactions concurrently access, query, and modify the same data in a database. Transaction isolation is divided into levels: Read Uncommitted, Read Committed, Repeatable Read, and Serializable.

Durability: once a transaction completes, the changes it made to the database are persisted in the database permanently.
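The bank-transfer example above can be made concrete with SQLite. This is a minimal sketch, not the article's own code; the table and account names are illustrative. It shows atomicity: a transfer either commits both updates or rolls back both.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Simulate a mid-transaction failure: insufficient funds.
        (balance,) = conn.execute(
            "SELECT balance FROM account WHERE name = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()          # both updates become durable together
    except Exception:
        conn.rollback()        # neither update is applied

transfer(conn, "alice", "bob", 30)    # succeeds: balances 70 / 30
transfer(conn, "alice", "bob", 500)   # fails: rolled back, still 70 / 30
print(dict(conn.execute("SELECT name, balance FROM account")))
# → {'alice': 70, 'bob': 30}
```

The failed transfer leaves no trace: the deduction it performed before raising is undone by the rollback, which is exactly the "as if it had never been executed" behavior described above.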

A distributed transaction is one whose participants, transaction-supporting servers, resource servers, and transaction manager sit on different nodes of a distributed system; it typically involves operations on multiple data sources or business systems. A distributed transaction can be seen as composed of multiple distributed sequences of operations, each of which is usually called a sub-transaction. Because each sub-transaction of a distributed transaction executes on a different node, building a distributed transaction processing system that guarantees the ACID properties is extremely complicated.
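One classic way a transaction manager preserves atomicity across sub-transactions is two-phase commit (2PC), which the replication discussion later in this article also mentions. The following is a toy sketch under simplifying assumptions: participant behavior is simulated in-process, and real systems must additionally handle timeouts and coordinator failure.

```python
class Participant:
    """A simulated sub-transaction participant on one node."""
    def __init__(self, name, will_succeed=True):
        self.name, self.will_succeed = name, will_succeed
        self.state = "init"

    def prepare(self):                 # phase 1: vote yes/no
        self.state = "prepared" if self.will_succeed else "aborted"
        return self.will_succeed

    def commit(self):                  # phase 2, on unanimous yes
        self.state = "committed"

    def rollback(self):                # phase 2, on any no vote
        self.state = "aborted"

def two_phase_commit(participants):
    """Coordinator: commit everywhere only if every participant votes yes."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"

print(two_phase_commit([Participant("db1"), Participant("db2")]))
# → committed
print(two_phase_commit([Participant("db1"), Participant("db2", False)]))
# → aborted
```

A single no vote in phase 1 aborts the global transaction on every node, which is how global atomicity is preserved even though each sub-transaction runs locally.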


  3.2 CAP

The CAP principle states that of consistency, availability, and partition tolerance, a system can achieve at most two at the same time; it is impossible to have all three. It was proposed by Professor Brewer in 2000, and the theorem was later formally proven correct (by Gilbert and Lynch).

  • Consistency:

In a distributed storage system there are often multiple replicas of a piece of data. Put simply, consistency means that a client's modification of data (add/delete/update) either succeeds on all replicas or fails on all of them. In other words, a modification is an atomic operation over all replicas of a piece of data, i.e., over the entire system.

If a storage system guarantees consistency, the data clients read and write is guaranteed to be up to date. Two different clients will never read different values from replicas on different storage nodes.

  • Availability:

Availability is simple: as the name implies, when a client wants to access the data, it gets a response. Note, however, that a system being available does not imply that the data served by all nodes of the storage system is consistent; in such a case we still say the system is available. In practice we usually set a maximum response time per application, and a service that exceeds this response time is considered unavailable.

  • Partition tolerance:

If your storage system runs on only one node, either the whole system crashes or it all works fine. Once the storage for a service is spread across multiple nodes, the system can partition: for example, when the network link between two storage nodes goes down, for however long, a partition forms. In practice, to improve quality of service, it is entirely normal to place the same data in different cities, so it is also normal for the nodes to form partitions.

Gilbert and Lynch define partition tolerance as follows: "No set of failures less than total network failure is allowed to cause the system to respond incorrectly." That is, short of the entire network failing, the failure of any subset of nodes must not cause the system to respond incorrectly. Another article (Base: An Acid Alternative) explains partition tolerance as: "Operations will complete, even if individual components are unavailable" — even if some components are down, the requested operation can still be completed.

For a large-scale distributed data system, the three elements of CAP cannot all be had; a single system can achieve only two of them and must relax the third in order to guarantee the other two. In a typical network environment, partitions of the running environment are unavoidable, so the system must have partition tolerance (P). Hence, when designing large-scale distributed systems in this setting, the trade-off is usually between AP and CP.

Why the three elements of CAP cannot all be satisfied in a distributed environment:

Since, as noted above, P is necessary in a distributed environment, the question becomes: given P, can C and A both be satisfied? Consider two cases:

(1) If the data in the distributed system has no replicas, then the system trivially satisfies strong consistency: with only a unique copy of the data there can be no inconsistency, so C and P hold. However, if some servers go down, some data inevitably becomes inaccessible, and A is lost.

(2) If the data in the distributed system is replicated, then even when some servers go down the system can still provide service, so A holds. But it is hard to keep the data consistent, because at the moment of the outage some data may not yet have been copied to the replicas, so the data served from a replica may be out of date.

Therefore, in general one emphasizes C or A according to the specific business: a business with higher consistency requirements must accept higher access latency, and a business with strict access-latency requirements must accept weaker consistency. Consistency models can be divided into the following categories: strong consistency, weak consistency, eventual consistency, causal consistency, read-write consistency, session consistency, monotonic read consistency, and monotonic write consistency. Choose the consistency model that fits the business.


  3.3 BASE

The theoretical underpinning of eventual consistency is the BASE model. BASE stands for Basically Available, Soft State (flexible transactions), and Eventually Consistent. The BASE model is, logically, the opposite of the ACID model (atomicity, consistency, isolation, durability): it sacrifices strong consistency to gain availability and partition tolerance.

BASE is shorthand for the three phrases Basically Available, Soft State, and Eventually Consistent. ① Basically available means that the distributed system is allowed to lose part of its availability in the event of an unforeseen failure, for example through degraded response time or loss of some functionality.

② Soft state, also called weak state, means the data in the system is allowed to exist in an intermediate state, and the existence of that intermediate state does not affect the overall availability of the system; that is, the system tolerates delay while synchronizing data between different nodes.

③ Eventual consistency means that, after synchronizing for some period of time, all replicas of the data in the system can finally reach a consistent state. The essence of eventual consistency is that the system must guarantee its data will eventually become consistent, without needing to guarantee strong consistency in real time.
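The three BASE properties can be seen in one tiny sketch: a write lands on one node, the other nodes temporarily serve stale data (the soft state), and a synchronization round brings every replica to the same value. Using the maximum as the "newest" value is a stand-in assumption for a real version number or timestamp.

```python
# Three replicas of a single key "x", initially consistent.
nodes = [{"x": 1}, {"x": 1}, {"x": 1}]

nodes[0]["x"] = 2                      # update accepted by one node only
stale = [n["x"] for n in nodes]        # [2, 1, 1]: intermediate state is visible

def anti_entropy(nodes):
    """One synchronization round: every replica adopts the newest value."""
    newest = max(n["x"] for n in nodes)   # stand-in for a version/timestamp
    for n in nodes:
        n["x"] = newest

anti_entropy(nodes)
print(stale, "->", [n["x"] for n in nodes])   # → [2, 1, 1] -> [2, 2, 2]
```

Between the write and the synchronization round, readers of the second and third nodes see the old value; after it, all replicas agree. That window of visible staleness is exactly what eventual consistency permits and strong consistency forbids.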

4. Data replication

Data replication is a topic in distributed computing; it is not confined to databases, but here it mainly refers to replication in distributed databases.

In a distributed database system composed of multiple replicas, the differences from a single database system with respect to transaction properties show up mainly in two aspects: atomicity and consistency. In terms of atomicity, all operations of the same distributed transaction must either commit or roll back on all relevant replicas; that is, besides guaranteeing the atomicity of each original local transaction, the atomicity of the global transaction must be controlled. In terms of consistency, one-copy consistency is required across the replicas.

After nearly 20 years of research, a variety of replication protocols have been proposed for the two core problems of replication: the atomicity and the consistency of distributed transactions. These protocols differ significantly both in external function and in internal implementation, and they can be classified along those two broad dimensions.

From the perspective of external function, following the literature, protocols can be classified by where and when transactions execute. By where transactions execute, they fall into two categories: master-slave (primary/copy) mode and update-anywhere mode.

In the former, only one designated primary node in the system accepts update requests; after a transaction's operations complete, they are broadcast to the other replica nodes either before or after the transaction commits.

The latter is somewhat more complex to process: every replica in the system has equal status and can receive update requests, and each node propagates its updates to the other replicas before transaction conflict detection, at transaction commit, or afterward.

Concurrency control in primary/copy mode is relatively simple: it can be implemented by local transaction control on the primary, and transaction atomicity is likewise relatively simple to achieve, generally with the primary node acting as the coordinator. The flaw is equally obvious: only a single node processes update requests, so for update-intensive applications such as OLTP it easily becomes a single-point performance bottleneck. The update-anywhere approach is the mirror image: it can raise transaction throughput through multiple entry points, but complicated concurrency control and atomicity problems among the many distributed transactions follow.

From the point of view of when updates are propagated relative to transaction commit, protocols divide into eager and lazy. The difference is that the former propagates updates before the transaction commits, while the latter propagates the transaction's operations to the other replicas after committing. In effect, the former is what is usually called synchronous replication and the latter asynchronous replication.
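The eager/lazy distinction in a primary/copy setting can be sketched as follows. This is an illustrative toy, not a real replication protocol: the "network" is direct dictionary access, and `drain` stands in for the background replication thread a real lazy system would run.

```python
import queue

primary = {}
replicas = [{}, {}]
pending = queue.Queue()       # lazy-propagation queue

def eager_write(key, value):
    """Eager: commit only after every replica has applied the write."""
    primary[key] = value
    for r in replicas:
        r[key] = value        # in reality: a round-trip per replica, e.g. via 2PC
    return "committed"

def lazy_write(key, value):
    """Lazy: commit locally, propagate later (MySQL-style async replication)."""
    primary[key] = value
    pending.put((key, value)) # replicas lag until the queue is drained
    return "committed"

def drain():
    """Stand-in for the background replication thread."""
    while not pending.empty():
        key, value = pending.get()
        for r in replicas:
            r[key] = value

eager_write("a", 1)
lazy_write("b", 2)
print(replicas[0])            # → {'a': 1}        ('b' not replicated yet)
drain()
print(replicas[0])            # → {'a': 1, 'b': 2}
```

The lazy write returns "committed" while the replicas are still missing `'b'`: the client gets a fast response, but a read from a replica in that window is stale, which is precisely the responsiveness-versus-consistency trade-off discussed next.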

The advantage of asynchronous replication is better responsiveness, at the expense of consistency; algorithms implementing such protocols generally need an additional compensation mechanism. The advantage of synchronous replication is that it guarantees consistency (typically via a two-phase commit protocol), but it carries greater overhead and worse availability (see the CAP section) and leads to more conflicts and deadlocks. It is worth mentioning that the lazy + primary/copy replication protocol is very practical in real production environments; MySQL replication in fact belongs to this category.
