My IoT Project (Part 14): Distributed Transactions and Online Project Transactions



The 2.0 service-oriented architecture of the platform requires database sharding, and sharding inevitably raises the problem of distributed transaction processing. The design and coding workload is therefore far greater than in the 1.0 single-application architecture. Still, the key to this kind of work is not the implementation but the thinking behind it: before solving the distributed transaction problem, you first have to think through, several times over, how you are going to do it. That is the real focus.

In fact, there are many ways to handle distributed transactions, so there is no shortage of solutions; the key is to find the method that suits your own business scenario. My previous projects involved several such approaches. The one I remember most clearly was a mobile top-up business: after a user completed an online payment through a bank, the platform recharged the user's mobile phone number, which meant keeping two transactions consistent: the bank payment and the mobile top-up. At the beginning we handled this with a daily reconciliation: every early morning we uploaded an order file to an FTP server, notified the bank to fetch the statement file for reconciliation, and then applied the appropriate settlement for any discrepancies. Of course, every problem still needs its own analysis. If we simply reused that method for our current business scenario, would it work? No.

Our business scenario is as follows: a user scans the QR code with the APP to start the machine and pays 1 RMB, which is split among the parties for every order.

1. The user's account is debited 1 RMB.

2. The platform account receives a 2-cent share.

3. The city partner account receives a 4-cent share.

4. The merchant account receives a 4-cent share.

Note: as mentioned earlier for the 2.0 architecture, the databases are sharded into a user database, a platform database, a city partner database, and a merchant database.

In a business scenario like this, we currently implement distributed transactions with log (event)-based eventual consistency. This of course involves many building blocks: local single-database transactions, MQ message notifications, scheduled jobs, asynchronous processing, and so on.

Before describing log (event)-based eventual consistency in detail, we should first distinguish rigid transactions from flexible transactions. The concepts are not complicated. Rigid transactions strictly follow the ACID principles; database transactions in a standalone environment, such as the Spring transactions we implemented on the 1.0 platform, are a typical example. Flexible transactions are slightly more complex: they follow the BASE theory, and common implementations include two-phase commit (2PC), TCC (Try-Confirm-Cancel) compensation, reliable message-based asynchronous delivery, and best-effort notification.

What I want to stress is this: a large transaction = small transactions (atomic local transactions) + asynchrony (message notifications).

In this example, each database contains two event tables: eventPublish (events waiting to be published) and eventProcess (events waiting to be processed). The table design can be tailored to your own business scenario.
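The original post's table layout is not reproduced here. As a rough illustration (field names and status values are assumptions, not the exact schema used in the project), an event-table record might map to a Java entity like this:

```java
import java.time.LocalDateTime;

/**
 * Minimal sketch of an "event to be published" record (eventPublish).
 * The eventProcess table on the consuming side mirrors this shape,
 * with status 1 = to be processed and 2 = processed.
 */
public class EventPublish {
    private Long id;                  // primary key
    private String eventId;           // globally unique event id, used for de-duplication
    private String eventType;         // e.g. "USER_FUNDS_DEDUCTED" (hypothetical name)
    private String payload;           // event content, typically serialized as JSON
    private int status;               // 1 = to be published, 2 = published
    private LocalDateTime createTime;
    private LocalDateTime updateTime;

    // getters and setters omitted for brevity
}
```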

Let's take a simple example: the APP scans the QR code to start the machine, which involves two microservices.

1. Reduce the user's funds (database A)

2. Increase the merchant's funds (database B)

The idea of implementing eventual consistency based on logs (events) is as follows:

1. After receiving the user's request, the user service starts a local transaction, reduces the user's funds, and creates a record with status 1 (to be published) in the eventPublish table, with the payload recording the event content; then it commits the transaction. (A sketch of the user-service side follows this list.)

2. A timer in the user service scans periodically: it starts a transaction in the method and queries eventPublish for records with status 1. For each record found, it takes the payload and publishes a message to the merchant through MQ (the merchant side has a real-time MQ listener).

3. The merchant service listens for the "user funds reduced" event from MQ. It first checks whether the event already exists in its database; if it does, it simply sends an acknowledgement back to the user side through MQ. Otherwise it creates a record with status 1 (to be processed) in the eventProcess table, with the payload recording the event content, and once the record is saved successfully it returns a "message received" acknowledgement to the user side (which also has a real-time MQ listener). The purpose of the acknowledgement is to let the user side set the eventPublish record to status 2 so that it is not scanned again.

4. The user side receives the acknowledgement and sets the eventPublish record to status 2 (published), so the next scan skips it. Without this, messages telling the merchant to add funds would be sent over and over.

5. A timer in the merchant service starts a transaction in the method and queries eventProcess for records with status 1 (to be processed). For each record found, it takes the payload, adds the funds to the merchant as the business operation, sets the eventProcess record to status 2 (completed), and finally commits the transaction. (A sketch of the merchant-service side follows a little further below.)
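To make steps 1, 2, and 4 concrete, here is a minimal sketch of the user-service side. It assumes Spring (JdbcTemplate, @Scheduled, @Transactional) with RabbitMQ as the broker and MySQL-style tables; the post does not name a specific MQ product, and all table, queue, exchange, and column names are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserFundsService {

    private final JdbcTemplate jdbc;
    private final RabbitTemplate rabbit;

    public UserFundsService(JdbcTemplate jdbc, RabbitTemplate rabbit) {
        this.jdbc = jdbc;
        this.rabbit = rabbit;
    }

    /** Step 1: deduct funds and record the event in ONE local transaction. */
    @Transactional
    public String deductFunds(long userId, long merchantId, long amountInCents) {
        String eventId = UUID.randomUUID().toString();
        jdbc.update("UPDATE user_account SET balance = balance - ? WHERE user_id = ? AND balance >= ?",
                amountInCents, userId, amountInCents);
        jdbc.update("INSERT INTO event_publish (event_id, event_type, payload, status, create_time) "
                        + "VALUES (?, ?, ?, 1, NOW())",
                eventId, "USER_FUNDS_DEDUCTED",
                "{\"eventId\":\"" + eventId + "\",\"userId\":" + userId
                        + ",\"merchantId\":" + merchantId + ",\"amount\":" + amountInCents + "}");
        return eventId;
    }

    /** Step 2: timer scans unpublished events (status 1) and pushes them to MQ. */
    @Scheduled(fixedDelay = 5000)
    public void publishPendingEvents() {
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT event_id, payload FROM event_publish WHERE status = 1");
        for (Map<String, Object> row : rows) {
            // Re-sending the same event is acceptable: the merchant side de-duplicates by event_id.
            rabbit.convertAndSend("funds.exchange", "merchant.funds.add", row.get("payload"));
        }
    }

    /** Step 4: the merchant's acknowledgement arrives; mark the event as published (status 2). */
    @RabbitListener(queues = "user.event.ack")
    public void onMerchantAck(String eventId) {
        jdbc.update("UPDATE event_publish SET status = 2, update_time = NOW() WHERE event_id = ?", eventId);
    }
}
```

Because the merchant side de-duplicates by event_id, re-sending the same event on every scan is harmless; only the acknowledgement in step 4 stops the resends.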

That is roughly how it is implemented. This approach gives high request throughput because the work is split into asynchronous stages.
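Under the same assumptions, the merchant-service side (steps 3 and 5 above) might look like the following sketch; again, queue and table names are illustrative:

```java
import java.util.List;
import java.util.Map;

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@Service
public class MerchantFundsService {

    private final JdbcTemplate jdbc;
    private final RabbitTemplate rabbit;
    private final ObjectMapper mapper = new ObjectMapper();

    public MerchantFundsService(JdbcTemplate jdbc, RabbitTemplate rabbit) {
        this.jdbc = jdbc;
        this.rabbit = rabbit;
    }

    /** Step 3: receive the "user funds deducted" event, de-duplicate, store it, acknowledge. */
    @RabbitListener(queues = "merchant.funds.add")
    @Transactional
    public void onUserFundsDeducted(String payload) throws Exception {
        JsonNode event = mapper.readTree(payload);
        String eventId = event.get("eventId").asText();

        // Idempotency check: if the event was already recorded, just re-acknowledge it.
        Integer count = jdbc.queryForObject(
                "SELECT COUNT(*) FROM event_process WHERE event_id = ?", Integer.class, eventId);
        if (count == null || count == 0) {
            jdbc.update("INSERT INTO event_process (event_id, payload, status, create_time) VALUES (?, ?, 1, NOW())",
                    eventId, payload);
        }
        // Tell the user service the event was received so it can set eventPublish to status 2.
        rabbit.convertAndSend("funds.exchange", "user.event.ack", eventId);
    }

    /** Step 5: timer picks up pending events (status 1), credits the merchant, marks them done (status 2). */
    @Scheduled(fixedDelay = 5000)
    @Transactional
    public void processPendingEvents() throws Exception {
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT event_id, payload FROM event_process WHERE status = 1");
        for (Map<String, Object> row : rows) {
            JsonNode event = mapper.readTree((String) row.get("payload"));
            jdbc.update("UPDATE merchant_account SET balance = balance + ? WHERE merchant_id = ?",
                    event.get("amount").asLong(), event.get("merchantId").asLong());
            jdbc.update("UPDATE event_process SET status = 2, update_time = NOW() WHERE event_id = ?",
                    row.get("event_id"));
        }
    }
}
```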

 

Of course, our actual business scenario is more complicated than this example: a single flow touches more atomic databases, but the general idea is the same, and the overall design follows the same pattern.

In our specific business, the whole process is still being optimized, mainly in two areas:

1. Exception and disaster-tolerance handling. For example, when a step in the middle of the chain cannot proceed for whatever reason, MQ and the scheduler keep retrying it. We detect this, and if several retries fail, the event enters a second-level message queue where it can be handled manually in the back office. (A sketch of this retry guard follows this list.)

2. More flexibility in the single-chain mode. For example, the step after the user is not necessarily the merchant; it may be a city partner or another user. In that case, parameters carried in the MQ message determine where the next step goes.
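As a rough illustration of point 1, one common approach is to keep a retry counter on the event record and divert events that keep failing into a second-level queue for manual back-office handling. The sketch below is a variant of the step-2 publisher, with an assumed retry_count column and a hypothetical status 3 for "needs manual handling":

```java
import java.util.List;
import java.util.Map;

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class EventRetryGuard {

    private static final int MAX_RETRIES = 5; // assumed threshold

    private final JdbcTemplate jdbc;
    private final RabbitTemplate rabbit;

    public EventRetryGuard(JdbcTemplate jdbc, RabbitTemplate rabbit) {
        this.jdbc = jdbc;
        this.rabbit = rabbit;
    }

    /** Re-publish pending events, but park "stuck" ones for manual back-office handling. */
    @Scheduled(fixedDelay = 10000)
    public void republishOrPark() {
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT event_id, payload, retry_count FROM event_publish WHERE status = 1");
        for (Map<String, Object> row : rows) {
            String eventId = (String) row.get("event_id");
            int retries = ((Number) row.get("retry_count")).intValue();
            if (retries >= MAX_RETRIES) {
                // Status 3 (hypothetical) = needs manual handling; also push to a second-level queue.
                jdbc.update("UPDATE event_publish SET status = 3 WHERE event_id = ?", eventId);
                rabbit.convertAndSend("funds.exchange", "manual.review", row.get("payload"));
            } else {
                jdbc.update("UPDATE event_publish SET retry_count = retry_count + 1 WHERE event_id = ?", eventId);
                rabbit.convertAndSend("funds.exchange", "merchant.funds.add", row.get("payload"));
            }
        }
    }
}
```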

In short, when distributed business is handled on top of MQ, the stability bottleneck is MQ itself, so its continuous high availability and stability must be guaranteed. I will continue to share details of the problems we run into in this project.
