Second-kill system: concurrent queue selection, request interface design, and data safety under concurrent requests


While reading an introduction to second-kill (flash-sale) systems, I came across a discussion of concurrent queues; an excerpt follows.

    • Selection of concurrent queues

Java's java.util.concurrent package provides three commonly used concurrent queue implementations: ArrayBlockingQueue, ConcurrentLinkedQueue, and LinkedBlockingQueue.

ArrayBlockingQueue is a blocking queue with a fixed initial capacity, so we can use it as the queue of successful orders handed to the database module: for example, if there are 10 items, we create an array-backed queue of size 10.

ConcurrentLinkedQueue is a lock-free queue implemented with CAS primitives; it is a non-blocking (asynchronous) queue. Enqueueing is very fast, while dequeueing is slightly slower.

LinkedBlockingQueue is also a blocking queue; both enqueue and dequeue take a lock, and a thread dequeueing from an empty queue is temporarily blocked.

In the request preprocessing phase, because enqueue traffic in our system far exceeds dequeue traffic, the queue is generally never empty, so we can choose ConcurrentLinkedQueue as the implementation of our request queue.
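As a rough illustration, here is a minimal sketch of using ConcurrentLinkedQueue as the request queue in the preprocessing phase. The SeckillRequest class and the method names are assumptions made for this example, not part of the original design.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal sketch: a lock-free request queue for the preprocessing phase.
public class RequestQueue {

    // Hypothetical request object: just a user id in this sketch.
    static class SeckillRequest {
        final long userId;
        SeckillRequest(long userId) { this.userId = userId; }
    }

    // Unbounded, lock-free queue; offer() uses CAS and never blocks.
    private final ConcurrentLinkedQueue<SeckillRequest> queue = new ConcurrentLinkedQueue<>();

    // Called by the web layer for every incoming request: enqueue and return immediately.
    public void accept(SeckillRequest request) {
        queue.offer(request);
    }

    // Called by a worker thread: poll() returns null when the queue is empty instead of blocking.
    public SeckillRequest next() {
        return queue.poll();
    }
}
```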

1. Reasonable design of the request interface

A second-kill or flash-sale page is usually made up of two parts: one is the static content such as HTML, and the other is the backend web request interface that actually participates in the second kill.

The static HTML and similar content are usually served through a CDN, so the pressure there is not large; the core bottleneck is the backend request interface. This interface must support highly concurrent requests and, just as importantly, must be as "fast" as possible, returning the result to the user in the shortest possible time. To achieve this, it is better for the interface's backend storage to use memory-level operations. Storage that writes directly to MySQL is not appropriate; if that kind of complex business logic is really needed, asynchronous writes are recommended.
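As a rough, hedged sketch of "memory-level operations plus asynchronous writes": the remaining stock is kept in an in-memory counter so the request can be answered immediately, and successful orders are handed to a background thread for persistence. The class and method names, the stock count of 10, and persistOrder() as a stand-in for the real MySQL write are all assumptions for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: decide success or failure purely in memory, persist asynchronously.
public class FastSeckillService {

    private final AtomicInteger stock = new AtomicInteger(10);                 // in-memory stock counter
    private final BlockingQueue<Long> successfulUsers = new ArrayBlockingQueue<>(10);

    public FastSeckillService() {
        // A single background thread drains successful orders and persists them asynchronously.
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    persistOrder(successfulUsers.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // The request is answered from memory, so it returns quickly.
    public boolean trySeckill(long userId) {
        if (stock.getAndDecrement() > 0) {
            successfulUsers.offer(userId);   // hand off to the async writer
            return true;
        }
        stock.incrementAndGet();             // undo the decrement when already sold out
        return false;
    }

    private void persistOrder(long userId) {
        // placeholder for the real MySQL insert
        System.out.println("order persisted for user " + userId);
    }
}
```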

Of course, some second-kill and flash-sale systems use "delayed feedback": the user does not learn the result immediately and can only see on the page after some time whether the second kill succeeded. However, this is "lazy" behavior, gives the user a poor experience, and is easily perceived by users as a "black-box operation".

Data security under high concurrency

We know that when multiple threads write to the same file there is a "thread safety" problem (if multiple threads run the same piece of code at the same time and each run produces the same result as a single-threaded run, the behavior is as expected and is thread-safe). For a MySQL database, its built-in locking mechanism solves this well, but MySQL is not recommended in large-scale concurrency scenarios. In second-kill and flash-sale scenarios there is another problem, "overselling": if this is not controlled carefully, more items will be sold than are in stock. We have all heard of e-commerce flash-sale events in which, after buyers successfully placed orders, the merchant refused to acknowledge the orders as valid and refused to ship. The problem there is not necessarily that the merchant was dishonest; it may be an overselling risk caused by the technical side of the system.

1. Causes of overselling

Suppose that in a flash-sale scenario we have only 100 items in total, and at the last moment 99 of them have already been consumed, leaving only the last one. At this moment the system receives several concurrent requests; they all read the same remaining stock, all pass the stock check, and the result is overselling. (This is the same scenario as in the previous article.)

In this scenario, concurrent user B also "snaps up successfully", so more people obtain the product than there is stock. This situation arises very easily under high concurrency.
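The root cause is a non-atomic "check then act" on the shared stock value. A minimal sketch of the unsafe pattern, with a plain integer field standing in for the stored stock:

```java
// Unsafe check-then-act: two threads can both see remaining == 1,
// both pass the check, and both decrement, selling the last item twice.
public class UnsafeInventory {

    private int remaining = 1;   // only the last item is left

    public boolean buy() {
        if (remaining > 0) {                 // thread A and thread B can both read 1 here
            remaining = remaining - 1;       // both then decrement, and remaining becomes -1
            return true;                     // both "succeed": this is the oversell
        }
        return false;
    }
}
```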

2. Pessimistic locking ideas

There are many ways to solve thread safety; one direction worth discussing is pessimistic locking.

Pessimistic locking means that while modifying the data, we hold a lock and exclude modifications from outside requests; any request that encounters the lock must wait.
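A minimal sketch of this pessimistic approach in Java, serializing the stock deduction with a synchronized block (a database-level equivalent would be a SELECT ... FOR UPDATE row lock); the class and field names are assumptions:

```java
// Pessimistic locking: every modification must first acquire the lock,
// so concurrent requests wait in line instead of racing on the stock.
public class PessimisticInventory {

    private int remaining = 100;
    private final Object lock = new Object();

    public boolean buy() {
        synchronized (lock) {          // competing requests block here until the lock is free
            if (remaining > 0) {
                remaining--;
                return true;
            }
            return false;
        }
    }
}
```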

While the above approach does solve the thread-safety problem, do not forget that our scenario is "high concurrency". That is, there will be many such modification requests, each of which has to wait for the "lock"; some threads may never get a chance to grab it, and those requests will die waiting. Meanwhile, because there are so many such requests, the average response time of the system rises, the available connections are exhausted, and the system falls into an abnormal state.

3. FIFO Queue ideas

Well then, let's change the scheme a little: we put requests directly into a queue and process them FIFO (first in, first out), so that no request is left forever unable to obtain the lock. Seeing this, doesn't it feel a bit like forcibly turning multithreading into single threading?
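A minimal sketch of the FIFO idea, assuming a single worker thread and a hypothetical handle() method that performs the actual deduction; note that the queue here is unbounded, which is exactly the memory risk discussed next:

```java
import java.util.concurrent.LinkedBlockingQueue;

// FIFO processing: all requests go into one queue and a single worker
// consumes them in arrival order, so no request is starved waiting for a lock.
public class FifoProcessor {

    private final LinkedBlockingQueue<Long> requests = new LinkedBlockingQueue<>();

    public void submit(long userId) {
        requests.offer(userId);                // the web layer enqueues and returns
    }

    public void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    handle(requests.take());   // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private void handle(long userId) {
        // placeholder for the real stock deduction and order creation
    }
}
```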

Now we have solved the locking problem: all requests are handled in "FIFO" queue order. But a new problem appears: in a high-concurrency scenario there are so many requests that the queue may "blow up" the memory in an instant, and the system again falls into an abnormal state. We could design a huge in-memory queue, but the speed at which the system processes requests inside the queue simply cannot keep up with the flood of requests pouring into it. In other words, the more requests accumulate in the queue, the worse the web system's average response time becomes, and the system is still stuck in an abnormal state.

4. Optimistic Locking ideas

At this point, we can discuss the idea of "optimistic locking". Optimistic locking is a looser locking mechanism than "pessimistic locking", mostly implemented with a version number. The idea is that every request is allowed to attempt the modification, but each one obtains the data's version number; only the request whose version number still matches can update successfully, and the others are told the purchase failed. With this approach we no longer need to worry about queues, although it does increase the CPU overhead of retries. Overall, however, this is a better solution.
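A minimal sketch of version-number optimistic locking using plain JDBC; the goods table with its stock and version columns, and the connection handling, are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Optimistic locking with a version column: the UPDATE only succeeds if the
// version read earlier is still current; otherwise the purchase is reported as failed.
public class OptimisticStockDao {

    public boolean deductStock(Connection conn, long goodsId) throws SQLException {
        int stock;
        int version;
        try (PreparedStatement read = conn.prepareStatement(
                "SELECT stock, version FROM goods WHERE id = ?")) {
            read.setLong(1, goodsId);
            try (ResultSet rs = read.executeQuery()) {
                if (!rs.next()) return false;
                stock = rs.getInt("stock");
                version = rs.getInt("version");
            }
        }
        if (stock <= 0) return false;        // sold out

        try (PreparedStatement update = conn.prepareStatement(
                "UPDATE goods SET stock = stock - 1, version = version + 1 "
              + "WHERE id = ? AND version = ?")) {
            update.setLong(1, goodsId);
            update.setInt(2, version);
            return update.executeUpdate() == 1;   // 0 rows means another request updated first
        }
    }
}
```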

Many pieces of software and many services support "optimistic locking"; for example, WATCH in Redis is one of them. Through such an implementation, we can guarantee the safety of the data.
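As a minimal sketch of the same idea with Redis WATCH/MULTI/EXEC, using the Jedis client; the key name and the handling of the EXEC result are assumptions for illustration:

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

// Optimistic locking with Redis: WATCH the stock key, and if another client
// changes it before EXEC, the transaction is discarded and the purchase fails.
public class RedisOptimisticSeckill {

    public boolean trySeckill(Jedis jedis, String stockKey) {
        jedis.watch(stockKey);                          // start watching the stock key
        int stock = Integer.parseInt(jedis.get(stockKey));
        if (stock <= 0) {
            jedis.unwatch();
            return false;                               // sold out
        }
        Transaction tx = jedis.multi();                 // queue the decrement
        tx.decr(stockKey);
        List<Object> result = tx.exec();                // null/empty when the watched key changed
        return result != null && !result.isEmpty();
    }
}
```

If EXEC reports that the watched key changed, the request is simply told the purchase failed, which matches the "others are told the purchase failed" behavior described above.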
