On Optimizing the System Architecture of a Flash Sale (Seckill)

Source: Internet
Author: User
Tags: cas

I. The Problem

A flash sale (seckill) or rush-buying event generally goes through the stages of reservation, order placement, and payment. The stage that cannot withstand the load is order placement, which generally raises two problems:

1. High concurrency

A popular flash sale can have on the order of 100,000 (10w) users online at the same time, and that level of concurrency tests the site architecture from front to back.

2. Overselling

Every item has a quantity limit. Ensuring that the number of successfully placed orders does not exceed the available stock is a problem every rush-buying event has to face.

What makes a flash-sale system hard is that there is only one copy of the inventory, and a huge number of users read and write this single piece of data concurrently.

For example, Xiaomi's Tuesday flash sales may offer only 10,000 phones, but the instantaneous traffic may be several million or even tens of millions of requests.

II. The Architecture

A common site architecture is as follows:

1) Browser side, the topmost layer, where some JS code is executed

2) Site layer, which accesses back-end data and returns it to the browser

3) Service layer, which shields upstream layers from the underlying data details

4) Data layer, where the final inventory lives

III. Optimization Approach

1. Intercept requests as far upstream as possible: a traditional flash-sale system goes down because requests overwhelm the back-end data layer; read/write lock contention in the database becomes severe, responses slow down, and almost no orders succeed.

2. Make full use of caching: this is a typical read-heavy, write-light scenario and is ideal for caching.

IV. Optimization Details

1. Request interception at the browser layer

a) At the product level: once the user clicks "Query" or "Buy Ticket", grey out the button to stop the user from submitting the request repeatedly.

b) At the JS level: limit the user to submitting at most one request every X seconds.

A large share of invalid requests can be intercepted this way.

2. Request interception and page caching at the site layer

This guards against clients that bypass the browser-level checks and fire large numbers of HTTP requests directly at the server.

a) For the same UID, limit the access frequency and cache the page: all requests from that UID that reach the site layer within X seconds get the same cached page back.

b) For queries on the same item (for example, the phone on sale), cache the page: all requests for that item that reach the site layer within X seconds get the same cached page back.

A large share of invalid requests can again be intercepted this way (a minimal sketch of this throttling and page caching follows below).
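
Below is a minimal sketch, in Python with the redis-py client, of the two site-layer tricks above: per-UID frequency limiting and short-lived page caching. The key names, the 5-second window, and the render callback are illustrative assumptions, not part of the original design.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

RATE_WINDOW = 5   # "X seconds": at most one request per UID per window (assumed value)
PAGE_TTL = 5      # how long one rendered item page is reused (assumed value)

def allow_request(uid):
    # SET NX EX is atomic: only the first request from this UID in the window passes.
    return r.set(f"seckill:rate:{uid}", 1, nx=True, ex=RATE_WINDOW) is True

def get_item_page(item_id, render):
    # Everyone asking for the same item within PAGE_TTL seconds gets the same cached page.
    key = f"seckill:page:{item_id}"
    page = r.get(key)
    if page is None:
        page = render(item_id)            # only a cache miss goes further back
        r.set(key, page, ex=PAGE_TTL)
    return page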

3. Request interception and data caching at the service layer

a) Why let so many requests through to the database? For write requests, build a request queue and only let a limited number of writes through to the data layer at a time; if they all succeed, release the next batch, and once the inventory runs out, answer every write request still in the queue with "sold out".

b) For read requests, let the cache absorb the load: memcached or Redis can comfortably handle on the order of 100,000 QPS (10w QPS).

With this throttling in place, only a very small number of write requests, plus the few reads that miss the cache, ever reach the data layer (see the queue sketch below).
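
A minimal sketch of the service-layer write queue described in point a), assuming a single-process Python service: a lone consumer thread serializes all inventory writes, and requests that overflow the queue or arrive after the stock is gone are answered with "sold out". The stock figure, queue size, and db_decrement stand-in are illustrative.

import queue
import threading

STOCK = 10000                       # assumed inventory
order_queue = queue.Queue(maxsize=1000)

def db_decrement():
    # Stand-in for the real data-layer write; only the worker thread ever calls it.
    global STOCK
    if STOCK > 0:
        STOCK -= 1
        return True
    return False

def worker():
    while True:
        uid, reply = order_queue.get()
        reply.put("ok" if db_decrement() else "sold out")
        order_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def place_order(uid):
    reply = queue.Queue(maxsize=1)
    try:
        order_queue.put((uid, reply), block=False)
    except queue.Full:
        return "sold out"           # fail fast instead of letting requests pile up
    return reply.get()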

4. The data layer: a leisurely stroll

By the time requests reach the data layer, almost none are left. The inventory is limited anyway, so letting excessive requests through to the database achieves nothing.

V. Solutions

Regarding overselling, first establish a premise: to prevent overselling, every inventory-decrement operation must include a guard to ensure the stock never goes negative. (Given the nature of MySQL transactions, this can only reduce the amount of overselling; it cannot eliminate it completely.)

UPDATE number SET x = x - 1 WHERE (x - 1) >= 0;
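
As a rough illustration of how that guarded decrement is used from application code (here with sqlite3 as a self-contained stand-in for MySQL), the caller checks the number of affected rows: zero rows changed means the stock was already exhausted. Table and column names follow the statement above; everything else is assumed.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE number (x INTEGER)")
conn.execute("INSERT INTO number (x) VALUES (10000)")

def try_decrement(conn):
    cur = conn.execute("UPDATE number SET x = x - 1 WHERE (x - 1) >= 0")
    conn.commit()
    return cur.rowcount == 1        # 0 affected rows: nothing left to sell

print(try_decrement(conn))          # True while stock remains, False afterwards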

Solution 1:

Move the inventory forward from MySQL into Redis so that all writes happen in memory. Because there is no lock contention in Redis, requests do not wait on one another, and Redis's write and read performance is far higher than MySQL's, which solves the high-concurrency performance problem. The changed data is then written back to the DB asynchronously, for example through a queue.

Pros: solves the performance problem.

Cons: does not solve the overselling problem, and because the DB is written asynchronously, the data in the DB and in Redis may be inconsistent at any given moment.
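
A minimal sketch of Solution 1 in Python with redis-py: the stock check and decrement run entirely in Redis, and accepted orders are pushed onto a list to be drained into MySQL by a separate consumer. The key names and stock figure are assumptions, and the check-then-decrement is deliberately naive, which is exactly why this scheme is fast yet still oversells.

import redis

r = redis.Redis(decode_responses=True)

STOCK_KEY = "seckill:stock:phone"      # assumed key name
r.set(STOCK_KEY, 10000)                # inventory is loaded into Redis before the sale

def buy(uid):
    stock = int(r.get(STOCK_KEY) or 0)
    if stock <= 0:
        return False
    # Two buyers can both pass the check above before either decrements,
    # so this does not solve overselling, matching the cons noted above.
    r.decr(STOCK_KEY)
    # The accepted order is queued and written back to MySQL asynchronously,
    # hence the possible DB/Redis inconsistency.
    r.rpush("seckill:orders", uid)
    return True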

Solution 2:

Introduce a queue: all DB write operations are placed in a single queue and processed strictly serially. Once the inventory threshold is reached, the queue is no longer consumed and the purchase function is switched off. This solves the overselling problem.

Pros: solves the overselling problem and slightly improves performance.

Cons: throughput is bounded by the slower of the queue processor and the DB's write path, and when many items are on sale at the same time, each needs its own queue.

Solution 3:

Move the write operation forward into memcached (MC) and use MC's lightweight locking mechanism, CAS (check-and-set), to implement the inventory decrement.

Pros: reads and writes happen in memory, so operations are fast, and the lightweight lock guarantees that only one write succeeds at a time, which solves the inventory-decrement problem.

Cons: not benchmarked; it is unclear whether CAS will cause a large number of failed updates under high concurrency, and the locking will in any case have some impact on concurrent performance.
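
A minimal sketch of the CAS-based decrement, here using the pymemcache client as an assumed stand-in for whatever MC client the original system used; the key name, retry count, and server address are illustrative.

from pymemcache.client.base import Client

mc = Client(("localhost", 11211))
mc.set("seckill:stock", b"10000")      # assumed key and initial stock

def buy(retries=10):
    for _ in range(retries):
        value, cas_token = mc.gets("seckill:stock")   # read the value plus its CAS token
        stock = int(value)
        if stock <= 0:
            return False                              # sold out
        if mc.cas("seckill:stock", str(stock - 1).encode(), cas_token):
            return True                               # no one else wrote in between
        # cas() failed: another buyer updated the key first, so re-read and retry.
    return False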

Solution 4:

Split the commit into a two-phase form: first apply for a number, then confirm the order. Use Redis's atomic increment to issue numbers (unlike MySQL's auto-increment, it has no gaps), together with Redis's transaction features, and guarantee that anyone holding a number less than or equal to the inventory threshold can successfully submit an order. The data is then asynchronously written back to the DB.

Pros: solves the overselling problem, and since inventory reads and writes all happen in memory, it solves the performance problem at the same time.

Cons: because the DB is written asynchronously, the data may be temporarily inconsistent. It can also undersell: if a user who obtains a number never actually places the order, the stock may show as exhausted even though the number of real orders has not reached the inventory threshold.
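
A minimal sketch of the number-issuing half of Solution 4 in Python with redis-py. For simplicity it relies on a single atomic INCR rather than a full MULTI/EXEC transaction; the threshold, key names, and the confirm step are illustrative assumptions.

import redis

r = redis.Redis(decode_responses=True)

STOCK_THRESHOLD = 10000               # assumed inventory threshold
COUNTER_KEY = "seckill:number"        # counter used to issue numbers
r.set(COUNTER_KEY, 0)

def apply_for_number(uid):
    # Phase 1: claim a number. INCR is atomic and gapless, so every caller
    # gets a unique, consecutive value.
    number = r.incr(COUNTER_KEY)
    if number > STOCK_THRESHOLD:
        return None                   # numbers past the threshold may not order
    return number

def confirm(number, uid):
    # Phase 2: confirm the order; the real row is written to the DB asynchronously.
    # If the holder never confirms, the stock looks used up while real orders fall
    # short of the threshold, which is the underselling risk noted above.
    r.rpush("seckill:confirmed", f"{number}:{uid}")
    return True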

Summary

Scale out + rate limiting + in-memory caching + queueing.
