Flash Sale (Seckill) System


Original source: http://www.blogjava.net/stevenjohn/archive/2015/05/23/425248.html

I. Why it is hard

The reason a flash-sale system is hard: there is only one copy of the inventory, and everyone reads and writes that data within a concentrated window of time.

For example, Xiaomi runs a flash sale every Tuesday; there may be only 10,000 phones, yet the instantaneous traffic can reach millions or even tens of millions of requests.

Another example is ticket grabbing on 12306, which is similar to a flash sale, with instantaneous traffic that is greater still.

II. Common architecture


At a traffic level of hundreds of millions of requests, a common site architecture has four tiers:

1) Browser layer, at the top, where some JS code executes

2) Site layer, which reads back-end data and assembles the HTML page returned to the browser

3) Service layer, which shields the underlying data details from upstream

4) Data layer, where the inventory ultimately lives; MySQL is typical

III. Directions for optimization

1) Intercept requests as far upstream in the system as possible: traditional flash-sale systems fall over because requests overwhelm the back-end data layer, read/write lock contention becomes severe, concurrency is high while responses are slow, and almost all requests time out. Although the traffic is huge, the effective flow of successfully placed orders is tiny ("a train has only 2,000 tickets and 2,000,000 people try to buy them; almost nobody succeeds, so the effective request rate approaches 0").

2) Make full use of caching: this is a classic read-heavy, write-light scenario ("a train has only 2,000 tickets and 2,000,000 people try to buy them; at most 2,000 can order successfully while everyone else merely queries the inventory, so writes are only 0.1% of requests and reads are 99.9%"), which is ideal for caching.
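The 0.1%/99.9% split quoted above follows directly from the example's numbers; a quick back-of-the-envelope check (the ticket and buyer counts are taken from the text):

```python
tickets = 2_000          # seats actually available
buyers = 2_000_000       # people trying to buy

# At most one successful write (order) per ticket; everyone else only reads.
write_ratio = tickets / buyers
read_ratio = 1 - write_ratio
print(f"writes: {write_ratio:.1%}, reads: {read_ratio:.1%}")
```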

IV. Optimization details

4.1) Browser-layer request interception

You click the "Query" button and the system feels stuck, the progress bar crawls, so as a user you instinctively keep clicking "Query", again and again. Does it help? Not at all: it only raises the system load for nothing (one user clicks 5 times, so 80% of the requests are redundant). What can be done?

a) At the product level, once the user clicks "Query" or "Purchase", gray out the button to stop repeated submissions

b) At the JS level, limit the user to one submitted request every X seconds

With this rate limiting, 80% of the traffic is already stopped.
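The X-second rule in b) is just a client-side throttle. A minimal sketch of the logic (in Python rather than JS; the class and parameter names are illustrative, not from the original):

```python
import time

class SubmitThrottle:
    """Allow at most one submission every `window_seconds`.
    Sketches the JS-level rule described above."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.last_submit = float("-inf")

    def try_submit(self):
        now = time.monotonic()
        if now - self.last_submit < self.window:
            return False      # too soon: swallow the repeated click
        self.last_submit = now
        return True           # forward this request to the server

throttle = SubmitThrottle(window_seconds=5.0)
print(throttle.try_submit())  # first click goes through: True
print(throttle.try_submit())  # immediate repeat is dropped: False
```

The same idea in the browser is a few lines of JS that record the last submit time and ignore clicks arriving inside the window.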

4.2) Site-layer request interception and page caching

Browser-layer interception only stops novice users (though that is 99% of users!). Seasoned programmers are not fooled: they write a for loop and call your back-end HTTP interface directly. How do you handle that?

a) For the same UID, limit the access frequency and cache the page: all requests from that UID reaching the site layer within X seconds get the same page back

b) For queries on the same item, e.g. a phone model or a train run, cache the page: all requests reaching the site layer within X seconds get the same page back

With this rate limiting, 99% of the traffic is intercepted at the site layer.
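The two site-layer rules can be sketched together: a per-UID frequency limit plus a short-lived page cache keyed by item. This is a minimal in-memory sketch (class, method, and item names are illustrative; a real site layer would use a shared store such as Redis across servers):

```python
import time

class SiteLayerGuard:
    """Per-UID frequency limiting plus a short-lived page cache: every
    request for the same item within `window_seconds` gets the same
    rendered page, so the back end renders at most once per window."""

    def __init__(self, window_seconds=3.0):
        self.window = window_seconds
        self.last_seen = {}    # uid -> time of last accepted request
        self.page_cache = {}   # item_id -> (expires_at, html)

    def allowed(self, uid):
        """Reject a UID that comes back within the window."""
        now = time.monotonic()
        if now - self.last_seen.get(uid, float("-inf")) < self.window:
            return False
        self.last_seen[uid] = now
        return True

    def get_page(self, item_id, render):
        """Return the cached page if still fresh; otherwise render once."""
        now = time.monotonic()
        cached = self.page_cache.get(item_id)
        if cached and cached[0] > now:
            return cached[1]            # everyone in the window sees this copy
        html = render(item_id)          # back end is hit once per window
        self.page_cache[item_id] = (now + self.window, html)
        return html

renders = []
def render(item_id):
    renders.append(item_id)
    return f"<html>inventory page for {item_id}</html>"

guard = SiteLayerGuard(window_seconds=3.0)
page1 = guard.get_page("train-G101", render)
page2 = guard.get_page("train-G101", render)  # within the window: cache hit
print(len(renders))  # the back end rendered only once
```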

4.3) Service-layer request interception and data caching

Site-layer interception only stops ordinary programmers. Suppose a serious attacker controls a botnet of 100,000 machines (and suppose tickets require no real-name authentication); he is then no longer constrained by the per-UID limit. What then?

a) Look, I am the service layer; I know Xiaomi has only 10,000 phones and the train has only 2,000 tickets, so what is the point of letting 100,000 requests through to the database? For write requests, queue them and release only a limited batch of writes to the data layer at a time; if the batch all succeeds, release the next batch, and once the inventory is gone, answer every write request still in the queue with "sold out"

b) For read requests, need it even be said? Absorb them with a cache; whether memcached or Redis, 100,000 requests per second on a single machine should be no problem

With this rate limiting, only a handful of write requests, plus the few reads that miss the cache, ever reach the data layer; 99.9% of requests are stopped.
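The write-request queue in a) can be sketched as follows. This is a minimal single-process sketch, not the original system's implementation: the `stock` counter stands in for the database inventory row, and all names are illustrative.

```python
from collections import deque

class WriteGate:
    """Service-layer write interception: queue order requests and let only
    `batch_size` of them through to the data layer per round; once the
    inventory is gone, answer 'sold out' without touching the database."""

    def __init__(self, stock, batch_size=10):
        self.stock = stock          # stands in for the DB inventory row
        self.batch_size = batch_size
        self.queue = deque()

    def submit(self, uid):
        if self.stock <= 0:
            return "sold out"       # fail fast: no DB access at all
        self.queue.append(uid)
        return "queued"

    def drain_once(self):
        """Release one limited batch of writes to the data layer."""
        results = {}
        for _ in range(min(self.batch_size, len(self.queue))):
            uid = self.queue.popleft()
            if self.stock > 0:
                self.stock -= 1     # the only place that 'writes the DB'
                results[uid] = "ordered"
            else:
                results[uid] = "sold out"
        return results

gate = WriteGate(stock=2, batch_size=10)
for uid in ("u1", "u2", "u3"):
    gate.submit(uid)
print(gate.drain_once())  # u1/u2 get tickets, u3 is told 'sold out'
```

The point of the design is that the database sees at most `batch_size` writes per round regardless of how many requests arrive, and once `stock` hits zero, no request touches it at all.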

4.4) The data layer strolls along

By the time requests reach the data layer, there are almost none left, and a single machine can handle them. As said before, the inventory is limited and Xiaomi's production capacity is limited; letting excess requests through to the database serves no purpose.

V. Summary

There is little to add; the ideas above should already be clear. For flash-sale systems, the author's two architectural optimization ideas bear repeating:

1) Intercept requests as far upstream in the system as possible

2) For read-heavy, write-light scenarios, make heavy use of caching
