In the e-commerce industry, flash-sale (seckill) promotions have become a popular marketing tool. Inventory, however, is limited, and when the number of orders exceeds the stock, goods get oversold and the inventory count can even go negative.
Similar problems appear when snapping up train tickets, grabbing floors on forums, entering sweepstakes, or even grabbing red-envelope comments on Weibo: all of them cause blocking under high concurrency. How do you handle this so that the instantaneous traffic spike does not crash the server?
Here are some approaches that I personally consider feasible:
Scenario One: use a message queue
This can be implemented on top of a message queue such as MemcacheQ. The concrete approach is roughly as follows.
For example, suppose 100 tickets are available for users to grab. Put those 100 tickets in the cache and allow lock-free reads and writes. Under heavy concurrency, perhaps 500 people get their requests queued successfully; every request after the 500th goes straight to the static "activity ended" page. Of those 500, 400 cannot possibly get a ticket, so based on the order in which requests entered the queue, only the first 100 people purchase successfully and the remaining 400 are also sent to the "activity ended" page. The figure of 500 is just an example and can be tuned. The "activity ended" page must be static and must not touch the database, which keeps the pressure off the database.
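As a rough illustration of the queue idea, the sketch below uses a Redis list in place of MemcacheQ; Redis itself, the key name seckill:queue and the limit of 100 are all assumptions made for the example.

<?php
// Sketch only: queue-based admission, with a Redis list standing in for MemcacheQ.
$maxWinners = 100;                  // tickets actually available
$queueKey   = 'seckill:queue';      // hypothetical key holding the request order

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$userId = $_GET['uid'] ?? uniqid();

// rPush returns the length of the list after the push, i.e. this user's position.
$position = $redis->rPush($queueKey, $userId);

if ($position !== false && $position <= $maxWinners) {
    // Among the first 100 in the queue: go on to the order/payment flow.
    echo 'You grabbed a ticket, please continue to checkout.';
} else {
    // Everyone else goes to the static "activity ended" page.
    header('Location: /activity_end.html');
}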
Scenario Two: with multiple servers, split the traffic
Suppose there are m tickets, n product servers receiving requests, and x routing servers forwarding requests at random.
Assign m/n tickets directly to each product server.
Each product server keeps an in-memory counter, for example admitting m/n * (1 + 0.1) people.
When the in-memory counter is full:
later arrivals jump straight to the static "activity ended" page, and
the routing servers are notified to stop routing to this server (this point is debatable).
The m/n * (1 + 0.1) people admitted across all product servers are then forwarded to a single payment server for the payment step; whoever is quickest gets the ticket. With far fewer people at that stage, adding a lock is simple.
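A minimal sketch of the per-server counter, assuming APCu as the in-process counter (the article does not prescribe a specific mechanism) and hypothetical values for m, n and the payment URL:

<?php
// Sketch only: per-server admission counter, assuming APCu is available.
$m = 1000;                                   // total tickets (hypothetical)
$n = 10;                                     // number of product servers (hypothetical)
$capacity = (int) ceil($m / $n * 1.1);       // m/n * (1 + 0.1)

apcu_add('seckill:admitted', 0);             // create the counter once; no-op if it exists
$admitted = apcu_inc('seckill:admitted');    // atomically take a slot

if ($admitted !== false && $admitted <= $capacity) {
    // Within this server's quota: hand the user over to the payment server.
    header('Location: https://pay.example.com/checkout');   // hypothetical URL
} else {
    // Counter full: static "activity ended" page (and the routing layer
    // could be told to stop sending traffic to this server).
    header('Location: /activity_end.html');
}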
Scenario Three: on a single server, a Memcache lock will do
Product_key is the key for the ticket.
Product_lock_key is the key for the ticket lock.
While Product_key exists in memcached, all users may enter the ordering process.
On entering the payment step, first issue add(Product_lock_key, "1") to Memcached.
If the call succeeds, proceed to payment.
If it fails, someone else is already in the payment step; the process waits n seconds and then retries the add.
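The add() trick can be sketched like this; the key names follow the article, while the retry count, lock expiry and messages are assumptions:

<?php
// Sketch only: Memcached-based lock for the payment step.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function acquirePaymentLock(Memcached $mc, $lockKey, $retries = 10, $waitSeconds = 1)
{
    for ($i = 0; $i < $retries; $i++) {
        // add() only succeeds if the key does not exist yet, so it works as a lock.
        // A 30 s expiry keeps a crashed request from holding the lock forever.
        if ($mc->add($lockKey, '1', 30)) {
            return true;
        }
        sleep($waitSeconds);                 // someone else is paying; wait and retry
    }
    return false;
}

if ($mc->get('Product_key') !== false) {     // tickets still on sale: enter the order flow
    if (acquirePaymentLock($mc, 'Product_lock_key')) {
        // ... payment logic: decrement stock, create the order ...
        $mc->delete('Product_lock_key');     // release the lock when done
    } else {
        echo 'Server busy, please try again later.';
    }
}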
Scenario Four: use an exclusive file lock
When processing an order request, lock a file with flock. If the lock cannot be acquired, another order is being processed; either wait or show the user a "server busy" message.
This article focuses on the fourth scenario; the approximate code is as follows.
Blocking (wait) mode:
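A minimal sketch of the blocking approach, assuming a lock file at /tmp/seckill.lock and the stock kept in a plain text file:

<?php
// Sketch only: blocking flock. flock() waits here until the exclusive lock is free.
$fp = fopen('/tmp/seckill.lock', 'w+');

if (flock($fp, LOCK_EX)) {
    // Only one request at a time reaches this point.
    $stock = (int) file_get_contents('/tmp/stock.txt');
    if ($stock > 0) {
        file_put_contents('/tmp/stock.txt', $stock - 1);   // decrement inventory
        echo 'Order placed successfully.';
    } else {
        echo 'Sold out.';
    }
    flock($fp, LOCK_UN);        // release the lock
}

fclose($fp);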
Non-blocking mode:
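And a matching sketch of the non-blocking variant, where LOCK_NB makes flock() fail immediately instead of waiting, so the user can be told the server is busy (same assumed file paths and messages):

<?php
// Sketch only: non-blocking flock. LOCK_NB returns at once if the lock is taken.
$fp = fopen('/tmp/seckill.lock', 'w+');

if (flock($fp, LOCK_EX | LOCK_NB)) {
    $stock = (int) file_get_contents('/tmp/stock.txt');
    if ($stock > 0) {
        file_put_contents('/tmp/stock.txt', $stock - 1);   // decrement inventory
        echo 'Order placed successfully.';
    } else {
        echo 'Sold out.';
    }
    flock($fp, LOCK_UN);        // release the lock
} else {
    // Another order holds the lock: prompt instead of waiting.
    echo 'Server busy, please try again later.';
}

fclose($fp);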
The above has introduced ways of thinking about using PHP to prevent overselling under the blocking high concurrency of flash sales, ticket grabs, forum floor-grabbing, lotteries and the like, covering queues, counters and lock-based approaches. I hope it is helpful to friends interested in PHP.