With 1000 concurrent requests and only 3 prizes, how do I keep a flash sale ("second kill") stable using MySQL, so that the number of winners and prizes cannot go wrong?
How can this problem be solved? I hope experienced developers will share their experience and solutions. Thank you.
Reply content:
Redis queue
Redis copes well with high-concurrency scenarios because it executes commands on a single thread, so there are no race conditions between requests. It is also memory-based, so reads and writes are much faster than MySQL.
Because there are only three prizes, at most three requests can ever win, no matter how many arrive. So you can reject the vast majority of requests immediately with a "second kill failed" response; the few requests that pass this gate face almost no concurrency, so the subsequent processing barely has to worry about it.
Of course, this scheme only works because the number of prizes is small. If there were many prizes, the system would still face heavy concurrency even after discarding most requests, and you would need to consider other schemes.
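A minimal sketch of this "reject most requests up front" gate, simulating Redis's atomic DECR with a thread-safe counter (all names here are illustrative, not from the original post):

```python
# Simulate 1000 concurrent second-kill attempts against 3 prizes.
# A lock-protected counter stands in for an atomic Redis DECR.
import threading

PRIZES = 3
stock = PRIZES
lock = threading.Lock()
winners = []

def try_seckill(user_id):
    global stock
    # In Redis this would be a single atomic DECR; the lock simulates that.
    with lock:
        if stock <= 0:
            return False          # fast fail: "second kill failed"
        stock -= 1
        winners.append(user_id)   # only these few proceed to slower steps
        return True

threads = [threading.Thread(target=try_seckill, args=(i,)) for i in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(winners))  # exactly 3, no matter how the 1000 threads interleave
```

However the threads are scheduled, the atomic check-and-decrement guarantees the winner count can never exceed the prize count.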
Put all the prizes into the cache first; run the flash sale entirely against the cache, and only write the results back to the database after it ends.
Your second-kill scenario looks a little simpler than ours. If you only need it to work now, without very rigorous scalability, I think it can be implemented simply as follows:
Assume you have optimized MySQL or upgraded the hardware so the database can survive 1000 concurrent stock checks. Then you only need one queue service: all the concurrent second-kill users enter the queue, the queue is drained one by one to filter out the three winning users, stock is deducted in MySQL when each order is generated, and the system responds to the front-end page so it can check whether the second kill succeeded. The other 997 users get a "second kill failed" notice on the front end.
What if you cannot afford the upgrade? Use a service such as Redis to keep the prize stock in the cache; when a second-kill order is generated, update the stock in both MySQL and Redis.
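The single-queue idea above can be sketched in-process with Python's standard library (a real deployment would use Redis or a message queue instead; the names here are illustrative):

```python
# All requests funnel into one queue; a single worker drains it and the
# first three entries win, so concurrency never touches the prize logic.
import queue
import threading

PRIZES = 3
requests = queue.Queue()
winners, losers = [], []

def worker():
    while True:
        user_id = requests.get()
        if user_id is None:           # sentinel: no more requests
            break
        if len(winners) < PRIZES:
            winners.append(user_id)   # generate the order, deduct stock here
        else:
            losers.append(user_id)    # respond "second kill failed"

t = threading.Thread(target=worker)
t.start()
for uid in range(1000):               # 1000 users, serialized by the queue
    requests.put(uid)
requests.put(None)
t.join()

print(len(winners), len(losers))      # 3 997
```

Because a single consumer processes the queue, there is no race on the winner list at all; the queue absorbs the burst.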
Since the question says only MySQL is available, I think you can do this:
1. Insert the three prizes as three rows in a table;
2. When the event starts and the high-concurrency traffic arrives, your random prize algorithm lets a user win by deleting one of the rows;
3. A request wins only if there was still a row to delete; once the table is empty, everyone else has lost.
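The steps above can be sketched with sqlite3 standing in for MySQL (table and column names are my own, purely illustrative): each attempt tries to delete one prize row in a single atomic statement, and wins only if a row was actually deleted.

```python
# Delete-one-row prize claim: the single DELETE statement is atomic, and
# the affected-row count tells the caller whether this request won.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE award (id INTEGER PRIMARY KEY)")
db.executemany("INSERT INTO award (id) VALUES (?)", [(1,), (2,), (3,)])
db.commit()

def try_win():
    cur = db.execute(
        "DELETE FROM award WHERE id = (SELECT MIN(id) FROM award)")
    db.commit()
    return cur.rowcount == 1   # 1 row deleted => this request won a prize

results = [try_win() for _ in range(1000)]
print(sum(results))  # 3 -- only three requests can ever delete a row
```

In MySQL the same effect comes from checking the affected-row count of the DELETE; once the table is empty the statement matches nothing and every later request loses.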
You can also do this:
1. Create a dedicated counter table for the winners;
2. When the event starts and the traffic arrives, your random prize algorithm lets a user win by incrementing the counter, refusing further wins once it reaches 3.
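The counter-table variant can be sketched the same way, again with sqlite3 standing in for MySQL and illustrative names: a conditional UPDATE increments the count only while it is below the prize limit, and the affected-row count says whether the increment (and hence the win) happened.

```python
# Conditional-increment counter: "UPDATE ... WHERE cnt < limit" either
# applies (you won) or matches nothing (prizes are gone).
import sqlite3

PRIZES = 3
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE win_counter (id INTEGER PRIMARY KEY, cnt INTEGER)")
db.execute("INSERT INTO win_counter (id, cnt) VALUES (1, 0)")
db.commit()

def try_win():
    cur = db.execute(
        "UPDATE win_counter SET cnt = cnt + 1 WHERE id = 1 AND cnt < ?",
        (PRIZES,))
    db.commit()
    return cur.rowcount == 1   # update applied => counter was still below 3

wins = sum(try_win() for _ in range(1000))
print(wins)  # 3
```

The guard lives inside the UPDATE itself, so no separate SELECT-then-UPDATE race is possible.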
As for optimizations (just what I can think of for now; additions are welcome):
1. Use the MEMORY storage engine for the table;
2. Set an expiry window at the program entrance. With only 3 prizes, 10 seconds is plenty; after it expires, close or redirect the entrance.
The simplest way is to funnel the 1000 concurrent requests into a queue and process them linearly. As soon as one request finds there is no stock left, all subsequent requests immediately return "second kill failed".
With only 1000 concurrent requests and this little data, why use the database at all? The database is still comparatively slow.
Have the server just record a timestamp for each request and sort by timestamp; users will not notice anything odd if the result arrives 1-2 seconds later.
Save the requests into a queue and treat only the first three entries as winners.
Use a MySQL MEMORY table. Randomly drop some of the requests up front, e.g. by hashing user_id.
Then update the memory table directly, for example:
```
UPDATE award SET user_id = xxx WHERE id = 1 AND user_id = 0
```
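This UPDATE is an optimistic claim: it only succeeds while user_id is still 0, and the affected-row count tells you whether this user actually got the prize. A minimal sketch, with sqlite3 standing in for the MySQL MEMORY table:

```python
# Optimistic prize claim: "UPDATE ... WHERE user_id = 0" either claims the
# row (rowcount 1) or finds it already taken (rowcount 0).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE award (id INTEGER PRIMARY KEY, user_id INTEGER)")
db.executemany("INSERT INTO award VALUES (?, 0)", [(1,), (2,), (3,)])
db.commit()

def claim(prize_id, user_id):
    cur = db.execute(
        "UPDATE award SET user_id = ? WHERE id = ? AND user_id = 0",
        (user_id, prize_id))
    db.commit()
    return cur.rowcount == 1   # 0 rows => someone else already claimed it

print(claim(1, 42))   # True: prize 1 was free
print(claim(1, 99))   # False: prize 1 already belongs to user 42
```

In MySQL you would read the affected-row count of the UPDATE the same way; no explicit locking is needed because the WHERE clause makes the claim atomic.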