The product price increases with each flash-sale purchase: for example, the first buyer pays two hundred, the second three hundred, the third four hundred, and so on.
When updating other tables, such as the inventory table, I use a row lock to prevent overselling, and when inserting the purchase record I use a unique index to keep the data accurate. The problem is that the flash-sale success rate ends up very low.
How can I keep the data accurate and consistent while still getting reasonably high throughput on these inserts and updates? I have never built this kind of application before, so I would appreciate some ideas.
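In other words, the write path is roughly the following (a minimal sketch in Go with database/sql against MySQL; the table and column names are placeholders, not my real schema):

```go
package seckill

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

// buy sketches the current approach: a row lock on the inventory row
// (SELECT ... FOR UPDATE) plus a unique index on (activity_id, user_id)
// in the order table. All names are placeholders.
func buy(db *sql.DB, activityID, userID int64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// Row lock: other buyers of the same item block here until we commit.
	var stock, sold int
	if err := tx.QueryRow(
		`SELECT stock, sold FROM inventory WHERE activity_id = ? FOR UPDATE`,
		activityID).Scan(&stock, &sold); err != nil {
		return err
	}
	if stock <= 0 {
		return fmt.Errorf("sold out")
	}

	// Price rises with each sale: 200 for the first buyer, 300 for the second, ...
	price := 200 + 100*sold

	// The unique index rejects a second purchase by the same user.
	if _, err := tx.Exec(
		`INSERT INTO seckill_order (activity_id, user_id, price) VALUES (?, ?, ?)`,
		activityID, userID, price); err != nil {
		return err
	}
	if _, err := tx.Exec(
		`UPDATE inventory SET stock = stock - 1, sold = sold + 1 WHERE activity_id = ?`,
		activityID); err != nil {
		return err
	}
	return tx.Commit()
}
```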
Reply content:
Flash sales have only ever had one problem, since ancient times:
whether the machines and the code can handle the flash-sale load.
If they can, just run it as is. If they cannot, reach for various optimizations:
1. Before the sale, have users register for it.
The point of registration is to filter out, ahead of time, the users who will actually take part. That group is usually fairly small, so putting their data in a dedicated table makes lookups much faster (a small sketch of this filter follows after the list).
2. Put a high-speed queue written in C or C++ in front of the backend database to take the write pressure off it while still guaranteeing data correctness (see the queue sketch after this list).
3. Use a time-slicing scheme: for five seconds the database does nothing but append incoming flash-sale requests to a log, and accepts no other operations, not even result queries. After the five seconds, writes are closed and the log is de-duplicated and validated; during this settlement phase no new flash-sale records are accepted. Once settlement is done, the results are returned to the clients, and the next five-second write window opens. This works especially well when the number of participants is far larger than the number of items (the batch-window sketch after this list shows the idea).
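For point 1, a rough sketch of the pre-registration filter, assuming the registrations have been copied into a dedicated table; the table name seckill_registration and the helper are illustrative, not part of the original answer:

```go
package seckill

import "database/sql"

// LoadRegistered is called once before the sale starts. It copies the user IDs
// from the dedicated registration table into an in-memory set, so each incoming
// request can be accepted or rejected without touching the main tables at all.
func LoadRegistered(db *sql.DB, activityID int64) (map[int64]bool, error) {
	rows, err := db.Query(
		`SELECT user_id FROM seckill_registration WHERE activity_id = ?`, activityID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	registered := make(map[int64]bool)
	for rows.Next() {
		var uid int64
		if err := rows.Scan(&uid); err != nil {
			return nil, err
		}
		registered[uid] = true
	}
	return registered, rows.Err()
}
```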
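For point 2, the answer suggests C or C++; the sketch below shows the same idea in Go instead, with a buffered channel as the in-memory queue and a single worker draining it to the database. Table and function names are illustrative:

```go
package seckill

import (
	"database/sql"
	"log"
)

// Request is one accepted flash-sale attempt.
type Request struct {
	ActivityID int64
	UserID     int64
}

// queue decouples request acceptance from database writes: web handlers only do
// a non-blocking channel send, and a single worker drains the channel at a pace
// the database can sustain.
var queue = make(chan Request, 100000)

// Enqueue returns false when the queue is full, so the caller can immediately
// answer "try again later" instead of piling more load onto the database.
func Enqueue(r Request) bool {
	select {
	case queue <- r:
		return true
	default:
		return false
	}
}

// Worker is the only goroutine that writes flash-sale requests to the database,
// so correctness checks happen without lock contention.
func Worker(db *sql.DB) {
	for r := range queue {
		// INSERT IGNORE relies on the existing unique index to drop duplicates.
		if _, err := db.Exec(
			`INSERT IGNORE INTO seckill_order (activity_id, user_id) VALUES (?, ?)`,
			r.ActivityID, r.UserID); err != nil {
			log.Printf("seckill insert failed for user %d: %v", r.UserID, err)
		}
	}
}
```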
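For point 3, a rough sketch of a single write window: collect requests for five seconds, then stop intake and settle the batch. This version keeps the log in memory rather than in the database, and every name in it is made up for illustration:

```go
package seckill

import "time"

// Outcome is the settled result for one user in a single write window.
type Outcome struct {
	UserID int64
	Won    bool
	Price  int
}

// RunWindow implements one cycle of the time-slicing idea: for five seconds the
// only thing that happens is appending incoming user IDs to a log; then intake
// stops and the log is de-duplicated, checked against the stock, and turned
// into results that can be sent back to the clients.
func RunWindow(requests <-chan int64, stock int) []Outcome {
	deadline := time.After(5 * time.Second)
	var batch []int64

	// Write phase: append only, nothing else runs against the data.
collect:
	for {
		select {
		case uid := <-requests:
			batch = append(batch, uid)
		case <-deadline:
			break collect
		}
	}

	// Settle phase: de-duplicate, check validity, assign the rising price.
	seen := make(map[int64]bool)
	price := 200
	var outcomes []Outcome
	for _, uid := range batch {
		if seen[uid] {
			continue // duplicate submission from the same user, drop it
		}
		seen[uid] = true
		if stock > 0 {
			outcomes = append(outcomes, Outcome{UserID: uid, Won: true, Price: price})
			price += 100 // price rises with each successful purchase
			stock--
		} else {
			outcomes = append(outcomes, Outcome{UserID: uid, Won: false})
		}
	}
	return outcomes
}
```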
The flash-sale flow itself can also be reworked:
1. During the sale, only insert the flash-sale records, and stop accepting them after a fixed period (there may well end up being more records than actual stock). At this stage the user is only told that the submission succeeded, not the final result.
2. A background process then walks the flash-sale records, decides which ones succeeded and at what price based on the remaining stock, and updates the inventory. Once it finishes, the user can look up the final outcome in their flash-sale record.
The advantage of doing it this way is that there are no concurrency problems, the data stays accurate, and the user still gets the flash-sale result within a short time. A rough sketch of this two-step flow follows.
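A minimal sketch of the settlement step run by the background job, assuming the raw records were inserted with a "pending" status during the sale; table names, column names, and statuses are placeholders:

```go
package seckill

import "database/sql"

// Settle runs after the sale closes. It walks the raw flash-sale records in
// submission order, decides which ones actually won based on the remaining
// stock, assigns the rising price, and updates the inventory. Users poll their
// record's status to see the final result.
func Settle(db *sql.DB, activityID int64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	var stock, sold int
	if err := tx.QueryRow(
		`SELECT stock, sold FROM inventory WHERE activity_id = ? FOR UPDATE`,
		activityID).Scan(&stock, &sold); err != nil {
		return err
	}

	// Collect the pending records first, then update them one by one.
	rows, err := tx.Query(
		`SELECT id FROM seckill_order WHERE activity_id = ? AND status = 'pending' ORDER BY id`,
		activityID)
	if err != nil {
		return err
	}
	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			rows.Close()
			return err
		}
		ids = append(ids, id)
	}
	rows.Close()

	for _, id := range ids {
		if stock > 0 {
			price := 200 + 100*sold // first winner pays 200, next 300, and so on
			if _, err := tx.Exec(
				`UPDATE seckill_order SET status = 'won', price = ? WHERE id = ?`, price, id); err != nil {
				return err
			}
			stock--
			sold++
		} else {
			if _, err := tx.Exec(
				`UPDATE seckill_order SET status = 'lost' WHERE id = ?`, id); err != nil {
				return err
			}
		}
	}

	if _, err := tx.Exec(
		`UPDATE inventory SET stock = ?, sold = ? WHERE activity_id = ?`,
		stock, sold, activityID); err != nil {
		return err
	}
	return tx.Commit()
}
```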