Problem:
The app has a points system: the user sends a request to the server to record a check-in. Sometimes the app glitches and sends more than one request. Although there is a validation check, multiple concurrent requests all pass it (because none of them has yet written the check-in record to the database), so the same record ends up in the database twice. How can this be solved?
Current Solutions:
select ... for update
Locking at the database level works, but it just pushes the problem down to the database. How should a problem like this be solved?
Supplementary questions
Using a cache (Redis) to block duplicate requests helps, but it cannot completely solve the problem:
// Block duplicate requests: build a key from the request data
$forbidsameRequest = md5(implode('', $reqDatas));
if (Cache::has($forbidsameRequest)) {
    Log::info("the same request", $reqDatas);
    return Util::returnError();
}
// Note: has() followed by put() is not atomic, so two concurrent
// requests can both get past the check above.
Cache::put($forbidsameRequest, true, $this->_forbiddenTime);
// ... business logic omitted ...
// Release the key after processing
Cache::forget($forbidsameRequest);
Reply content:
Problem:
In the app there are points system, the user will send a request to the server to record the user's check-in information, but the app convulsions will send more than one request, the result is that although there is verification check, However, multiple requests pass through this verification (multiple requests have not yet written the check-in information to the database), resulting in the same two occurrences of the same record in the database. How to solve this problem
Current Solutions:
select ... for update
The way to lock the database can be solved, but this throws the problem to the database, how to solve such a problem?
Supplementary questions
Use caching (Redis) to mask the same request, but it cannot be completely resolved:
//屏蔽同一请求// 请求数据生成key$forbidsameRequest = md5(implode('',$reqDatas));if (Cache::has($forbidsameRequest)){ Log::info("the same request", $reqDatas); return Util::returnError();}Cache::put($forbidsameRequest, true, $this->_forbiddenTime);省略逻辑处理...// 处理后释放keyCache::forget($forbidsameRequest);
This is a typical anti-replay attack scenario.
Have the client send a random string (a state token) with each request; it can also be generated by a fixed rule, or even requested from the server in advance. On the server side (implemented as a filter/interceptor so it does not touch the business code), cache the token for a certain amount of time (sized to your business and hardware). Each request checks whether the state value already exists in the cache (or whether it conforms to the rule, or was issued by the server); if it does, discard the request or return a special response, and handle only the first accepted request normally.
It is important that checking the token and caching it happen as a single atomic operation.
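A minimal sketch of that atomic check-and-set, in Python rather than PHP for brevity. With Redis you would issue a single SET key value NX EX ttl command; here a lock-protected in-memory set stands in for the cache, and all names are illustrative:

```python
import threading

class RequestDeduplicator:
    """In-memory stand-in for an atomic cache check-and-set.
    With Redis, a single SET ... NX EX command plays this role."""
    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def first_time(self, state: str) -> bool:
        # Check and insert under one lock, so two concurrent requests
        # carrying the same state token can never both pass.
        with self._lock:
            if state in self._seen:
                return False
            self._seen.add(state)
            return True

dedup = RequestDeduplicator()
print(dedup.first_time("abc123"))  # True: first request is accepted
print(dedup.first_time("abc123"))  # False: the replay is rejected
```

The key point is that the "has it been seen?" check and the "mark it seen" write cannot interleave, which is exactly what the has()/put() pair in the question fails to guarantee.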
Out of curiosity, why does the app send the request twice at all? Could you try a simple check on the app side?
Check-in information can be recorded in the Redis cache; the validation logic then reads from Redis, and the check-in data is written to the database asynchronously afterwards.
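A sketch of that "validate against the cache, persist asynchronously" idea, with a dict standing in for Redis and a list for the database table; all names are illustrative, not from the original post:

```python
import queue
import threading

cache = {}                    # stand-in for Redis: uid -> checked in?
cache_lock = threading.Lock() # keeps the check-and-set atomic
db_writes = queue.Queue()     # pending asynchronous database writes
database = []                 # stand-in for the check-ins table

def check_in(uid: int) -> bool:
    with cache_lock:          # validation reads the cache, not the DB
        if cache.get(uid):
            return False      # already checked in
        cache[uid] = True
    db_writes.put(uid)        # the slow INSERT happens off the request path
    return True

def db_writer():
    while True:
        uid = db_writes.get()
        if uid is None:       # sentinel: stop the writer
            break
        database.append(uid)  # the real INSERT would go here

t = threading.Thread(target=db_writer)
t.start()
print(check_in(42))  # True: first check-in accepted
print(check_in(42))  # False: cache already has the record
db_writes.put(None)
t.join()
print(database)      # [42]: only one row reaches the database
```

The request path only ever touches the cache, so duplicates are caught quickly; the database sees exactly one write per check-in.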
Which fields does your table use to record check-ins? If it is something like a user ID (uid) plus a timestamp or date, you can put a unique key on those two columns.
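A sketch of the unique-key approach using SQLite (any database with unique constraints behaves the same way); the table and column names are assumptions, and it assumes a granularity of one check-in per user per day:

```python
import sqlite3

# The database itself rejects the duplicate insert, so even fully
# concurrent requests cannot create two records.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE checkins (
        uid          INTEGER NOT NULL,
        checkin_date TEXT    NOT NULL,
        UNIQUE (uid, checkin_date)
    )
""")

def record_checkin(uid: int, day: str) -> bool:
    """Return True if the check-in was recorded, False if it already existed."""
    try:
        conn.execute(
            "INSERT INTO checkins (uid, checkin_date) VALUES (?, ?)",
            (uid, day),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:  # unique constraint violated
        return False

print(record_checkin(42, "2016-05-01"))  # True: first check-in succeeds
print(record_checkin(42, "2016-05-01"))  # False: duplicate is rejected
```

This is the most robust of the proposed fixes, because the constraint holds no matter how many application servers are racing.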
In this case you can limit per-IP concurrency with Nginx's leaky-bucket rate limiting: limit_req_zone defines the shared zone and rate, and limit_req applies it to a location. For example, you can allow each IP one request per second with a burst of 5: if 5 concurrent requests arrive in one second, 1 is processed immediately and the other 4 are delayed.
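A minimal sketch of that nginx configuration; the zone name checkin, the zone size, and the location path are placeholders, not from the original post:

```nginx
http {
    # One request per second per client IP; 10 MB of shared zone state.
    limit_req_zone $binary_remote_addr zone=checkin:10m rate=1r/s;

    server {
        location /api/checkin {
            # Allow a burst of 5; excess requests are delayed
            # (leaky-bucket behavior, since nodelay is not set).
            limit_req zone=checkin burst=5;
            proxy_pass http://app_backend;
        }
    }
}
```

Note this only throttles request volume; it does not by itself guarantee deduplication, so it complements rather than replaces the token or unique-key approaches.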
Or put all requests into a single queue and process them sequentially.
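A sketch of the single-queue idea: one worker drains the queue in order, so the duplicate check and the write can never interleave. The names and the in-memory stand-ins are illustrative:

```python
import queue
import threading

requests = queue.Queue()  # all check-in requests funnel through here
recorded = set()          # stands in for the check-ins table
accepted = []

def worker():
    while True:
        uid = requests.get()
        if uid is None:          # sentinel: stop the worker
            break
        if uid not in recorded:  # sequential, so this check is race-free
            recorded.add(uid)
            accepted.append(uid)
        requests.task_done()

t = threading.Thread(target=worker)
t.start()
for uid in [42, 42, 7]:          # user 42 "double-submits"
    requests.put(uid)
requests.put(None)
t.join()
print(accepted)  # [42, 7]: the duplicate was dropped
```

Serializing everything through one consumer trades throughput for simplicity; for a check-in endpoint that is usually an acceptable trade.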