Suppose I have stored such a HASH in Redis:
HSET player Mike "{\"height\":180,\"isAlive\":true}"
Now assume there are two concurrent operations in PHP, each of which does HGET on the JSON string, json_decodes it into a PHP array, modifies a value, then json_encodes it and HSETs it back.
For example:
One operation sets isAlive to false,
The other operation changes height to 181.
An unexpected interleaving is likely: the second operation HGETs the data before the first operation has HSET its result back.
Oh, this is terrible.
The first operation has killed the player, and the second operation has revived him...
Now imagine even more concurrent operations on even more values...
Is there any good way to ensure the atomicity of the JSON section?
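The lost update described above can be reproduced with a minimal sketch. This is a toy in-memory stand-in for the Redis hash (the real code would use a PHP Redis client); the `hget`/`hset` helpers here are just illustrative, not a real client API:

```python
import json

# In-memory stand-in for the Redis hash from the question.
store = {"player": {"Mike": json.dumps({"height": 180, "isAlive": True})}}

def hget(key, field):
    return store[key][field]

def hset(key, field, value):
    store[key][field] = value

# Both operations read the same snapshot before either writes back.
snap_a = json.loads(hget("player", "Mike"))   # operation A reads
snap_b = json.loads(hget("player", "Mike"))   # operation B reads (stale soon)

snap_a["isAlive"] = False                     # A kills the player
hset("player", "Mike", json.dumps(snap_a))

snap_b["height"] = 181                        # B writes its stale snapshot back
hset("player", "Mike", json.dumps(snap_b))

final = json.loads(hget("player", "Mike"))
print(final)  # isAlive is True again: A's update was silently overwritten
```

Operation B never saw A's write, so its HSET resurrects the player exactly as the question describes.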
Reply content:
The problem you mention does exist: it is the classic read-modify-write race. Redis guarantees the atomicity of each individual command, but not the atomicity of a sequence of commands. One approach is the MULTI and WATCH commands that Redis provides. The usage is as follows:
1. WATCH the key you want to read
2. Start a transaction with MULTI
3. Read the key's content
4. Modify the value
5. Write the key's content back
6. EXEC commits the transaction; if the key's value changed between the WATCH and the EXEC, the transaction fails with an error.
The above is the strongest atomicity that Redis transactions provide, but by itself it does not help with your problem: commands queued inside a transaction only return their results after the transaction commits, yet your modification must be computed from the result of the read. If the read happens inside the transaction, you cannot obtain the original content before committing, so you cannot compute the update.
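The way WATCH is normally used avoids this trap: read the value after WATCH but outside MULTI, compute the new value in application code, then MULTI/HSET/EXEC, and retry the whole sequence whenever EXEC fails. A sketch of that optimistic-locking loop, using a toy version counter to stand in for WATCH's modification check (a real PHP client such as phpredis exposes the actual WATCH/MULTI/EXEC commands):

```python
import json

# Toy model: the hash field plus a version counter that bumps on every
# write, mimicking how WATCH detects a concurrent modification.
store = {"Mike": json.dumps({"height": 180, "isAlive": True})}
version = {"Mike": 0}

def hset(field, value):
    store[field] = value
    version[field] += 1

def update_with_cas(field, mutate):
    """Read-modify-write with optimistic locking and retry."""
    while True:
        watched = version[field]           # WATCH the key
        doc = json.loads(store[field])     # read OUTSIDE the transaction
        mutate(doc)                        # compute the change in app code
        if version[field] == watched:      # MULTI ... EXEC succeeds only if
            hset(field, json.dumps(doc))   # nobody wrote in between
            return doc
        # EXEC failed: someone else wrote; loop and retry with a fresh read

update_with_cas("Mike", lambda d: d.update(isAlive=False))  # operation A
update_with_cas("Mike", lambda d: d.update(height=181))     # operation B
final = json.loads(store["Mike"])
print(final)  # both updates survive: isAlive False, height 181
```

Because each loser of the race re-reads before retrying, both changes land, at the cost of occasional retries under contention.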
Another approach is to split the JSON document across the fields of a Redis hash and update each field directly with HSET, so that concurrent updates to different fields never conflict. However, if an update must still be decided based on a field's previous value, you are back to the same read-modify-write problem as above, and nothing has changed.
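The flattened layout can be sketched as follows. The key name `player:Mike` is an assumption for illustration; each attribute becomes its own hash field, so a single HSET updates it atomically with no read step:

```python
# In-memory stand-in for a Redis hash with one field per attribute
# (key name "player:Mike" is assumed for this example).
store = {}

def hset(key, field, value):
    store.setdefault(key, {})[field] = value

# Initial state: HSET player:Mike height 180 isAlive true
hset("player:Mike", "height", "180")
hset("player:Mike", "isAlive", "true")

# Each concurrent update now writes only its own field: no shared JSON
# blob, no read-modify-write, so the two operations cannot clobber
# each other.
hset("player:Mike", "isAlive", "false")   # operation A
hset("player:Mike", "height", "181")      # operation B
```

The trade-off is that values are now flat strings: nested structures, and any update that depends on the old value, still need WATCH or a Lua script.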
Redis is a single-threaded model. Don't worry about this problem.
The HGET and HSET operations can be wrapped in a transaction.