Suppose I have a hash like this stored in Redis:
HSET player Mike "{\"height\":180,\"isAlive\":true}"
Now assume two concurrent PHP operations. Each one HGETs this JSON string, json_decodes it into a PHP array, modifies one of the values, then json_encodes it and HSETs it back. For example:
One operation sets isAlive to false.
Another operation changes height to 181.
Each operation looks roughly like the sketch below.
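For concreteness, here is a minimal sketch of that naive read-modify-write cycle, assuming the phpredis extension (the connection details are just illustrative):

<?php
// Naive read-modify-write: NOT safe under concurrent writers.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// 1. Read the JSON string for Mike and decode it.
$data = json_decode($redis->hGet('player', 'Mike'), true);

// 2. Modify one value locally.
$data['isAlive'] = false;   // the other operation would instead set $data['height'] = 181

// 3. Encode and write the whole JSON back, overwriting whatever is stored now.
$redis->hSet('player', 'Mike', json_encode($data));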
There is an obvious race here: the second operation may HGET the data before the first operation has HSET its result back.
Oh... this is bad.
The first operation has already killed the player, and then the second operation revives him when it writes back the stale copy it read earlier.
Now imagine even more values being updated concurrently...
Is there a good way to guarantee the atomicity of updates to this JSON?
Reply content:
The problem you describe really exists; it is the classic read-modify-write race. Redis guarantees the atomicity of each individual command, but not of a sequence of commands. The mechanism Redis provides for this is the MULTI and WATCH commands, used as follows:
1. WATCH the key you are about to read.
2. MULTI to open a transaction.
3. Read the contents of the key.
4. Modify the value.
5. Write the new value back to the key.
6. EXEC to commit the transaction; if the key's value changed between steps 1 and 6, the EXEC fails.
That is the strongest atomicity Redis can give you, but it does not directly help with your problem: commands queued inside a transaction do not return their results until the transaction commits, while your modification has to be computed from the value read in the previous step. Until the transaction commits you cannot see the original content, so you cannot build the update from it.
An alternative is not to store the object as a single JSON string, but to put its fields into a Redis hash. Updating one field is then a single, direct HSET, so there is no concurrency problem for independent fields (see the sketch below). But if you still need to compute the new value based on the original one, the situation is the same as described above and nothing changes.
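To illustrate the per-field layout, a minimal sketch assuming the phpredis extension and a hypothetical key name player:Mike (not part of the original example):

<?php
// Store each attribute as its own hash field instead of one JSON blob.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Initial state, equivalent to: HSET player:Mike height 180 isAlive 1
$redis->hMSet('player:Mike', ['height' => 180, 'isAlive' => 1]);

// Each concurrent update now touches only its own field, and a single
// HSET is atomic, so the two writers can no longer clobber each other.
$redis->hSet('player:Mike', 'isAlive', 0);   // operation A: kill the player
$redis->hSet('player:Mike', 'height', 181);  // operation B: change the height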
Redis uses a single-threaded model, so you don't have to worry about this problem.
The HGET and HSET pair of operations can be controlled with a transaction, as sketched below.
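One common way to wire that up with the phpredis extension is optimistic locking: WATCH the key, read and compute the new value outside the transaction, then queue the write inside MULTI and retry if EXEC reports that the watched key changed. The retry loop and connection details below are illustrative assumptions, not part of the original replies:

<?php
// Optimistic locking with WATCH / MULTI / EXEC (phpredis).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

do {
    // Watch the hash key so EXEC aborts if anyone else writes it first.
    $redis->watch('player');

    // Read and decode outside the transaction, so the value is available now.
    $data = json_decode($redis->hGet('player', 'Mike'), true);
    $data['isAlive'] = false;   // compute the update from the value just read

    // Queue the write and commit; exec() returns FALSE if 'player' was modified
    // by someone else after the WATCH, in which case we retry with fresh data.
    $result = $redis->multi()
                    ->hSet('player', 'Mike', json_encode($data))
                    ->exec();
} while ($result === false);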