When multiple threads read the same document from ES at the same time and each writes its change back, the writes may arrive in a different order than the reads, so a later write can silently overwrite an earlier modification. This kind of concurrency conflict is unacceptable in some scenarios, such as inventory changes in e-commerce.
How can this concurrency conflict be solved in ES?
Optimistic lock concurrency control with the _version version number
When a document is created in ES, its _version starts at 1, and every subsequent modification or deletion increments _version by 1. If you delete a document and then index a new document with the same ID, the version number continues from the old value instead of being reset to 1. This shows that the document is not immediately physically deleted: its version information is kept around for a while and is only purged at some later point.
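As a quick illustration (the index, type, and field names here are made up for the example), creating a document returns _version 1, and updating it again returns _version 2:

PUT /test_index/test_type/1
{ "test_field": "first write" }

The response contains "_version": 1. Repeating the PUT with a new body for the same ID returns "_version": 2.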
Internally, ES handles many requests, such as primary-to-replica synchronization, asynchronously and with multiple threads, so several modification requests for the same document can arrive out of order. The _version optimistic lock is used to control this concurrent processing. If a later modification arrives first and succeeds, _version is incremented by 1; when the earlier modification then arrives, ES detects that it carries a stale version and simply discards the request. If the requests arrive in the normal order, each modification is applied on top of the previous one and _version is incremented again (at that point _version may be 3, built on the previously modified state).
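As a hedged sketch of how a client can use this mechanism (again with made-up names), the version read earlier is passed along with the update, and ES rejects the write if the stored _version no longer matches:

PUT /test_index/test_type/1?version=1
{ "test_field": "updated by client A" }

If the document's current _version is no longer 1, the request fails with a version conflict (HTTP 409) instead of silently overwriting the newer data.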
ES also provides an optimistic concurrency control scheme based on an externally maintained version number, which can be used instead of the internal _version. For example:
?version=1&version_type=external
The difference from the internal _version is this: with the internal mechanism, an update that carries _version=1 succeeds only if the document's current _version is still exactly 1; with external versioning, the update succeeds only if the version supplied in the request is greater than the document's current _version.
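A sketch with the same made-up names: the version supplied here comes from an external system (for example a database record version), and ES accepts the write only if it is greater than the stored _version, then records it as the new version:

PUT /test_index/test_type/1?version=5&version_type=external
{ "test_field": "synced from external system" }

If the current _version is already 5 or higher, the request fails with a version conflict.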
Pessimistic lock concurrency control
1. Global lock: lock the entire index through a single doc
A thread creates the lock doc before it starts its operations, for example:
PUT /lockindex/locktype/global/_create
{}
If another thread wants to perform a related update at the same time, executing the same request above will fail with an error, because the lock doc already exists. Only after the holding thread deletes the corresponding doc can another thread acquire the lock on it and carry out its own series of operations.
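A rough sketch of that flow, following the names above: the second thread's identical PUT /lockindex/locktype/global/_create returns a conflict error (HTTP 409) because the doc already exists, and the thread holding the lock releases it when it finishes:

DELETE /lockindex/locktype/global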
This approach is simple to operate, but it locks the entire index, so the concurrency of the whole system is low.
2. Document lock: a finer-grained lock
It needs to be implemented with a script:
POST /fs/lock/1/_update
{
  "upsert": { "process_id": 123 },
  "script": "if ( ctx._source.process_id != process_id ) { assert false }; ctx.op = 'noop';",
  "params": {
    "process_id": 123
  }
}
process_id is important: it records in the lock doc the ID of the process that holds the lock, so that when another process comes along we know the data has already been locked by someone else.
assert false: if the lock is not held by the current process, an exception is thrown.
ctx.op = 'noop': make no changes to the doc.
params contains the process_id: the unique ID of the process that wants to perform the modification or deletion.
A process with the same process_id can go on modifying the doc, but if a process with a different process_id tries to modify a doc locked under another process_id, the assert false throws an error.
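One possible way to release the document lock once the process with process_id 123 has finished its work is simply to delete the lock doc (a sketch; a real system might instead delete every lock doc carrying its process_id):

DELETE /fs/lock/1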
3. Shared lock and exclusive lock
Shared lock: the data is shared; multiple threads can acquire a shared lock on the same data at once and then read it.
Exclusive lock: only one thread at a time can hold the exclusive lock and then perform an update.
Shared locks and exclusive locks are mutually exclusive: if a thread wants to modify the data, that is, acquire the exclusive lock, it must wait until all shared locks have been released before it can operate, and vice versa.
First a shared lock is added, and then other threads can read the data:
judge-lock-2.groovy: if (ctx._source.lock_type == 'exclusive') { assert false }; ctx._source.lock_count++
POST /fs/lock/1/_update
{
  "upsert": {
    "lock_type": "shared",
    "lock_count": 1
  },
  "script": {
    "lang": "groovy",
    "file": "judge-lock-2"
  }
}
If other threads also need to acquire the shared lock, they execute the same request as above, and the only effect is that lock_count is incremented by 1:
GET /fs/lock/1
{
  "_index": "fs",
  "_type": "lock",
  "_id": "1",
  "_version": 3,
  "found": true,
  "_source": {
    "lock_type": "shared",
    "lock_count": 3
  }
}
When a thread then tries to add an exclusive lock:
PUT /fs/lock/1/_create
{ "lock_type": "exclusive" }
an error will be returned, because the lock doc already exists.
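The exact error body depends on the ES version, but it is roughly a conflict response along these lines (a sketch, not verbatim output):

{
  "error": "... document already exists ...",
  "status": 409
}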
To release a shared lock:
POST /fs/lock/1/_update
{
  "script": {
    "lang": "groovy",
    "file": "unlock-shared"
  }
}
However many shared locks were acquired, the same number of unlock requests is needed to fully release them. Each unlock decrements lock_count by 1, and when it reaches 0 the lock doc /fs/lock/1 is deleted.
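A minimal sketch of what the unlock-shared Groovy script might contain, matching the behavior just described (the file name follows the example above; the exact script is an assumption, not taken from the source):

if ( --ctx._source.lock_count == 0 ) { ctx.op = 'delete' }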
Releasing the corresponding exclusive lock:
DELETE /fs/lock/1