For an online game with around 2k concurrent players, the load in every single second is frightening. The traditional approach of writing straight to the database through Hibernate is basically infeasible. Below I work out a lock-free database operation step by step.
1. How do we deal with locks under concurrency?
A very simple idea: convert concurrency into single-threaded execution. Java's Disruptor is a good example of this. If you build it with Java's concurrent collection classes, the principle is to start one thread that runs a queue: concurrent callers push tasks into the queue, and that thread polls the queue and executes the tasks one by one.
In this design, any concurrent workload becomes a single-threaded one, and it is very fast. Node.js, and the typical ARPG server, use exactly this design: the "loop" architecture.
Our original system therefore has two environments: the concurrent environment + the "loop" environment.
The concurrent environment is the traditional, locked environment, and its performance is low.
The "loop" environment is a single-threaded, lock-free environment; using the Disruptor gives it very strong performance.
2. How do we improve processing performance in the "loop" environment?
Once concurrency has been turned into a single thread, any performance problem in that one thread inevitably slows everything down. So nothing running in the single thread may involve I/O. What do we do about database operations, then?
Add a cache. The idea is simple: reads come straight from memory, which is inevitably fast. For writes and updates, apply the same idea: submit the operation to a queue and run a separate thread that drains the queue and writes to the database. This keeps all I/O out of the "loop".
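A rough sketch of that read/write split, where saveToDatabase is only a placeholder for whatever JDBC or Hibernate code actually persists the row:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedTable
{
    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();
    // One dedicated thread performs all database writes.
    private static final ExecutorService DB_WRITER = Executors.newSingleThreadExecutor();

    // Reads never touch the database.
    public Object get(String key)
    {
        return cache.get(key);
    }

    // Writes update memory immediately and are persisted asynchronously.
    public void put(final String key, final Object value)
    {
        cache.put(key, value);
        DB_WRITER.execute(new Runnable()
        {
            public void run()
            {
                saveToDatabase(key, value); // placeholder for the real persistence code
            }
        });
    }

    private void saveToDatabase(String key, Object value)
    {
        // JDBC / Hibernate code would go here.
    }
}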
But the problem reappears:
If our game had only the big loop, this would be easy to solve, because the loop gives us perfect synchronization without locks.
In reality, though, the game runs with the concurrent and "loop" environments coexisting, i.e. both of the environments above. So no matter how we design it, we inevitably end up putting a lock on the cache.
3. How can the concurrent and "loop" environments coexist while eliminating locks?
We know that to avoid locking inside the "loop", we hand the operation off to another thread asynchronously. Combining these two points, I changed the design of the database layer a little.
The original cache layer necessarily carries a lock, for example:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TableCache
{
    // A concurrent map keeps cache access correct in the concurrent environment.
    private final Map<String, Object> caches = new ConcurrentHashMap<String, Object>();
}
This structure is needed to keep the cache correct in a concurrent environment. But the "loop" cannot modify the cache directly, so a thread has to be started to perform the cache updates, for example:
private static final ExecutorService EXECUTOR = Executors.newSingleThreadExecutor();

// "logs" holds the pending changes that the worker will apply.
EXECUTOR.execute(new LatencyProcessor(logs));

class LatencyProcessor implements Runnable
{
    public void run()
    {
        // Here the in-memory data can be modified freely; the work runs asynchronously.
    }
}
OK, it looks very elegant. But then a problem appears: under high-speed access it is quite likely that the cache has not been updated yet when another request reads it, so that request gets stale data.
4. How do we ensure the concurrent environment always sees the single, correct copy of the data?
We know that if there are only reads and no writes, no lock is needed at all.
The trick I use is to add one more cache layer above the existing cache, as a "level-one cache"; the original cache then naturally becomes the "level-two cache". A bit like a CPU, right?
The level-one cache may only be modified by the "loop", but it can be read simultaneously by both the concurrent environment and the "loop", so it needs no lock.
When a database change occurs, there are two cases:
1) The change is made in the concurrent environment. Locks are allowed there, so it operates directly on the level-two cache. No problem.
2) The change is made in the "loop" environment. First we store the changed data in the level-one cache, then asynchronously apply the change to the level-two cache, and once that is done we delete the entry from the level-one cache.
This way, whichever environment reads the data, it checks the level-one cache first and only falls back to the level-two cache when the level-one cache has no entry.
This architecture keeps the in-memory data correct at all times.
And, just as important: we now have an efficient, lock-free space in which to implement any business logic we like.
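Putting the two levels together, a rough sketch might look like the following; the names are illustrative, and the level-one cache still uses a concurrent map purely so that values written by the loop thread are visible to other threads:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TwoLevelCache
{
    // Level-one cache: written only by the loop thread, readable by everyone.
    private final ConcurrentMap<String, Object> levelOne = new ConcurrentHashMap<String, Object>();
    // Level-two cache: the original cache used by the concurrent environment.
    private final ConcurrentMap<String, Object> levelTwo = new ConcurrentHashMap<String, Object>();
    private static final ExecutorService ASYNC = Executors.newSingleThreadExecutor();

    // Every read checks level one first, then falls back to level two.
    public Object get(String key)
    {
        Object value = levelOne.get(key);
        return value != null ? value : levelTwo.get(key);
    }

    // Case 1: a change made in the concurrent environment, where locks are allowed.
    public void putFromConcurrent(String key, Object value)
    {
        levelTwo.put(key, value);
    }

    // Case 2: a change made in the loop. Stage it in level one, correct level two
    // asynchronously, then drop the level-one entry once level two is up to date.
    public void putFromLoop(final String key, final Object value)
    {
        levelOne.put(key, value);
        ASYNC.execute(new Runnable()
        {
            public void run()
            {
                levelTwo.put(key, value);
                levelOne.remove(key, value); // only removes if a newer value has not replaced it
            }
        });
    }
}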
Finally, a few tips for improving performance further.
1. Since our database operations are already handled asynchronously, a large number of pending writes may pile up at any moment. By sorting them by table, primary key, and operation type we can drop operations that have become useless (see the sketch after this list). For example:
a) Multiple updates to the same primary key in the same table: only the last one matters.
b) For the same primary key in the same table, every earlier operation becomes invalid once a delete appears.
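A sketch of that merge pass, assuming simple illustrative PendingOp and OpType types rather than anything from a real library:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OpMerger
{
    enum OpType { INSERT, UPDATE, DELETE }

    static class PendingOp
    {
        final String table;
        final String primaryKey;
        final OpType type;
        final Object data;

        PendingOp(String table, String primaryKey, OpType type, Object data)
        {
            this.table = table;
            this.primaryKey = primaryKey;
            this.type = type;
            this.data = data;
        }
    }

    // Collapse a batch so that useless operations on the same (table, primary key) are dropped.
    static List<PendingOp> merge(List<PendingOp> batch)
    {
        Map<String, List<PendingOp>> byRow = new LinkedHashMap<String, List<PendingOp>>();
        for (PendingOp op : batch)
        {
            String rowKey = op.table + "#" + op.primaryKey;
            List<PendingOp> ops = byRow.get(rowKey);
            if (ops == null)
            {
                ops = new ArrayList<PendingOp>();
                byRow.put(rowKey, ops);
            }
            if (op.type == OpType.DELETE)
            {
                // b) a delete makes every earlier operation on this row pointless
                ops.clear();
                ops.add(op);
            }
            else if (op.type == OpType.UPDATE && !ops.isEmpty()
                    && ops.get(ops.size() - 1).type == OpType.UPDATE)
            {
                // a) repeated updates to the same row: only the last one matters
                ops.set(ops.size() - 1, op);
            }
            else
            {
                ops.add(op);
            }
        }
        List<PendingOp> merged = new ArrayList<PendingOp>();
        for (List<PendingOp> ops : byRow.values())
        {
            merged.addAll(ops);
        }
        return merged;
    }
}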
2. Since we want to sort the operations, we need an ordering, normally time-based. How do we keep even that lock-free? Use
private static final AtomicLong _seq = new AtomicLong(0);
which gives a lock-free, globally unique, self-incrementing value that can serve as the time sequence.
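For example, each operation can be stamped with this sequence as it is queued; the OpSequencer helper below is only an illustration:

import java.util.concurrent.atomic.AtomicLong;

public class OpSequencer
{
    // Lock-free, globally unique, monotonically increasing stamp.
    private static final AtomicLong _seq = new AtomicLong(0);

    // Call once per queued operation; sorting by the returned value
    // later restores submission order without any clock or lock.
    public static long nextSeq()
    {
        return _seq.incrementAndGet();
    }
}

The returned value can then be attached to each pending operation before it enters the write queue, so sorting by it reproduces the original submission order.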