import java.util.concurrent.atomic.AtomicReference;

public class CLHSpinLock {

    private final ThreadLocal<Node> pred;
    private final ThreadLocal<Node> node;
    private final AtomicReference<Node> tail = new AtomicReference<Node>(new Node());

    public CLHSpinLock() {
        this.node = new ThreadLocal<Node>() {
            protected Node initialValue() {
                return new Node();
            }
        };
        this.pred = new ThreadLocal<Node>() {
            protected Node initialValue() {
                return null;
            }
        };
    }

    public void lock() {
        final Node node = this.node.get();
        node.locked = true;
        Node pred = this.tail.getAndSet(node);
        this.pred.set(pred);
        while (pred.locked) {
            // busy-wait on the predecessor's flag only
        }
    }

    public void unlock() {
        final Node node = this.node.get();
        node.locked = false;
        this.node.set(this.pred.get());
    }

    private static class Node {
        private volatile boolean locked;
    }
}
The logic is not complex. To lock, a thread needs only a single CAS operation (getAndSet on tail) to append its node to the implicit queue, obtaining a reference to its predecessor's node in the same step; it then spins until the predecessor releases the lock. To unlock, the thread simply sets the locked field of its own node to false. The statement this.node.set(this.pred.get()) in the unlock method recycles the predecessor's node for the thread's next acquisition, a GC-friendly optimization: the thread's own node may still be observed by its successor, whereas the predecessor's node is guaranteed to be out of use. If you do not care about this optimization, this.node.set(new Node()) would work just as well.

Compared with the TAS (test-and-set) and TTAS (test-and-test-and-set) spin locks, the CLH spin lock's main advantage is reduced cache-coherence traffic: instead of all busy-looping threads contending on one shared flag, each spinning thread reads only the locked flag of its own predecessor. If false sharing between adjacent nodes is a concern, the locked state can be padded out to the length of a cache line. In addition, because the queue is FIFO, the CLH spin lock grants the lock fairly, in arrival order.
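To make the lock/unlock protocol concrete, here is a minimal, self-contained usage sketch: several threads increment an unprotected counter while holding a CLH lock equivalent to the one above (the class name CLHLockDemo, the run helper, and the thread/iteration counts are illustrative choices, not part of the original). Because unlock performs a volatile write to locked and lock performs a volatile read of the predecessor's locked, the increment in one critical section happens-before the next, so no updates are lost.

```java
import java.util.concurrent.atomic.AtomicReference;

public class CLHLockDemo {

    // Minimal CLH lock, mirroring the article's implementation.
    static class CLHSpinLock {
        private final ThreadLocal<Node> node = ThreadLocal.withInitial(Node::new);
        private final ThreadLocal<Node> pred = new ThreadLocal<>();
        private final AtomicReference<Node> tail = new AtomicReference<>(new Node());

        void lock() {
            Node n = node.get();
            n.locked = true;                 // announce intent to hold the lock
            Node p = tail.getAndSet(n);      // single CAS-style step: enqueue + get predecessor
            pred.set(p);
            while (p.locked) { }             // spin on the predecessor's flag only
        }

        void unlock() {
            node.get().locked = false;       // release: successor's spin loop exits
            node.set(pred.get());            // recycle the predecessor's node
        }

        private static class Node { volatile boolean locked; }
    }

    // Run `threads` threads, each doing `perThread` locked increments;
    // returns the final counter value (threads * perThread if the lock works).
    static int run(int threads, int perThread) throws InterruptedException {
        CLHSpinLock lock = new CLHSpinLock();
        int[] counter = {0};                 // deliberately non-atomic shared state
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock.lock();
                    try {
                        counter[0]++;        // protected by the CLH lock
                    } finally {
                        lock.unlock();
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 10_000)); // prints 40000: no lost updates
    }
}
```

Note that pure spinning like this is only appropriate when critical sections are very short; threads parked in the while loop burn a CPU core until their predecessor releases.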