Implementing a Java distributed lock across JVMs using ZooKeeper

Source: Internet
Author: User
Tags: zookeeper, client

Description: This article uses the Curator framework for explanation and demonstration. Curator is a wrapper around the ZooKeeper client. Because the raw ZooKeeper client is quite low-level, you have to write your own wrapper to implement a lock or other recipes. That is fine for simple features, but for something like a lock under high concurrency I do not recommend rolling your own, unless you are confident you can do better than the existing open-source implementations. If you are writing one to learn, reading the open-source code is still well worth it; there is a lot to be learned from it.

Zookeeper version is release 3.4.8 (Stable)

Curator version is 2.9.1

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.8</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.9.1</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>2.9.1</version>
</dependency>


Lock principle:

1. First, create a root node for the lock, for example /mylock.


2. A client that wants to acquire the lock creates a znode under the lock's root node, i.e. as a child of /mylock. The node type should be CreateMode.EPHEMERAL_SEQUENTIAL, and it is best to include a UUID in the node name (I will explain why later; if you skip this, a deadlock is possible in certain edge cases. Many home-grown implementations I have seen miss this point, which is exactly why I do not recommend wrapping this lock yourself, because it really is complex). Assuming three clients currently want the lock, the children of /mylock would look like this:

xxx-lock-0000000001,xxx-lock-0000000002,xxx-lock-0000000003

Here xxx is the UUID, and 0000000001, 0000000002, 0000000003 are the auto-incrementing sequence numbers generated by the ZooKeeper server.


3. The current client calls getChildren(/mylock) to obtain the list of all child nodes and sorts them by sequence number. It then checks whether the node it created has the smallest number in the list. If so, it has acquired the lock. If not, it finds the node immediately before its own and watches it for changes; when that node changes, the client re-executes step 3, repeating until its own number is the smallest.

Example: suppose the current client created node 0000000002. Its number is not the smallest, so it cannot acquire the lock; it therefore finds the node in front of it, 0000000001, and sets a watch on it.


4. To release the lock, the client that currently holds it deletes the node it created once its work is done. This fires a ZooKeeper event to the other clients, which then re-execute step 3.

Example: suppose client 0000000001 holds the lock, and client 0000000002 then comes in to acquire it. It finds that its number is not the smallest, so it watches the node in front of it (0000000001) and waits as in step 3. When client 0000000001 finishes its work and deletes its own node, the ZooKeeper server sends the event; client 0000000002 receives it, repeats step 3 and acquires the lock.


The steps above implement an ordered (fair) lock: the client that started waiting first is the one that acquires the lock when it becomes available. A minimal sketch of this acquire loop is shown below.
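To make the four steps concrete, here is a minimal, illustrative sketch of the acquire/release loop written directly against the raw ZooKeeper client. The class name NaiveZkLock, the ROOT constant and the latch-based waiting are my own assumptions for this example; it deliberately omits the edge-case handling (session loss, protection mode, interruption) that Curator's InterProcessMutex, used in the rest of this article, takes care of.

import java.util.Comparator;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class NaiveZkLock {

    private static final String ROOT = "/mylock"; // lock root node, created beforehand (step 1)
    private final ZooKeeper zk;
    private String ourPath; // full path of the znode this client created

    public NaiveZkLock(ZooKeeper zk) {
        this.zk = zk;
    }

    public void acquire() throws Exception {
        // Step 2: create an ephemeral sequential node whose name starts with a UUID.
        ourPath = zk.create(ROOT + "/" + UUID.randomUUID() + "-lock-",
                new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                CreateMode.EPHEMERAL_SEQUENTIAL);
        String ourName = ourPath.substring(ROOT.length() + 1);

        while (true) {
            // Step 3: list all children and sort them by the 10-digit sequence suffix.
            List<String> children = zk.getChildren(ROOT, false);
            children.sort(Comparator.comparing((String n) -> n.substring(n.length() - 10)));

            int index = children.indexOf(ourName);
            if (index == 0) {
                return; // our node has the smallest number: lock acquired
            }

            // Not the smallest: watch only the node immediately before ours.
            String predecessor = ROOT + "/" + children.get(index - 1);
            CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zk.exists(predecessor, event -> latch.countDown());
            if (stat != null) {
                latch.await(); // woken when the predecessor changes, then re-check
            }
            // If the predecessor is already gone, loop and re-evaluate immediately.
        }
    }

    public void release() throws Exception {
        // Step 4: delete our node; ZooKeeper then notifies the next waiter.
        zk.delete(ourPath, -1);
    }
}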

If you want a non-fair (random-order) lock instead, you only need to replace the auto-incrementing sequential suffix with a random number, as shown in the last example of this article.


Simple example:

package com.framework.code.demo.zook;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorDemo {

    public static void main(String[] args) throws Exception {
        // Retry mechanism for failed operations: retry up to 3 times, starting at a 1000 ms interval.
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        // Create the Curator client
        CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.1.18:2181", retryPolicy);
        // Start it
        client.start();
        /*
         * This class is thread-safe; one instance per JVM is enough.
         * /mylock is the root node of the lock; different businesses can use different root paths.
         */
        final InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
        try {
            // Blocking call: the thread that asks for the lock is suspended until it gets it.
            lock.acquire();
            System.out.println("acquired the lock");
            Thread.sleep(10000);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Release the lock in finally so it is released even if an exception occurs above.
            lock.release();
        }
        Thread.sleep(10000);
        client.close();
    }
}

The code above pauses for 10 seconds after acquiring the lock, so we can use a ZooKeeper client tool to look at the nodes that were created. Because I had already run several tests beforehand, the sequence numbers start from 12.




Simulating multiple clients (which can also be thought of as multiple JVMs):

Now wrap the code above in threads so that multiple clients can be simulated in one test.

public class CuratorDemo {

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++) {
            // Start 10 threads to simulate multiple clients
            JvmLock jl = new JvmLock(i);
            new Thread(jl).start();
            // Sleep 300 ms here so the threads start in order; otherwise thread 4 might
            // start before thread 3 and the test output would be misleading.
            Thread.sleep(300);
        }
    }

    public static class JvmLock implements Runnable {

        private int num;

        public JvmLock(int num) {
            this.num = num;
        }

        @Override
        public void run() {
            RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
            CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.142.128:2181", retryPolicy);
            client.start();
            InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
            try {
                System.out.println("I am thread " + num + ", I start to acquire the lock");
                lock.acquire();
                System.out.println("I am thread " + num + ", I have acquired the lock");
                Thread.sleep(10000);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    lock.release();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            client.close();
        }
    }
}

Through the client tool we can see that 10 lock-request nodes have been created.


Looking at the printed output, the thread that requested the lock first is the one that acquires it when it becomes available: the sequence numbers of the created nodes increase in the order the requests arrive, so the earliest requester holds the lowest number and therefore gets the lock first.

I am thread 0, I start to acquire the lock
I am thread 0, I have acquired the lock
I am thread 1, I start to acquire the lock
I am thread 2, I start to acquire the lock
I am thread 3, I start to acquire the lock
I am thread 4, I start to acquire the lock
I am thread 5, I start to acquire the lock
I am thread 6, I start to acquire the lock
I am thread 7, I start to acquire the lock
I am thread 8, I start to acquire the lock
I am thread 9, I start to acquire the lock
I am thread 1, I have acquired the lock
I am thread 2, I have acquired the lock
I am thread 3, I have acquired the lock
I am thread 4, I have acquired the lock
I am thread 5, I have acquired the lock
I am thread 6, I have acquired the lock
I am thread 7, I have acquired the lock
I am thread 8, I have acquired the lock
I am thread 9, I have acquired the lock


As for why a UUID is added to the node name, here is the explanation from the Curator documentation (in English):

It turns out there is an edge case that exists when creating sequential-ephemeral nodes. The creation can succeed on the server, but the server can crash before the created node name is returned to the client. However, the ZK session is still valid and the ephemeral node is not deleted. Thus, there is no way for the client to determine what node was created for them.

Even without sequential-ephemeral, however, the create can succeed on the server but the client (for various reasons) will not know it.

Putting the create builder into protection mode works around this. The name of the node that is created is prefixed with a GUID. If node creation fails the normal retry mechanism will occur. On the retry, the parent path is first searched for a node that has the GUID in it. If that node is found, it is assumed to be the lost node that was successfully created on the first try and is returned to the caller.

In other words: when the client creates a node, the creation may succeed on the ZooKeeper server, but the server may crash before the node's path is returned to the client. Because the client's session is still valid, the ephemeral node is not deleted, so the client cannot tell which node it actually created.

When the creation appears to fail, the client retries. If ZooKeeper is available at that point, the client queries all child nodes on the server and compares them against the UUID it generated. If a matching node is found, it was in fact created on the earlier attempt and can be used directly. Without the UUID there would be no way to recognize it, and the orphaned node would become a dead entry in the queue, causing a deadlock.
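As an illustrative sketch (the path and variable name are my own, not from the article), this is roughly what protection mode looks like on the Curator create builder, and what the resulting node name looks like:

// Protection mode prefixes the node name with a GUID so a "lost" create can be recovered.
String ourPath = client.create()
        .creatingParentContainersIfNeeded()
        .withProtection()                          // adds the _c_<GUID>- prefix
        .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
        .forPath("/mylock/lock-");
// ourPath ends up looking like: /mylock/_c_c8e86826-d3dd-46cc-8432-d91aed763c2e-lock-0000000025
// On a retried create, Curator first searches /mylock for a child containing the same GUID;
// if found, that node is returned instead of creating a new one.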


To implement a non-fair lock:

Override the method that creates the node:

package com.framework.code.demo.zook.lock;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver;
import org.apache.zookeeper.CreateMode;

public class NoFairLockDriver extends StandardLockInternalsDriver {

    /** Length of the random-number suffix */
    private int numLength;
    private static int DEFAULT_LENGTH = 5;

    public NoFairLockDriver() {
        this(DEFAULT_LENGTH);
    }

    public NoFairLockDriver(int numLength) {
        this.numLength = numLength;
    }

    @Override
    public String createsTheLock(CuratorFramework client, String path, byte[] lockNodeBytes) throws Exception {
        String newPath = path + getRandomSuffix();
        String ourPath;
        if (lockNodeBytes != null) {
            // The original driver uses CreateMode.EPHEMERAL_SEQUENTIAL nodes.
            // A node name then looks like _c_c8e86826-d3dd-46cc-8432-d91aed763c2e-lock-0000000025,
            // where 0000000025 is the auto-incrementing sequence generated by the ZooKeeper server,
            // starting from 0000000000. Because the numbers are assigned in ascending order
            // (0, 1, 2, 3 ...), locks are granted in arrival order: a fair lock.
            // Here we replace the ordered sequence with a random number.
            ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                    .withMode(CreateMode.EPHEMERAL).forPath(newPath, lockNodeBytes);
            // Original (fair) version:
            // ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
            //         .withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path, lockNodeBytes);
        } else {
            ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                    .withMode(CreateMode.EPHEMERAL).forPath(newPath);
            // Original (fair) version:
            // ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
            //         .withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path);
        }
        return ourPath;
    }

    /** Builds the random-number suffix as a String */
    public String getRandomSuffix() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < numLength; i++) {
            sb.append((int) (Math.random() * 10));
        }
        return sb.toString();
    }
}


Register the class we just wrote:

InterProcessMutex lock = new InterProcessMutex(client, "/mylock", new NoFairLockDriver());

Running the earlier multi-threaded example again with this driver, you can see from the results that the order in which the lock is acquired no longer matches the request order; we now have a non-fair lock.

I am thread 1, I start to acquire the lock
I am thread 0, I start to acquire the lock
I am thread 0, I have acquired the lock
I am thread 2, I start to acquire the lock
I am thread 3, I start to acquire the lock
I am thread 4, I start to acquire the lock
I am thread 5, I start to acquire the lock
I am thread 6, I start to acquire the lock
I am thread 7, I start to acquire the lock
I am thread 8, I start to acquire the lock
I am thread 9, I start to acquire the lock
I am thread 9, I have acquired the lock
I am thread 8, I have acquired the lock
I am thread 4, I have acquired the lock
I am thread 7, I have acquired the lock
I am thread 3, I have acquired the lock
I am thread 1, I have acquired the lock
I am thread 2, I have acquired the lock
I am thread 5, I have acquired the lock
I am thread 6, I have acquired the lock


