ZooKeeper API (Java) and Applications

ZooKeeper is a distributed service framework used mainly to solve consistency problems for application systems running in a distributed cluster. It provides data storage organized as a tree of directory nodes, much like a file system, but ZooKeeper is not intended as a general-purpose data store: its role is primarily to maintain and monitor changes in the state of the data you store in it. By watching these state changes, data-based cluster management can be built on top of it. The typical problems ZooKeeper can solve are described in detail later; first, this article introduces the ZooKeeper interface and a simple usage example.

List of common interfaces

A client connects to the ZooKeeper server by creating an instance of org.apache.zookeeper.ZooKeeper, and then calls the methods provided by this class to interact with the server.

As mentioned earlier, ZooKeeper is mainly used to maintain and monitor the state of data stored in a tree of directory nodes, so all of our operations on ZooKeeper are operations on that directory node tree: creating a directory node, setting data on a directory node, getting all children of a directory node, setting permissions on a directory node, and monitoring status changes of a directory node.

These interfaces are shown in the following table:


Table 1. org.apache.zookeeper.ZooKeeper method list

String create(String path, byte[] data, List<ACL> acl, CreateMode createMode)
    Creates a directory node at the given path and sets its data. CreateMode identifies four kinds of directory nodes: PERSISTENT, a persistent directory node whose data is not lost when the creating client disconnects; PERSISTENT_SEQUENTIAL, a sequentially auto-numbered persistent node, whose number is one greater than the current highest and whose full name is returned to the client after successful creation; EPHEMERAL, a temporary node that is deleted automatically once the session of the client that created it times out; and EPHEMERAL_SEQUENTIAL, a temporary auto-numbered node.

Stat exists(String path, boolean watch)
    Checks whether a path exists and optionally sets a watch on the directory node. The watcher used here is the one specified when the ZooKeeper instance was created; an overloaded exists method lets you specify a particular watcher.

Stat exists(String path, Watcher watcher)
    Overloaded method that sets a specific watcher on a directory node. Watcher is a core feature of ZooKeeper: a watcher can monitor changes to a node's data and to its children. Once such a change occurs, the server notifies all watchers set on that node, so each client quickly learns that the state of the node it is interested in has changed and can respond accordingly.

void delete(String path, int version)
    Deletes the directory node corresponding to path. A version of -1 matches any version and deletes all the data of this directory node.

List<String> getChildren(String path, boolean watch)
    Gets all child nodes under the specified path. getChildren likewise has an overloaded method that sets a specific watcher to monitor the state of the children.

Stat setData(String path, byte[] data, int version)
    Sets the data of the node at path. You can specify the expected version of the data; a version of -1 matches any version.

byte[] getData(String path, boolean watch, Stat stat)
    Gets the data stored at the node at path. The data version and other information are returned through stat, and you can also choose whether to watch the status of this node's data.

void addAuthInfo(String scheme, byte[] auth)
    The client submits its own authorization information to the server, and the server verifies the client's access rights based on this authorization information.

Stat setACL(String path, List<ACL> acl, int version)
    Resets the access permissions of a directory node. Note that directory node permissions in ZooKeeper are not transitive: the permissions of a parent node are not inherited by its children. A node's ACL consists of two parts, perms and id. Perms can be ALL, READ, WRITE, CREATE, DELETE, or ADMIN. The id identifies the list of identities allowed to access the node; by default there are two: ANYONE_ID_UNSAFE = new Id("world", "anyone"), meaning anyone can access, and AUTH_IDS = new Id("auth", ""), meaning the creator has access.

List<ACL> getACL(String path, Stat stat)
    Gets the access permission list of a directory node.

In addition to the methods listed in the table above, there are several overloaded variants, such as overloads that take an asynchronous callback object and overloads that accept a specific Watcher. For details, refer to the API documentation of the org.apache.zookeeper.ZooKeeper class.
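For instance, each read operation has an asynchronous overload that takes a callback object instead of blocking. Below is a minimal sketch of the asynchronous getData overload; the connection string, session timeout, and the /testRootPath node are placeholder assumptions, not part of the original article:

import org.apache.zookeeper.AsyncCallback.DataCallback;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class AsyncReadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; adjust for your environment
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        // Asynchronous overload of getData: returns immediately and delivers
        // the result to the callback on ZooKeeper's event thread
        zk.getData("/testRootPath", false, new DataCallback() {
            public void processResult(int rc, String path, Object ctx,
                                      byte[] data, Stat stat) {
                if (rc == 0) { // 0 is KeeperException.Code.OK
                    System.out.println("Data of " + path + ": " + new String(data));
                }
            }
        }, null);
        Thread.sleep(1000); // crude wait so the callback can fire before closing
        zk.close();
    }
}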

Basic Operations

Here is sample code for the basic ZooKeeper operations, so you can get a concrete feel for ZooKeeper. The listing below covers creating a connection to the ZooKeeper server and the most basic data operations:


ZooKeeper Basic Operation Example

// Create a connection to the ZooKeeper server
ZooKeeper zk = new ZooKeeper("localhost:" + CLIENT_PORT,
        ClientBase.CONNECTION_TIMEOUT, new Watcher() {
            // Monitor all triggered events
            public void process(WatchedEvent event) {
                System.out.println("Triggered the " + event.getType() + " event!");
            }
        });
// Create a directory node
zk.create("/testRootPath", "testRootData".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
// Create a child directory node
zk.create("/testRootPath/testChildPathOne", "testChildDataOne".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
System.out.println(new String(zk.getData("/testRootPath", false, null)));
// Get the list of child directory nodes
System.out.println(zk.getChildren("/testRootPath", true));
// Modify the child directory node's data
zk.setData("/testRootPath/testChildPathOne", "modifyChildDataOne".getBytes(), -1);
System.out.println("Directory node status: [" + zk.exists("/testRootPath", true) + "]");
// Create another child directory node
zk.create("/testRootPath/testChildPathTwo", "testChildDataTwo".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
System.out.println(new String(zk.getData("/testRootPath/testChildPathTwo", true, null)));
// Delete the child directory nodes
zk.delete("/testRootPath/testChildPathTwo", -1);
zk.delete("/testRootPath/testChildPathOne", -1);
// Delete the parent directory node
zk.delete("/testRootPath", -1);
// Close the connection
zk.close();

The results of the output are as follows:

Triggered the None event!
testRootData
[testChildPathOne]
Directory node status: [5,5,1281804532336,1281804532336,0,1,0,0,12,1,6]
Triggered the NodeChildrenChanged event!
testChildDataTwo
Triggered the NodeDeleted event!
Triggered the NodeDeleted event!

When the watch on a directory node is enabled, the process method of the Watcher object is called whenever the state of that node changes.

Typical application scenarios for ZooKeeper

From a design pattern perspective, ZooKeeper is a distributed service management framework based on the observer pattern: it stores and manages the data everyone cares about, accepts registrations from observers, and, once the state of that data changes, notifies the registered observers so they can respond accordingly. On this foundation, management modes such as master/slave can be implemented within a cluster. For the detailed architecture of ZooKeeper and other internals, read the ZooKeeper source code.

The following sections describe several typical scenarios in detail: what problems can ZooKeeper help us solve? The answers are given below.

Unified Naming Service (Name Service)

In distributed applications, a complete set of naming conventions is often required: names must be unique, yet easy to recognize and remember. A tree-shaped name structure, that is, a hierarchical directory structure, is usually the ideal choice, because it is both user-friendly and free of duplicates. Speaking of which, you might think of JNDI; indeed, ZooKeeper's name service covers much the same ground as JNDI, associating a hierarchical directory structure with resources. ZooKeeper's name service allows a broader kind of association, however: sometimes you do not need to associate a name with a specific resource at all, you may only need a name that is guaranteed not to be duplicated, much like a unique numeric primary key in a database.

The name service is a built-in feature of ZooKeeper; you only need to call the ZooKeeper API, for example the create interface, to easily create a directory node.
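For illustration, here is a minimal sketch, not from the original article, that uses a PERSISTENT_SEQUENTIAL node to obtain a unique, non-repeating name; the parent path /names and the connection string are assumed placeholders, and /names must already exist:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class NameServiceSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; adjust for your environment
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        // PERSISTENT_SEQUENTIAL appends a monotonically increasing counter,
        // so the returned path is guaranteed to be unique, like a primary key.
        // Assumes the parent node /names has been created beforehand.
        String name = zk.create("/names/id-", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println("Assigned unique name: " + name);
        zk.close();
    }
}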

Configuration Management (Config Management)

Configuration management is a common need in distributed application environments: the same application system runs on many servers, and some of the configuration items of those applications are identical. If you want to modify one of those shared configuration items, you have to modify it on every server running the application, which is tedious and error-prone.

Configuration information like this can be handed over to ZooKeeper: the configuration is stored in a ZooKeeper directory node, and every machine that needs it watches the status of that node. Once the configuration changes, each machine receives a notification from ZooKeeper and then fetches the new configuration from ZooKeeper into its own system.


Figure 2. Configuration management structure diagram
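As a hedged illustration of this pattern, the sketch below (not from the original attachment) watches a hypothetical configuration node /configuration, re-reads the data on every change notification, and re-registers the watch, since ZooKeeper watches fire only once:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ConfigWatcherSketch implements Watcher {
    private static final String CONFIG_PATH = "/configuration"; // hypothetical node
    private final ZooKeeper zk;

    public ConfigWatcherSketch(String hosts) throws Exception {
        zk = new ZooKeeper(hosts, 30000, this);
    }

    // Read the configuration and leave a watch so we are told about the next change
    void readConfig() throws Exception {
        byte[] data = zk.getData(CONFIG_PATH, true, null);
        System.out.println("Current configuration: " + new String(data));
    }

    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                readConfig(); // fetch the new value and re-register the watch
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ConfigWatcherSketch watcher = new ConfigWatcherSketch("localhost:2181");
        watcher.readConfig();
        Thread.sleep(Long.MAX_VALUE); // keep the client alive so it keeps watching
    }
}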

Cluster Management (Group membership)

ZooKeeper makes cluster management easy. If several servers form a service cluster, a "manager" must know the service status of every machine in the cluster; once a machine can no longer provide service, the other machines in the cluster must be informed so they can adjust and redistribute the service strategy. Likewise, when you increase the cluster's service capacity by adding one or more servers, the manager must be informed as well.

ZooKeeper not only helps you maintain the service status of the machines in the current cluster, it also helps you elect a "manager" to manage the cluster. This is ZooKeeper's other function: leader election.

Cluster membership is implemented by having each server create an EPHEMERAL directory node under a common parent, and then call getChildren(String path, boolean watch) on that parent node with watch set to true. Because the node is EPHEMERAL, it is deleted automatically when the server that created it dies; the parent's children therefore change, the watch registered by getChildren fires, and the other servers learn that one server has died. Adding a new server works on the same principle.
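A minimal sketch of this membership pattern follows; it is not from the original attachment, and the parent node /cluster and the member name are assumed placeholders (the parent must already exist):

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class ClusterMemberSketch implements Watcher {
    private static final String GROUP = "/cluster"; // hypothetical parent node
    private final ZooKeeper zk;

    public ClusterMemberSketch(String hosts, String memberName) throws Exception {
        zk = new ZooKeeper(hosts, 30000, this);
        // Register this server; the node disappears automatically if the session dies
        zk.create(GROUP + "/" + memberName, new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        listMembers();
    }

    // List current members and leave a watch on the children of /cluster
    private void listMembers() throws Exception {
        List<String> members = zk.getChildren(GROUP, true);
        System.out.println("Current members: " + members);
    }

    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                listMembers(); // a member joined or died; refresh and re-watch
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        new ClusterMemberSketch("localhost:2181", "server-1");
        Thread.sleep(Long.MAX_VALUE);
    }
}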

How does ZooKeeper implement leader election, that is, choosing a master server? As in the previous case, each server creates an EPHEMERAL directory node, but this time it is also a SEQUENTIAL node, so it is an EPHEMERAL_SEQUENTIAL node. The reason for using EPHEMERAL_SEQUENTIAL nodes is that every server then gets a number, and we can pick the server with the smallest current number as the master. If that smallest-numbered server dies, its node is deleted (because it is ephemeral), a new smallest number appears in the current node list, and we select that node as the new master. In this way the master is selected dynamically, avoiding the single point of failure that a statically designated master has in the traditional sense.


Figure 3. Cluster management structure diagram

The sample code for this section is as follows, and the complete code is shown in the attachment:


Leader election key code

void findLeader() throws InterruptedException {
    byte[] leader = null;
    try {
        // Watch /leader so we are notified if the current leader disappears
        leader = zk.getData(root + "/leader", true, null);
    } catch (Exception e) {
        logger.error(e);
    }
    if (leader != null) {
        following();
    } else {
        String newLeader = null;
        try {
            byte[] localhost = InetAddress.getLocalHost().getAddress();
            // The /leader node is EPHEMERAL: it vanishes when this session dies
            newLeader = zk.create(root + "/leader", localhost,
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        } catch (Exception e) {
            logger.error(e);
        }
        if (newLeader != null) {
            leading();
        } else {
            mutex.wait();
        }
    }
}
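The attachment code above competes for a single ephemeral /leader node. The paragraph before Figure 3 also describes an EPHEMERAL_SEQUENTIAL variant in which the candidate with the smallest number becomes the master; a minimal sketch of that variant, with an assumed /election parent node that must already exist, might look like this:

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        String election = "/election"; // hypothetical parent node

        // Every candidate creates an ephemeral, sequentially numbered node
        String myNode = zk.create(election + "/candidate-", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        // The candidate whose node has the smallest sequence number is the master
        List<String> children = zk.getChildren(election, false);
        Collections.sort(children);
        String smallest = election + "/" + children.get(0);
        if (myNode.equals(smallest)) {
            System.out.println("I am the master: " + myNode);
        } else {
            System.out.println("I am a follower; current master node is " + smallest);
        }
        // A real server would keep the session open so its ephemeral node survives
        zk.close();
    }
}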

Shared Lock (Locks)

Shared locks are easy to implement within a single process, but harder across processes or across different servers. ZooKeeper makes this easy: each server that wants the lock creates an EPHEMERAL_SEQUENTIAL directory node, then calls getChildren to check whether the smallest node in the current children list is the node it created itself. If it is, that server holds the lock; if not, it calls exists(String path, boolean watch) to watch for changes in the children list on ZooKeeper, until the node it created becomes the smallest one in the list, at which point it acquires the lock. Releasing the lock is just as easy: the holder simply deletes the node it created earlier.


Figure 4. Zookeeper implementation of Locks flowchart

The implementation code of the synchronization lock is as follows; the complete code can be found in the attachment.

The key idea of the synchronization lock

Locking: ZooKeeper implements the lock operation as follows:

    1. The client calls the create() method to create a node whose path has the format "_locknode_/lock-", with both the SEQUENTIAL and EPHEMERAL flags set. That is, the created node is temporary, and all nodes are numbered consecutively in the form "lock-i".
    2. The client calls the getChildren() method on the lock directory, without setting a watch, to find the node with the smallest number under it.
    3. If the node obtained in step 2 is exactly the node this client created in step 1, the client has obtained the lock and exits the protocol.
    4. Otherwise, the client calls the exists() method with a watch set on the node in the lock directory whose sequence number is immediately below its own, to monitor that node's status.
    5. When the status of the watched node changes, the client jumps back to step 2 and continues until it acquires the lock or exits the lock competition.

Unlocking: the unlock operation is very simple; the client just deletes the temporary node it created in step 1 of the lock operation.
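As a hedged illustration of steps 1 through 5, here is a minimal sketch (not the attachment code); the lock directory /_locknode_ is an assumed placeholder that must already exist, and, following step 4, the sketch watches only the node immediately below its own number, whereas the attachment code below watches the smallest node in the list:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class LockSketch {
    private static final String LOCK_DIR = "/_locknode_"; // illustrative lock directory
    private final ZooKeeper zk;

    public LockSketch(ZooKeeper zk) {
        this.zk = zk;
    }

    // Blocks until the lock is held; returns the path of the node we own
    public String lock() throws Exception {
        // Step 1: create an ephemeral, sequential node under the lock directory
        String myNode = zk.create(LOCK_DIR + "/lock-", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String myName = myNode.substring(LOCK_DIR.length() + 1);
        while (true) {
            // Step 2: list the children without a watch
            List<String> children = zk.getChildren(LOCK_DIR, false);
            Collections.sort(children);
            // Step 3: our node is the smallest, so we hold the lock
            if (myName.equals(children.get(0))) {
                return myNode;
            }
            // Step 4: watch only the node just below our own number
            String predecessor = children.get(children.indexOf(myName) - 1);
            final CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zk.exists(LOCK_DIR + "/" + predecessor, new Watcher() {
                public void process(WatchedEvent event) {
                    if (event.getType() == Event.EventType.NodeDeleted) {
                        latch.countDown();
                    }
                }
            });
            if (stat != null) {
                latch.await(); // Step 5: wait, then loop back to step 2
            }
        }
    }

    // Unlock: simply delete the node created in step 1
    public void unlock(String myNode) throws Exception {
        zk.delete(myNode, -1);
    }
}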


Key code for Sync lock

void getLock() throws KeeperException, InterruptedException {
    List<String> list = zk.getChildren(root, false);
    String[] nodes = list.toArray(new String[list.size()]);
    Arrays.sort(nodes);
    if (myZnode.equals(root + "/" + nodes[0])) {
        // Our node is the smallest, so we hold the lock
        doAction();
    } else {
        waitForLock(nodes[0]);
    }
}

void waitForLock(String lower) throws InterruptedException, KeeperException {
    // Watch the smallest node and wait for it to go away
    Stat stat = zk.exists(root + "/" + lower, true);
    if (stat != null) {
        mutex.wait();
    } else {
        getLock();
    }
}

Queue Management

Zookeeper can handle two types of queues:

    1. When all members of a queue have gathered, the queue becomes available; otherwise it keeps waiting for all members to arrive. This is the synchronization queue.
    2. Elements are enqueued and dequeued in FIFO order, which can be used, for example, to implement the producer and consumer model.

The implementation of the synchronization queue with Zookeeper is as follows:

Create a parent directory /synchronizing, and have each member watch (set a watch on) whether the flag node /synchronizing/start exists. Each member then joins the queue by creating a temporary directory node /synchronizing/member_i, after which it gets all the directory nodes under /synchronizing, that is, all the member_i nodes. It checks whether the count of these nodes has already reached the number of members: if it is still smaller, it waits for /synchronizing/start to appear; if it is already equal, it creates /synchronizing/start.

It is easier to understand with the following flowchart:


Figure 5. Synchronization Queue Flowchart

The key code for the synchronization queue is as follows, and the complete code is shown in the attachment:


Synchronization queue

void addQueue() throws KeeperException, InterruptedException {
    // Watch for the /start flag node
    zk.exists(root + "/start", true);
    // Join the queue as an ephemeral, sequentially numbered member
    zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE,
            CreateMode.EPHEMERAL_SEQUENTIAL);
    synchronized (mutex) {
        List<String> list = zk.getChildren(root, false);
        if (list.size() < size) {
            mutex.wait();
        } else {
            // The last member to arrive creates /start to release everyone
            zk.create(root + "/start", new byte[0], Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);
        }
    }
}

When the queue is not yet full, the member calls wait() and then waits for the watch notification. The watch code is as follows:

public void process(WatchedEvent event) {
    if (event.getPath().equals(root + "/start") &&
            event.getType() == Event.EventType.NodeCreated) {
        System.out.println("Get notified");
        super.process(event);
        doAction();
    }
}

The idea for implementing a FIFO queue with ZooKeeper is as follows:

The implementation idea is also very simple: in a dedicated directory, create SEQUENTIAL child nodes /queue_i, so that every element added to the queue gets a number. To dequeue, call getChildren() to get all elements currently in the queue and consume the one with the smallest number, which guarantees FIFO order.

Below is sample code for a producer-consumer style queue; the complete code can be found in the attachment:


Producer Code

boolean produce(int i) throws KeeperException, InterruptedException {
    ByteBuffer b = ByteBuffer.allocate(4);
    byte[] value;
    b.putInt(i);
    value = b.array();
    // Each element becomes a sequentially numbered persistent node
    zk.create(root + "/element", value, ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT_SEQUENTIAL);
    return true;
}


Consumer Code

int consume() throws KeeperException, InterruptedException {
    int retValue = -1;
    Stat stat = null;
    while (true) {
        synchronized (mutex) {
            List<String> list = zk.getChildren(root, true);
            if (list.size() == 0) {
                mutex.wait();
            } else {
                // Find the element with the smallest sequence number
                String minNode = list.get(0);
                Integer min = new Integer(minNode.substring(7));
                for (String s : list) {
                    Integer tempValue = new Integer(s.substring(7));
                    if (tempValue < min) {
                        min = tempValue;
                        minNode = s;
                    }
                }
                byte[] b = zk.getData(root + "/" + minNode, false, stat);
                zk.delete(root + "/" + minNode, 0);
                ByteBuffer buffer = ByteBuffer.wrap(b);
                retValue = buffer.getInt();
                return retValue;
            }
        }
    }
}

Summary

ZooKeeper, a sub-project of the Hadoop project, is an essential module for Hadoop cluster management. It is mainly used to control data in the cluster, for example managing the NameNode in Hadoop clusters, as well as master election and state synchronization between servers in HBase.

This article describes the basics of ZooKeeper and introduces several typical application scenarios. These are ZooKeeper's basic functions; most importantly, ZooKeeper provides a good set of mechanisms for distributed cluster management. With its hierarchical directory tree data structure and effective management of the nodes in that tree, you can design all kinds of distributed data management models, not just the common scenarios mentioned above.
