Reproduced from: http://hi.baidu.com/%CC%D8%B0%AE%C0%B6%C1%AB%BB%A8/blog/item/2ae9efda06e267c9572c841b.html
Installation and configuration details
The ZooKeeper covered in this article is based on the stable release 3.2.2; the latest version can be obtained from the official site at http://hadoop.apache.org/zookeeper/. ZooKeeper installation is very simple. The following describes ZooKeeper installation and configuration in standalone mode and in cluster mode.
Standalone Mode
Standalone installation is very simple: just obtain the ZooKeeper archive and extract it to a directory, for example /home/zookeeper-3.2.2. The ZooKeeper startup scripts are in the bin directory; on Linux the startup script is zkServer.sh. Version 3.2.2 does not ship a startup script for Windows, so to start ZooKeeper on Windows you must write one by hand, as shown in Listing 1:
Listing 1. ZooKeeper startup script for Windows
setlocal
set ZOOCFGDIR=%~dp0%..\conf
set ZOO_LOG_DIR=%~dp0%..
set ZOO_LOG4J_PROP=INFO,CONSOLE
set CLASSPATH=%ZOOCFGDIR%
set CLASSPATH=%~dp0..\*;%~dp0..\lib\*;%CLASSPATH%
set CLASSPATH=%~dp0..\build\classes;%~dp0..\build\lib\*;%CLASSPATH%
set ZOOCFG=%ZOOCFGDIR%\zoo.cfg
set ZOOMAIN=org.apache.zookeeper.server.ZooKeeperServerMain
java "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
endlocal
Before running the startup script, you need to set a few basic configuration items. The ZooKeeper configuration files are in the conf directory, which contains zoo_sample.cfg and log4j.properties. You need to rename zoo_sample.cfg to zoo.cfg, because ZooKeeper looks for that file as its default configuration file at startup. The meaning of each item in this configuration file is described in detail below.
tickTime=2000
dataDir=D:/devtools/zookeeper-3.2.2/build
clientPort=2181
tickTime: this value, in milliseconds, is the heartbeat interval maintained between ZooKeeper servers, and between clients and servers; a heartbeat is sent every tickTime.
dataDir: as the name implies, the directory in which ZooKeeper saves its data. By default, ZooKeeper also writes its data log files to this directory.
clientPort: the port on which the ZooKeeper server listens for and accepts client access requests.
Once these items are configured, you can start ZooKeeper. After starting it, check whether it is in service by running netstat -ano and verifying that the clientPort you configured is in the listening state.
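Alternatively, you can verify the server from Java. The following is a minimal sketch, assuming a server on localhost with the clientPort configured above; the class name and the 3000 ms session timeout are illustrative:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class PingZooKeeper {
    public static void main(String[] args) throws Exception {
        // Open a session; the watcher is told about connection state changes
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) {
                System.out.println("Connection event: " + event.getState());
            }
        });
        // The root znode always exists on a running server, so a non-null
        // Stat here means the server is answering requests
        System.out.println("Root znode stat: " + zk.exists("/", false));
        zk.close();
    }
}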
Cluster Mode
ZooKeeper can provide service not only on a single machine; it also supports clusters of multiple machines. In fact, ZooKeeper additionally supports a pseudo-cluster mode, in which you run multiple ZooKeeper instances on one physical machine. The following describes installation and configuration in cluster mode.
Installing and configuring ZooKeeper in cluster mode is not much more complex; you only need to add a few configuration items. In cluster mode, the following items are added on top of the three above:
initLimit=5
syncLimit=2
server.1=192.168.211.1:2888:3888
server.2=192.168.211.2:2888:3888
initLimit: configures the maximum number of heartbeat intervals (tickTime) that ZooKeeper tolerates while a connecting client initializes. The client here is not a user client of the ZooKeeper service, but a follower server connecting to the leader within the ZooKeeper server cluster. If the server has still not received the client's response after that many heartbeats, the client connection is considered failed. The total allowance here is 5 * 2000 ms = 10 seconds.
syncLimit: identifies how long a message, request, or response between the leader and a follower may take; with the values above the maximum is 2 * 2000 ms = 4 seconds.
server.A=B:C:D: A is a number identifying the server; B is the server's IP address; C is the port this server uses to exchange information with the cluster's leader; D is the port used for communication among servers during an election when the leader fails and a new leader must be chosen. In the pseudo-cluster configuration, B is the same for every instance, so the instances' communication ports must not clash: assign each instance different port numbers.
Besides the zoo.cfg configuration file, cluster mode also requires a myid file, located in the dataDir directory. This file holds a single value: the A of this machine's server.A entry. ZooKeeper reads this file at startup and compares its value with the configuration in zoo.cfg to determine which server it is.
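For example, assuming the dataDir shown earlier, on the machine declared as server.1 the file D:/devtools/zookeeper-3.2.2/build/myid would contain nothing but the single line:

1

On the server.2 machine the file would contain 2, and so on.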
Data Model
ZooKeeper maintains a hierarchical data structure that is very similar to a standard file system, as shown in Figure 1:
Figure 1. ZooKeeper data structure
The ZooKeeper data structure has the following features:
1. Each directory entry, such as NameService, is called a znode, and a znode is uniquely identified by its path; for example, the znode for Server1 is /NameService/Server1.
2. A znode can have child znodes, and each znode can also store data. Note that znodes of the EPHEMERAL type cannot have children.
3. The data in a znode is versioned: each access path can store multiple versions of its data.
4. A znode can be a temporary node: once the client that created the znode loses contact with the server, the znode is automatically deleted. ZooKeeper clients communicate with the server over long-lived connections kept alive by heartbeats, and this connection state is called a session; if the session becomes invalid, the client's temporary znodes are removed.
5. A znode's directory name can be automatically numbered: if app1 already exists and another node is created, it is automatically named app2; see the sketch after this list.
6. A znode can be watched, both for changes to the data it stores and for changes to its children. Once a change occurs, the watching client is notified. This is the core feature of ZooKeeper; many of ZooKeeper's capabilities are based on it, and examples appear in the typical application scenarios below.
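To make the temporary-node and auto-numbering behavior concrete, here is a minimal sketch. It assumes a ZooKeeper server on localhost:2181; the class name, paths, and timeout are illustrative:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeFeaturesDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) {
                System.out.println("Event: " + event.getType());
            }
        });
        // An EPHEMERAL znode lives only as long as this client's session
        zk.create("/demoEphemeral", "tmp".getBytes(),
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // SEQUENTIAL znodes are numbered automatically, so two creations of
        // the same path yield e.g. /demoApp0000000000 and /demoApp0000000001
        String first = zk.create("/demoApp", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String second = zk.create("/demoApp", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println(first + ", " + second);
        // Closing the session automatically deletes all three ephemeral znodes
        zk.close();
    }
}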
How to Use
As a distributed service framework, ZooKeeper mainly solves consistency problems for application systems in distributed clusters. It can store data organized in a file-system-like tree of directory nodes, but ZooKeeper is not meant as a general data store; rather, it is used to maintain and monitor state changes of the data you store in it. By monitoring these state changes, data-based cluster management becomes possible. The following sections describe some typical problems ZooKeeper can solve, the ZooKeeper operation interface, and simple examples.
Common interface list
To connect to the ZooKeeper server, a client creates an instance of org.apache.zookeeper.ZooKeeper and then calls the methods provided by this class to interact with the server.
As mentioned above, ZooKeeper is mainly used to maintain and monitor the state of data stored in a tree of directory nodes, so everything we can do with ZooKeeper is roughly what we can do with a directory-node tree: create a directory node, set data on a directory node, get all children of a directory node, set permissions on a directory node, and watch a directory node for state changes.
These interfaces are shown in the following table:
Table 1. org.apache.zookeeper.ZooKeeper method list
String create(String path, byte[] data, List<ACL> acl, CreateMode createMode)
Creates a directory node at the given path and sets data on it. createMode identifies four kinds of directory node: PERSISTENT, a persistent directory node whose stored data is not lost; PERSISTENT_SEQUENTIAL, a sequentially auto-numbered persistent directory node whose name is suffixed with a counter incremented from the count of existing nodes, with the name of the node actually created returned to the client; EPHEMERAL, a temporary directory node that is automatically deleted once the session of the client that created it times out; EPHEMERAL_SEQUENTIAL, a temporary, automatically numbered directory node.
Stat exists(String path, boolean watch)
Determines whether the directory node at path exists, and sets whether to watch it. Here the watcher is the one specified when the ZooKeeper instance was created; exists also has an overload that takes a specific Watcher.
Stat exists(String path, Watcher watcher)
Overloaded form that sets a specific Watcher on a directory node. The Watcher is a core ZooKeeper mechanism: it can watch for changes to a directory node's data and to its children. Once such a state change occurs, the server notifies all Watchers set on that directory node, so every client quickly learns that the state of the directory node it cares about has changed and can respond accordingly.
void delete(String path, int version)
Deletes the directory node at path. A version of -1 matches any version, so the node is deleted regardless of its data version.
List<String> getChildren(String path, boolean watch)
Gets all child directory nodes of the node at the given path. getChildren also has an overload that lets you set a specific Watcher to monitor the children's state.
Stat setData(String path, byte[] data, int version)
Sets data on the node at path; you can specify the version of the data to replace. A version of -1 matches any version.
byte[] getData(String path, boolean watch, Stat stat)
Gets the data stored in the directory node at path; the data version and other metadata are returned through stat. You can also set whether to watch the data state of this directory node.
void addAuthInfo(String scheme, byte[] auth)
Submits the client's authorization information to the server; the server verifies the client's access permissions against it.
Stat setACL(String path, List<ACL> acl, int version)
Resets the access permissions of a directory node. Note that permissions on directory nodes in ZooKeeper do not propagate: a parent node's permissions are not inherited by its children. An ACL entry consists of perms and an id. Perms include ALL, READ, WRITE, CREATE, DELETE, and ADMIN. The id identifies the identities allowed to access the directory node; two are predefined: ANYONE_ID_UNSAFE = new Id("world", "anyone"), meaning anyone may access, and AUTH_IDS = new Id("auth", ""), meaning the creator has access. A small ACL sketch follows the table.
List<ACL> getACL(String path, Stat stat)
Gets the access-permission list of a directory node.
In addition to the methods listed in the table above, there are overloaded variants, such as ones taking a callback class or setting a specific Watcher. For details, see the API documentation of the org.apache.zookeeper.ZooKeeper class.
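As promised in the setACL row of Table 1, here is a small sketch of working with ACLs. It is a sketch only, assuming a server on localhost:2181 and digest authentication; the user:password string and the path /protected are made up:

import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooDefs.Perms;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.data.Stat;

public class AclDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) { }
        });
        // Authenticate with the digest scheme; the authenticated identity is
        // what CREATOR_ALL_ACL refers to
        zk.addAuthInfo("digest", "admin:secret".getBytes());
        try {
            // Only the authenticated creator may access this node
            zk.create("/protected", "data".getBytes(),
                    Ids.CREATOR_ALL_ACL, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException ignored) { }
        // Later, additionally allow anyone to read it: keep the creator's
        // full rights and add one READ entry for the ("world", "anyone") id
        List<ACL> acl = new ArrayList<ACL>(Ids.CREATOR_ALL_ACL);
        acl.add(new ACL(Perms.READ, new Id("world", "anyone")));
        zk.setACL("/protected", acl, -1);
        System.out.println(zk.getACL("/protected", new Stat()));
        zk.close();
    }
}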
Basic operations
The following sample code performs basic ZooKeeper operations so that you can get an intuitive feel for ZooKeeper. It covers creating a connection to the ZooKeeper server and the most basic data operations:
Listing 2. Basic ZooKeeper operations
// Create a connection to the server
ZooKeeper zk = new ZooKeeper("localhost:" + CLIENT_PORT,
        ClientBase.CONNECTION_TIMEOUT, new Watcher() {
            // Watch all triggered events
            public void process(WatchedEvent event) {
                System.out.println("Triggered the " + event.getType() + " event!");
            }
        });
// Create a directory node
zk.create("/testRootPath", "testRootData".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
// Create a child directory node
zk.create("/testRootPath/testChildPathOne", "testChildDataOne".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
System.out.println(new String(zk.getData("/testRootPath", false, null)));
// Fetch the list of child directory nodes
System.out.println(zk.getChildren("/testRootPath", true));
// Modify the data of a child directory node
zk.setData("/testRootPath/testChildPathOne",
        "modifyChildDataOne".getBytes(), -1);
System.out.println("Directory node status: ["
        + zk.exists("/testRootPath", true) + "]");
// Create another child directory node
zk.create("/testRootPath/testChildPathTwo", "testChildDataTwo".getBytes(),
        Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
System.out.println(new String(
        zk.getData("/testRootPath/testChildPathTwo", true, null)));
// Delete the child directory nodes
zk.delete("/testRootPath/testChildPathTwo", -1);
zk.delete("/testRootPath/testChildPathOne", -1);
// Delete the parent directory node
zk.delete("/testRootPath", -1);
// Close the connection
zk.close();
The output result is as follows:
Triggered the None event!
testRootData
[testChildPathOne]
Directory node status: [5,5,]
Triggered the NodeChildrenChanged event!
testChildDataTwo
Triggered the NodeDeleted event!
Triggered the NodeDeleted event!
With watching of a directory node enabled, the process method of the Watcher object is called as soon as the node's state changes.
Typical application scenarios of ZooKeeper
ZooKeeper is a distributed service-management framework designed around the observer pattern: it stores and manages data that everyone cares about and accepts registrations from observers; once the state of that data changes, ZooKeeper notifies the observers registered with it so they can respond accordingly, which makes Master/Slave-style cluster management possible. For more information about the ZooKeeper architecture, see the ZooKeeper source code.
The following sections describe these typical application scenarios in detail, that is, the concrete problems ZooKeeper can solve.
Name Service
Distributed applications usually need a complete naming scheme that can generate unique names which are also easy for people to recognize and remember. A tree-shaped name structure is generally the ideal choice: it is a hierarchical directory structure that is both user-friendly and free of duplicates. You may be reminded of JNDI here; indeed, ZooKeeper's name service is similar to JNDI in that both associate a hierarchical directory structure with certain resources. ZooKeeper's name service is broader, though: you may not need to associate a name with a specific resource at all; you may only need a name that never repeats, just like a database that generates unique numeric primary keys.
The name service is already a built-in ZooKeeper capability: you only need to call the ZooKeeper API. A single call to the create interface is enough to easily create a directory node.
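For example, the following sketch allocates names that never repeat by creating sequential znodes, much like a database auto-increment key. It assumes a server on localhost:2181; the /names parent and the id prefix are illustrative:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class UniqueNameDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) { }
        });
        try {
            // Make sure the persistent parent exists
            zk.create("/names", new byte[0],
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException ignored) { }
        // Each call returns a fresh, never-repeating name such as
        // /names/id0000000042
        String name = zk.create("/names/id", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println("Allocated name: " + name);
        zk.close();
    }
}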
Configuration Management
Configuration management is very common in distributed application environments. For example, the same application system runs on many PC servers, and some configuration items of the application are identical across them. If these shared items have to be modified, every PC server running the application must be changed at the same time, which is very troublesome and error-prone.
Configuration information like this can be managed by ZooKeeper: save the configuration in one of ZooKeeper's directory nodes, and have every application machine whose configuration may change watch that node. Once the configuration information changes, each application machine receives a notification from ZooKeeper, fetches the new configuration from ZooKeeper, and applies it to the system.
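A minimal sketch of the application-machine side follows, assuming a server on localhost:2181 and a configuration node at the illustrative path /myapp/config. Note one detail the sketch relies on: a ZooKeeper watch fires only once, so each notification handler must re-register the watch (here by reading with watch=true again) to keep observing:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooKeeper;

public class ConfigWatcher implements Watcher {
    private static final String CONFIG_PATH = "/myapp/config"; // illustrative
    private final ZooKeeper zk;

    public ConfigWatcher(String hosts) throws Exception {
        zk = new ZooKeeper(hosts, 3000, this);
        reload(); // initial read; also registers the first watch
    }

    private void reload() throws Exception {
        // watch=true registers a watch for the *next* change
        byte[] data = zk.getData(CONFIG_PATH, true, null);
        System.out.println("Applying configuration: " + new String(data));
    }

    public void process(WatchedEvent event) {
        if (event.getType() == EventType.NodeDataChanged) {
            try {
                reload(); // fetch the new configuration and apply it
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        new ConfigWatcher("localhost:2181");
        Thread.sleep(Long.MAX_VALUE); // keep watching until killed
    }
}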
Figure 2. Configuration Management Structure
Group Membership
ZooKeeper makes cluster management easy. When multiple servers form a service cluster, a "manager" must know the service status of every machine currently in the cluster; once a machine can no longer provide service, the rest of the cluster must find out and adjust the service-allocation strategy. Likewise, when the cluster's service capacity is increased by adding one or more servers, the "manager" must learn of that too.
ZooKeeper not only helps you maintain the service status of the machines in the current cluster, it can also help you select that "manager" so it can manage the cluster. This is another ZooKeeper capability: leader election.
Cluster management is implemented by creating an EPHEMERAL directory node on ZooKeeper for each server and having every server call the getChildren(String path, boolean watch) method on the parent with watch set to true. Because the nodes are ephemeral, when the server that created one dies, that directory node is deleted too, so the parent's children change and the watch registered by getChildren fires. The other servers are therefore notified that some server is gone; the same mechanism covers newly added servers.
Leader election in ZooKeeper, that is, selecting a master server, works in the same way: as before, every server creates an ephemeral directory node, except that this time it is also a sequential node, an EPHEMERAL_SEQUENTIAL directory node. Because the nodes are EPHEMERAL_SEQUENTIAL, every server gets a number, and we can select the server with the smallest number as the master. If the server with the smallest number dies, its ephemeral node is deleted, a new minimum appears in the current node list, and the server owning that node becomes the current master. Dynamic master selection is thus achieved, avoiding the single point of failure that a statically configured master traditionally suffers from.
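The following sketch shows the EPHEMERAL_SEQUENTIAL election scheme just described (it differs from Listing 3 below, which contends for a single /leader node). It assumes a server on localhost:2181; the /election parent and n_ prefix are illustrative, and a real server would keep its session open and re-run the check from its watcher whenever the children change:

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, new Watcher() {
            public void process(WatchedEvent event) {
                System.out.println("Event: " + event.getType());
            }
        });
        try {
            // Persistent parent under which candidates register
            zk.create("/election", new byte[0],
                    Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException ignored) { }
        // Every candidate creates an EPHEMERAL_SEQUENTIAL child; the
        // sequence number doubles as the candidate's rank
        String me = zk.create("/election/n_", new byte[0],
                Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        // watch=true: when any candidate dies, its ephemeral znode vanishes
        // and the survivors are notified so they can re-run this check
        List<String> children = zk.getChildren("/election", true);
        Collections.sort(children);
        boolean isMaster = me.endsWith(children.get(0));
        System.out.println(me + (isMaster ? " is the master" : " is a follower"));
        Thread.sleep(Long.MAX_VALUE); // keep the session (and our znode) alive
    }
}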
Figure 3. Cluster Management Structure
The sample code for this part is as follows. For the complete code, see the attachment:
Listing 3. Key Leader Election Code
void findLeader() throws InterruptedException {
    byte[] leader = null;
    try {
        leader = zk.getData(root + "/leader", true, null);
    } catch (Exception e) {
        logger.error(e);
    }
    if (leader != null) {
        following();
    } else {
        String newLeader = null;
        try {
            byte[] localhost = InetAddress.getLocalHost().getAddress();
            newLeader = zk.create(root + "/leader", localhost,
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        } catch (Exception e) {
            logger.error(e);
        }
        if (newLeader != null) {
            leading();
        } else {
            mutex.wait();
        }
    }
}
Locks
Shared locks are easy to implement within a single process, but hard to implement across processes or across machines. ZooKeeper makes this straightforward: the server that wants the lock creates an EPHEMERAL_SEQUENTIAL directory node, then calls the getChildren method to check whether the smallest node in the current node list is the one it created. If it is, the server holds the lock. Otherwise it calls the exists(String path, boolean watch) method to watch for changes in the node list on ZooKeeper, repeating the check until its own node is the one with the smallest number, at which point it obtains the lock. Releasing the lock is simple: the server just deletes the directory node it created earlier.
Figure 4. Flowchart of implementing locks with ZooKeeper
The synchronization lock implementation code is as follows. For the complete code, see the attachment:
Listing 4. Key code of the synchronization lock
void getLock() throws KeeperException, InterruptedException {
    List<String> list = zk.getChildren(root, false);
    String[] nodes = list.toArray(new String[list.size()]);
    Arrays.sort(nodes);
    if (myZnode.equals(root + "/" + nodes[0])) {
        doAction();
    } else {
        waitForLock(nodes[0]);
    }
}

void waitForLock(String lower) throws InterruptedException, KeeperException {
    Stat stat = zk.exists(root + "/" + lower, true);
    if (stat != null) {
        mutex.wait();
    } else {
        getLock();
    }
}
Queue Management
ZooKeeper can handle two types of queues:
1. A queue that becomes available only after all the members of a group have gathered; until then it keeps waiting for all members to arrive. This is a synchronization queue.
2. A queue in which elements are enqueued and dequeued in FIFO order, as in a producer-consumer model.
The implementation of the synchronization queue with ZooKeeper is as follows:
Create a parent directory /synchronizing, and have each member watch for the existence of the flag directory /synchronizing/start. Each member joins the queue by creating a temporary directory node /synchronizing/member_i, then fetches all directory nodes under /synchronizing, that is, the member_i entries, and checks whether the number of members i is already equal to the expected member count. If it is smaller, the member waits for /synchronizing/start to appear; if it is already equal, the member creates /synchronizing/start.
The following flowchart makes this easier to follow:
Figure 5. Synchronization queue Flowchart
The key code of the synchronization queue is as follows. For the complete code, see the attachment:
Listing 5. Synchronization queue
void addQueue() throws KeeperException, InterruptedException {
    zk.exists(root + "/start", true);
    zk.create(root + "/" + name, new byte[0], Ids.OPEN_ACL_UNSAFE,
            CreateMode.EPHEMERAL_SEQUENTIAL);
    synchronized (mutex) {
        List<String> list = zk.getChildren(root, false);
        if (list.size() < size) {
            mutex.wait();
        } else {
            zk.create(root + "/start", new byte[0], Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT);
        }
    }
}
While the queue is not yet full, the member enters wait() and waits for the notification from the watch. The watch code is as follows:
public void process(WatchedEvent event) {
    if (event.getPath().equals(root + "/start")
            && event.getType() == Event.EventType.NodeCreated) {
        System.out.println("Got the notification");
        super.process(event);
        doAction();
    }
}
The implementation of a FIFO queue with ZooKeeper is as follows:
The idea is also very simple: under a specific directory, create SEQUENTIAL child nodes /queue_i, so that every member joins the queue with a number. To dequeue, call the getChildren() method to obtain all current elements of the queue, then consume the one with the smallest number, which guarantees FIFO order.
The following is sample code for the producer-consumer queue. For the complete code, see the attachment:
Listing 6. Producer code
boolean produce(int i) throws KeeperException, InterruptedException {
    ByteBuffer b = ByteBuffer.allocate(4);
    byte[] value;
    b.putInt(i);
    value = b.array();
    zk.create(root + "/element", value, ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT_SEQUENTIAL);
    return true;
}
Listing 7. Consumer Code
int consume() throws KeeperException, InterruptedException {
    int retvalue = -1;
    Stat stat = null;
    while (true) {
        synchronized (mutex) {
            List<String> list = zk.getChildren(root, true);
            if (list.size() == 0) {
                mutex.wait();
            } else {
                // "element" is 7 characters long, so the sequence number
                // starts at index 7 of each child name
                Integer min = new Integer(list.get(0).substring(7));
                for (String s : list) {
                    Integer tempValue = new Integer(s.substring(7));
                    if (tempValue < min) min = tempValue;
                }
                // Sequence numbers are zero-padded to 10 digits, so re-pad
                // when rebuilding the path of the smallest element
                String path = root + "/element" + String.format("%010d", min);
                byte[] b = zk.getData(path, false, stat);
                zk.delete(path, 0);
                ByteBuffer buffer = ByteBuffer.wrap(b);
                retvalue = buffer.getInt();
                return retvalue;
            }
        }
    }
}
Summary
As a subproject of the Hadoop project, ZooKeeper is an essential module for Hadoop cluster management: it is mainly used to coordinate shared data in the cluster, for example managing the NameNode in a Hadoop cluster, as well as master election and server state synchronization in HBase.
This article introduced the basics of ZooKeeper and several typical application scenarios, which together cover ZooKeeper's fundamental capabilities. Most important of all, ZooKeeper provides a good mechanism for distributed cluster management: a hierarchical directory-tree data structure together with effective management of the nodes in that tree, on top of which you can design all kinds of distributed data-management models, not limited to the common application scenarios mentioned above.