1. ZooKeeper Permission Management
1.1 ACL (Access Control List)
ZooKeeper manages permissions through ACLs, which control access to znodes. The implementation of ACLs is very similar to UNIX file permissions: permission bits allow or deny different operations on a node. Unlike standard UNIX permissions, however, ZooKeeper does not limit the user classes to owner, group, and everyone (world). In ZooKeeper a data node has no concept of an "owner"; instead, a client identifies itself with an ID and obtains the access rights corresponding to that ID.
ZooKeeper's permission management is carried out jointly by the server and the client:
(1) Server side
A ZooKeeper node stores two parts of content: data and state, and the state contains the ACL information. Creating a znode produces an ACL list, in which each ACL includes:
① a set of permissions (perms)
② an authentication scheme (scheme)
③ the scheme-specific identity expression (ids)
For example, when scheme="digest", ids takes the form "username:digest", where the digest is the Base64-encoded SHA-1 hash of "username:password" (e.g. "root:J0sTy9BCUKubtK1y8pkbL7qoxSw="). ZooKeeper provides the following authentication schemes:
① digest: the client is authenticated by username and password, e.g. "user:pwd"
② host: the client is authenticated by its hostname, e.g. "localhost"
③ ip: the client is authenticated by its IP address, e.g. "172.2.0.0/24"
④ world: a fixed identity, "anyone"; permissions are open to all clients
When the session is established, the client authenticates itself.
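As a sketch of how the digest scheme encodes ids: the value is the username in the clear, a colon, then the Base64-encoded SHA-1 of "username:password". The helper below mirrors what DigestAuthenticationProvider.generateDigest does, using only JDK classes; the class name DigestSketch is invented for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DigestSketch {
    // Mirrors DigestAuthenticationProvider.generateDigest("user:pass"):
    // keep the username in the clear, hash the whole "user:pass" string
    // with SHA-1, then Base64-encode the 20-byte hash.
    public static String generateDigest(String idPassword) throws Exception {
        String[] parts = idPassword.split(":", 2);
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(idPassword.getBytes(StandardCharsets.UTF_8));
        return parts[0] + ":" + Base64.getEncoder().encodeToString(digest);
    }
}
```

Note that only the digest, not the plaintext password, is stored in the znode's ACL.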
The perms permission set is as follows:
① CREATE: allows creating child nodes
② READ: allows getChildren and getData operations on this node
③ WRITE: allows setData operations on this node
④ DELETE: allows deleting child nodes
⑤ ADMIN: allows setACL operations on this node
In addition, the ZooKeeper Java API provides three predefined ACL lists (in ZooDefs.Ids), namely:
① OPEN_ACL_UNSAFE: a completely open ACL; any client can perform any operation on the node, such as creating, listing, and deleting child nodes.
② READ_ACL_UNSAFE: grants only read access, to any client.
③ CREATOR_ALL_ACL: grants all rights to the creator of the node. Note that the creator must already have authenticated with the server before this ACL is set.
A znode's ACL permissions are represented by an int (perms), whose five low-order bits stand for, from high to low, ADMIN, DELETE, CREATE, WRITE, READ. For example, adcwr = 0x1f, ----r = 0x01, and a-c-r = 0x15. Note that exists and getACL operations are not subject to ACL checks, so any client can query a node's state and its ACL.
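The bit layout above can be made concrete with a small self-contained sketch; the constants mirror ZooDefs.Perms, and the render helper (an invented name) prints the "adcwr" notation used in the text.

```java
public class PermBits {
    // Bit values as in org.apache.zookeeper.ZooDefs.Perms
    public static final int READ   = 1 << 0; // 0x01
    public static final int WRITE  = 1 << 1; // 0x02
    public static final int CREATE = 1 << 2; // 0x04
    public static final int DELETE = 1 << 3; // 0x08
    public static final int ADMIN  = 1 << 4; // 0x10

    // Render an int perms value in the "adcwr" string form used in the text
    public static String render(int perms) {
        StringBuilder sb = new StringBuilder();
        sb.append((perms & ADMIN)  != 0 ? 'a' : '-');
        sb.append((perms & DELETE) != 0 ? 'd' : '-');
        sb.append((perms & CREATE) != 0 ? 'c' : '-');
        sb.append((perms & WRITE)  != 0 ? 'w' : '-');
        sb.append((perms & READ)   != 0 ? 'r' : '-');
        return sb.toString();
    }
}
```

So ADMIN|DELETE|CREATE|WRITE|READ is 0x1f ("adcwr"), READ alone is 0x01 ("----r"), and ADMIN|CREATE|READ is 0x15 ("a-c-r").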
(2) Client side
The client sets the auth information for the current session (for the digest scheme) by calling addAuthInfo(). When the server receives an operation request from the client (other than exists and getACL), it performs ACL validation: the plaintext auth information carried by the session is digested and compared against the ACL information of the target node; if a match grants the required permission the request is allowed, otherwise the server rejects it.
The following example creates a node with ACLs set via the digest scheme (username:password):

    import org.apache.zookeeper.*;
    import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
    import org.apache.zookeeper.data.*;
    import java.util.*;

    public class NewDigest {
        public static void main(String[] args) throws Exception {
            // Build the ACL list
            List<ACL> acls = new ArrayList<ACL>();
            // First ID: digest scheme, username:password form, full permissions
            Id id1 = new Id("digest",
                    DigestAuthenticationProvider.generateDigest("admin:admin"));
            acls.add(new ACL(ZooDefs.Perms.ALL, id1));
            // Second ID: world scheme, read permission for all users
            Id id2 = new Id("world", "anyone");
            acls.add(new ACL(ZooDefs.Perms.READ, id2));
            // Connect (30000 ms session timeout), authenticate as admin,
            // and create the /test znode with the ACL list
            ZooKeeper zk = new ZooKeeper("host1:2181,host2:2181,host3:2181",
                    30000, null);
            zk.addAuthInfo("digest", "admin:admin".getBytes());
            zk.create("/test", "data".getBytes(), acls, CreateMode.PERSISTENT);
        }
    }

1.2 ZooKeeper superDigest
(1) The way the server validates a client operation against a znode's ACL is:
a) Traverse all ACLs of the znode:
① for each ACL, first match the requested operation type against the permissions (perms);
② only if the permission matches, compare the session's auth information with the ACL's username and password digest.
b) If both matches succeed, the operation is allowed; otherwise an insufficient-permission error is returned (rc = -102, NoAuth).
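The traversal above can be sketched as a simplified model; the Acl class and check method here are invented for illustration (real validation happens inside the ZooKeeper server), but the two-step match and the -102 error code follow the description.

```java
import java.util.List;

public class AclCheckSketch {
    public static final int NOAUTH = -102; // KeeperException.Code.NOAUTH

    // Simplified ACL entry: a permission bitmask plus an identity string
    public static class Acl {
        final int perms;
        final String id; // e.g. "admin:<digest>" for the digest scheme
        public Acl(int perms, String id) { this.perms = perms; this.id = id; }
    }

    // Returns 0 if some ACL grants the requested permission to this identity,
    // otherwise the NoAuth error code (-102).
    public static int check(List<Acl> acls, int requestedPerm, String sessionId) {
        for (Acl acl : acls) {
            // Step ①: first match the requested operation against perms
            if ((acl.perms & requestedPerm) == 0) continue;
            // Step ②: only then compare the session's auth info with the ACL id
            if (acl.id.equals("anyone") || acl.id.equals(sessionId)) return 0;
        }
        return NOAUTH;
    }
}
```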
(2) If no ACL in a znode's ACL list has the setAcl (ADMIN) permission, then without superDigest there is no way to modify its permissions; similarly, if the znode does not grant the DELETE permission, none of its child nodes can be deleted. The only workaround is to manually delete the snapshot and log files, rolling ZooKeeper back to an earlier state and restarting it, which of course disrupts other applications using znodes other than this one.
(3) superDigest setup steps:
① Start ZooKeeper (zkServer.sh) with the extra JVM parameter "-Dzookeeper.DigestAuthenticationProvider.superDigest=super:D/InIHSb7yEEbrWz8b9l71RjZJU=" (no spaces).
② On the client side, call addAuthInfo("digest", "super:test".getBytes()); here "super:test" is the plaintext corresponding to "super:D/InIHSb7yEEbrWz8b9l71RjZJU=", and the digest algorithm is the same one used when setting ACLs.
2. Watch Mechanism
A ZooKeeper client can set a watch on a data node and receives a notification when the node changes. The various read requests in ZooKeeper, such as getData(), getChildren(), and exists(), can optionally set a watch. A watch is a one-time trigger that notifies the client when the watched data changes.
(1) The watch mechanism has three key points:
① A watch is one-time: once triggered, it will not notify the client of further data changes unless it is reset.
② A watch notifies the client that data has changed. If the change was made by client A itself, there is no guarantee that the watch notification reaches client A before the call that made the modification returns.
③ For watches, ZooKeeper guarantees that a client receives the watch event before it sees the new data.
(2) Watches are kept on the ZooKeeper server, and any pending watches that need to fire are triggered when the client connects to a new ZooKeeper server. When a client disconnects and reconnects, its watches are automatically re-registered, which is transparent to the client. A watch can be missed in the following case: client B sets a watch on the existence of node A, but B is disconnected, and node A is created and then deleted while B is offline. After reconnecting, B is unaware that node A was ever created.
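The one-time nature of watches can be illustrated with a toy model; the OneShotWatch class below is invented for illustration and is not part of the ZooKeeper API.

```java
import java.util.ArrayList;
import java.util.List;

public class OneShotWatch {
    private final List<Runnable> watchers = new ArrayList<Runnable>();

    // Register a watch; like a ZooKeeper watch, it fires at most once.
    public void watch(Runnable watcher) { watchers.add(watcher); }

    // A data change fires all pending watches, then clears them:
    // further changes notify nobody until watches are re-registered.
    public void dataChanged() {
        List<Runnable> pending = new ArrayList<Runnable>(watchers);
        watchers.clear();
        for (Runnable w : pending) w.run();
    }

    public int pendingWatches() { return watchers.size(); }
}
```

After one dataChanged() call, a watcher must call watch() again, or it will miss every subsequent change.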
(3) ZooKeeper's watch mechanism guarantees the following:
① watch events are dispatched to clients in the order in which they were triggered;
② a client receives a watch event before it sees the new data;
③ the order in which watch events are triggered is consistent with the order of the data changes on the ZooKeeper server.
(4) Notes on the ZooKeeper watch mechanism:
① A watch is one-time.
② Because watches are one-time, and there is a delay between receiving a watch event and setting a new watch, a client may not be able to observe every change to the data.
③ A watch object is triggered only once per notification. If a client sets both an exists watch and a getData watch on the same node, then when that node is deleted, only a single "node deleted" notification is delivered.
④ When a client disconnects from a server, it receives no watch events until the connection is re-established; for this reason, session information is propagated to all ZooKeeper servers. Because watches are not delivered while the connection is broken, modules must be designed to tolerate this case.
3. Session Mechanism
3.1 Session Overview
The configuration of each ZooKeeper client includes the list of servers in the ensemble. At startup, the client tries to connect to one server in the list; if that connection fails, it tries another, and so on, until it either connects to a server successfully or fails because all ZooKeeper servers are unavailable.
Figure 3.1 ZooKeeper architecture
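The connection loop described above can be sketched as follows; Connector is a made-up functional interface standing in for the real connection attempt.

```java
import java.util.List;

public class ConnectSketch {
    // Stand-in for an attempt to connect to one server in the ensemble
    public interface Connector { boolean tryConnect(String hostPort); }

    // Try each server in the list in turn; return the first one that
    // accepts the connection, or null if every server is unavailable.
    public static String connect(List<String> servers, Connector c) {
        for (String hostPort : servers) {
            if (c.tryConnect(hostPort)) return hostPort;
        }
        return null;
    }
}
```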
Once the client establishes a connection to a ZooKeeper server, the server creates a new session for the client. Each session has a timeout, set by the application that creates the session. If the server receives no requests within the timeout period, the session expires. An expired session cannot be reopened, and any ephemeral znodes associated with it are lost. Sessions usually persist for a long time and session expiry is a rare event, but it is still important for an application to handle it.
Whenever a session has been idle for a certain amount of time, the client sends a ping request (also known as a heartbeat) to keep the session from expiring. Ping requests are sent automatically by the ZooKeeper client library, so our code need not do anything to maintain the session. The idle-time threshold should be low enough that a server failure is detected (reflected as a read timeout) and the client can reconnect to another server within the session timeout.
3.2 Failover
The ZooKeeper client can automatically fail over to another ZooKeeper server. The key point is that after another server takes over from the failed one, all sessions and their associated ephemeral znodes remain valid. During failover, the application receives disconnected and then connected notifications. Watch notifications cannot be delivered while the client is disconnected, but these delayed notifications are delivered once the client reconnects. Of course, if the application attempts an operation while the client is reconnecting to another server, the operation will fail, which underscores the importance of handling connection-loss exceptions in real ZooKeeper applications.
4. ZooKeeper Instance States
(1) ZooKeeper states
A ZooKeeper object passes through several different states during its life cycle. You can query its state at any time with the getState() method:
public States getState()
States is an enum whose values represent the different states of a ZooKeeper object (whatever the enum value, a ZooKeeper instance can only be in one state at a time). While attempting to connect to the ZooKeeper service, a newly created ZooKeeper instance is in the CONNECTING state; once the connection is established, it enters the CONNECTED state.
Figure 3.2 ZooKeeper state transitions
By registering a watcher object, a client using a ZooKeeper object can receive state-transition notifications. On entering the CONNECTED state, the watcher receives a WatchedEvent whose KeeperState value is SyncConnected.
(2) Watches and ZooKeeper states
ZooKeeper's watcher objects carry a double responsibility:
① they can be used to obtain notifications of ZooKeeper state changes;
② they can be used to obtain notifications of znode changes.
To monitor ZooKeeper state changes, use the default watcher passed to the ZooKeeper constructor.
To monitor znode changes, pass a dedicated watcher object to the relevant read operation, or use the boolean flag on the read operation to reuse the default watcher.
A ZooKeeper instance may disconnect from and reconnect to the ZooKeeper service, moving between the CONNECTED and CONNECTING states. If the connection drops, the watcher receives a Disconnected event. Note that these state transitions are initiated by the ZooKeeper instance itself, which automatically tries to reconnect when the connection is lost.
If close() is called, or the session expires (signaled by a KeeperState of Expired), the ZooKeeper instance moves to a third state, CLOSED. Once in the CLOSED state, the ZooKeeper object is no longer considered alive (this can be tested with the States enum's isAlive() method) and cannot be reused; the client must create a new ZooKeeper instance to reconnect to the ZooKeeper service.
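The state machine described above can be summarized with a small enum mirroring the shape of ZooKeeper.States; this is a simplified sketch for illustration, not the real enum, which has additional states.

```java
public class StatesSketch {
    // Simplified mirror of org.apache.zookeeper.ZooKeeper.States
    public enum States {
        CONNECTING, CONNECTED, CLOSED;

        // A ZooKeeper handle is alive unless it has been closed or expired
        public boolean isAlive() { return this != CLOSED; }
    }
}
```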