In a load-balancing server project I wrote earlier, backend nodes could only be configured at startup; a node could be removed once it was found to be down, but changes in node information could not be detected in real time, and in particular bringing a new node online required reconfiguring and restarting the server. The article on ZooKeeper reposted below gave me a new line of thought.
As a service grows, the number of machines behind it grows with it, and managing the service and address information by hand becomes very difficult. Relying on a single hardware load balancer, or on software solutions such as LVS or Nginx for routing and load balancing, also exposes a single point of failure: once the service-routing or load-balancing server goes down, every service that depends on it becomes unavailable.
What is needed, then, is a place where service information can be dynamically registered and looked up: a service configuration center that manages, in one place, each service name and its corresponding list of servers. When a service provider starts, it registers its service name and server address with the configuration center; a service consumer asks the configuration center for the list of machines backing the service it wants to call, and then uses a load-balancing algorithm to pick one of those servers to invoke. When a server goes down or is taken offline, the corresponding machine must be removed from the configuration center dynamically and the affected consumers notified, otherwise a consumer may fail by calling a stale address. In this scheme the consumer only queries the configuration center on its first call to a service; the result is cached locally, and subsequent calls use the locally cached address list without going back to the configuration center, until the service's address list changes (a machine comes online or goes offline). This decentralized structure, with no load-balancing device left in the call path, removes the earlier single point of failure and greatly reduces the pressure on the service configuration center.
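To make the consumer side concrete, here is a minimal sketch of the local caching and round-robin selection described above; the class and method names (LocalAddressCache, update, select) are my own illustrative choices, not something taken from the quoted text:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch: the consumer keeps the provider address list in a local cache
// and picks one provider per call with round-robin, so normal calls never have
// to go back to the service configuration center.
public class LocalAddressCache {

    private volatile List<String> addresses = Collections.emptyList();
    private final AtomicInteger counter = new AtomicInteger();

    // Called when the configuration center notifies us that the address list
    // changed (a machine came online or went offline).
    public void update(List<String> latest) {
        addresses = new ArrayList<>(latest);
    }

    // Round-robin selection over the locally cached list.
    public String select() {
        List<String> snapshot = addresses;
        if (snapshot.isEmpty()) {
            throw new IllegalStateException("no available provider");
        }
        int index = Math.floorMod(counter.getAndIncrement(), snapshot.size());
        return snapshot.get(index);
    }
}
```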
Based on ZooKeeper's persistent and ephemeral nodes, we can perceive the state of the backend servers (online, offline, down). The ZAB protocol used between the cluster members keeps the service configuration information consistent, and ZooKeeper's own fault tolerance and leader-election mechanism make it easy to scale out. Through ZooKeeper we get dynamic service registration, dynamic awareness of machines coming online and going offline, convenient expansion, fault tolerance, and a structure with no central load-balancing device, which removes the earlier single point of failure. The consumer goes to ZooKeeper for the latest service address list only when the configuration information is updated; the rest of the time it uses its local cache.
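As a rough sketch of how a provider could register itself with the plain ZooKeeper Java API, using a persistent node for the service name and an ephemeral node for its own address: the /services/&lt;serviceName&gt;/&lt;address&gt; path layout, class names, and session timeout below are assumptions made for illustration, not details from the quoted text.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Provider-side registration sketch: persistent nodes hold the service names,
// ephemeral child nodes hold the addresses of live providers.
public class ServiceRegistry {

    private final ZooKeeper zk;

    public ServiceRegistry(String connectString) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        this.zk = new ZooKeeper(connectString, 5000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
    }

    // Create a persistent node if it does not exist yet; it survives restarts.
    private void ensurePersistent(String path) throws Exception {
        if (zk.exists(path, false) == null) {
            try {
                zk.create(path, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } catch (KeeperException.NodeExistsException ignored) {
                // created concurrently by another provider
            }
        }
    }

    public void register(String serviceName, String address) throws Exception {
        String servicePath = "/services/" + serviceName;
        ensurePersistent("/services");
        ensurePersistent(servicePath);
        // The address node is ephemeral: it disappears automatically when this
        // provider's session with the ZooKeeper cluster is lost (crash or offline).
        zk.create(servicePath + "/" + address, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }
}
```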
Once a server is disconnected from the ZooKeeper cluster, its ephemeral node disappears, and by registering the corresponding watcher the service consumer is informed of the change in the provider's machine information almost immediately. Using these znode characteristics and the watcher mechanism, ZooKeeper serves as a configuration center for dynamically registering and obtaining service information, manages each service name and its corresponding server list in one place, and lets us perceive the backend server state (online, offline, down) almost in real time. Within the cluster the ZAB protocol keeps the service configuration information consistent, while ZooKeeper's own fault tolerance and leader-election mechanism guarantee convenient expansion.
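And a matching sketch of the consumer side: the first lookup fetches the child list and registers a watcher; because ZooKeeper watches are one-shot, the callback re-reads the list (setting the watch again) and refreshes the local cache whenever an ephemeral address node appears or disappears. The ServiceDiscovery name and the reuse of the LocalAddressCache sketch above are, again, illustrative assumptions rather than part of the quoted text.

```java
import java.util.List;

import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooKeeper;

// Consumer-side sketch: subscribe() reads the provider list under the service
// node, registers a watcher, and pushes the result into the local cache.
// Creation of the ZooKeeper handle is omitted here.
public class ServiceDiscovery {

    private final ZooKeeper zk;
    private final LocalAddressCache cache;

    public ServiceDiscovery(ZooKeeper zk, LocalAddressCache cache) {
        this.zk = zk;
        this.cache = cache;
    }

    public void subscribe(String serviceName) throws Exception {
        String servicePath = "/services/" + serviceName;
        // getChildren with a Watcher: the watch fires once, so it is set again
        // inside the callback each time the child list changes.
        List<String> addresses = zk.getChildren(servicePath, event -> {
            if (event.getType() == EventType.NodeChildrenChanged) {
                try {
                    subscribe(serviceName);   // re-read the list and re-watch
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        cache.update(addresses);
    }
}
```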
-- from "Large-Scale Distributed Website Architecture: Design and Practice"
Reposted from: http://blog.csdn.net/yusiguyuan/article/details/47682537