In a distributed architecture, service governance is an important concern. In a distributed cluster without service governance, service relationships are maintained by hand or through static configuration, and every change in those relationships, or every newly added service, makes operation troublesome and error-prone.
In an earlier C++ project we used ZooKeeper for service governance. It maintained the relationships between services well, but it was cumbersome to use. More and more new projects now use Consul instead, and in most respects it compares favorably with ZooKeeper. After a few days of research, here is a summary.
Comparison of ZooKeeper and Consul
In terms of implementation language, ZooKeeper is written in Java and requires a Java runtime to be deployed, while Consul is written in Go and ships as a single executable with all dependencies compiled in, so it is essentially plug and play.
In terms of deployment, ZooKeeper generally uses an odd number of nodes so that a simple majority election is possible. Consul is deployed as server nodes and client nodes (distinguished by startup parameters): the server nodes handle leader election and data consistency, while the client nodes run on the service machines and act as the local agents through which service programs access the Consul interface.
In terms of consensus protocols, ZooKeeper uses its own ZAB protocol, while Consul uses the more widely adopted Raft.
ZooKeeper does not support multiple data centers, whereas Consul supports multi-data-center deployments across machine rooms, which effectively avoids the situation where a single data center failure makes services unreachable.
In terms of connection model, the ZooKeeper client API keeps a long-lived connection to the server; the service program must manage that connection, register callback functions to handle ZooKeeper events, and maintain the directory structure on ZooKeeper (for example, keeping ephemeral nodes alive). Consul exposes service information over DNS or HTTP; there is no proactive notification, so clients need to poll for updates, as shown below.
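For reference, a service registered in Consul can be queried either over the HTTP API or over the built-in DNS interface. The service name "worker" below is just an illustration, and the default ports 8500 (HTTP) and 8600 (DNS) are assumed:
curl http://localhost:8500/v1/health/service/worker?passing
dig @127.0.0.1 -p 8600 worker.service.consul SRV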
In terms of tooling, ZooKeeper comes with a cli_mt command-line tool that lets you log in to the ZooKeeper server and manage the directory structure manually. Consul comes with a web UI management console that can be enabled with a startup parameter, so the information can be viewed directly in a browser.
Consul-related resources
Executable program Download address: Https://www.consul.io/downloa ...
Official Note document: Https://www.consul.io/docs/in ...
API Documentation: https://godoc.org/github.com/...
Golang API code location: github.com/hashicorp/consul/api
On Linux, just download the consul executable and copy it to /usr/local/bin; it can then be used without any additional configuration. A quick sanity check is shown below.
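To confirm the binary is on the PATH and working, run the version subcommand (the exact output depends on the release you downloaded):
consul version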
Server node startup command:
consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=service-center -bind=192.168.0.2 -client 0.0.0.0 -ui -config-dir /etc/consul.d/
Parameter description:
-server: start consul as a server node
-bootstrap-expect 1: how many server nodes are expected before the cluster bootstraps, e.g. 3 for a three-server cluster
-data-dir: the directory where Consul stores its data
-node: the node name
-bind: the IP address the agent binds to
-client 0.0.0.0: the address the client interfaces (HTTP, DNS, etc.) are bound to; 0.0.0.0 makes them reachable from other hosts
-ui: enable the web UI management tool
-config-dir: the directory of service configuration files (all .json files in this directory are read as service definitions); see the example definition after this list
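As a minimal sketch of such a service definition that could be dropped into /etc/consul.d/ (the name, port, and check URL here are illustrative placeholders, not values from this article):
{
  "service": {
    "name": "worker",
    "port": 4300,
    "check": {
      "http": "http://127.0.0.1:54321/status",
      "interval": "5s"
    }
  }
}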
Testing Consul's service discovery mechanism
To test Consul's approach to service governance, the following scenario is set up:
A manager-type service needs to manage a number of worker-type services according to their load and exchange business traffic with them; the workers, in turn, need to know the address of the internal service interface exposed by the manager in order to interact with it. In other words, the manager and the workers each need to know the other's communication address.
The following rules are laid down for the setup:
Both the manager and the workers register their own service with Consul, so that the other party can discover their service address (IP and port).
Consul's key/value store is used for load data: each worker periodically writes its load information to its own key, and the manager reads the load information from the workers' keys and keeps a local copy in sync.
Service type rule: the manager's service type is the string "manager", and each worker's service type is the string "worker".
Service registration ID rule: service type-service IP, e.g. manager-192.168.0.2
Key building rule: service type/ip:port, e.g. worker/192.168.0.2:5400
Stored data format: JSON, e.g. {"load": 10, "ts": 1482828232}
Golang test program
package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"os"
	"os/signal"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/hashicorp/consul/api"
)

type ServiceInfo struct {
	ServiceID string
	IP        string
	Port      int
	Load      int
	Timestamp int // when the load was last updated
}

type ServiceList []ServiceInfo

type KVData struct {
	Load      int `json:"load"`
	Timestamp int `json:"ts"`
}

var (
	services_map    = make(map[string]ServiceList)
	service_locker  = new(sync.Mutex)
	consul_client   *api.Client
	my_service_id   string
	my_service_name string
	my_kv_key       string
)

func CheckErr(err error) {
	if err != nil {
		log.Printf("error: %v", err)
		os.Exit(1)
	}
}

// StatusHandler answers Consul's HTTP health check.
func StatusHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Println("check status.")
	fmt.Fprint(w, "status ok!")
}

func StartService(addr string) {
	http.HandleFunc("/status", StatusHandler)
	fmt.Println("start listen...")
	err := http.ListenAndServe(addr, nil)
	CheckErr(err)
}

func main() {
	var status_monitor_addr, service_name, service_ip, consul_addr, found_service string
	var service_port int
	flag.StringVar(&consul_addr, "consul_addr", "localhost:8500", "host:port of the consul agent")
	flag.StringVar(&status_monitor_addr, "monitor_addr", "127.0.0.1:54321", "host:port of the service status monitor interface")
	flag.StringVar(&service_name, "service_name", "worker", "name of the service")
	flag.StringVar(&service_ip, "ip", "127.0.0.1", "service serve ip")
	flag.StringVar(&found_service, "found_service", "worker", "name of the target service to discover")
	flag.IntVar(&service_port, "port", 4300, "service serve port")
	flag.Parse()

	my_service_name = service_name

	DoRegistService(consul_addr, status_monitor_addr, service_name, service_ip, service_port)
	go DoDiscover(consul_addr, found_service)
	go StartService(status_monitor_addr)
	go WaitToUnRegistService()
	go DoUpdateKeyValue(consul_addr, service_name, service_ip, service_port)
	select {}
}

// DoRegistService registers this service (with an HTTP health check) in Consul.
func DoRegistService(consul_addr string, monitor_addr string, service_name string, ip string, port int) {
	my_service_id = service_name + "-" + ip
	var tags []string
	service := &api.AgentServiceRegistration{
		ID:      my_service_id,
		Name:    service_name,
		Port:    port,
		Address: ip,
		Tags:    tags,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://" + monitor_addr + "/status",
			Interval: "5s",
			Timeout:  "1s",
		},
	}
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	consul_client = client
	if err := consul_client.Agent().ServiceRegister(service); err != nil {
		log.Fatal(err)
	}
	log.Printf("registered service %q in consul with tags %q", service_name, strings.Join(tags, ","))
}

// WaitToUnRegistService deregisters the service when the process is interrupted.
func WaitToUnRegistService() {
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, os.Interrupt, os.Kill)
	<-quit
	if consul_client == nil {
		return
	}
	if err := consul_client.Agent().ServiceDeregister(my_service_id); err != nil {
		log.Fatal(err)
	}
}

// DoDiscover polls Consul every 5 seconds for the target service.
func DoDiscover(consul_addr string, found_service string) {
	t := time.NewTicker(time.Second * 5)
	for {
		select {
		case <-t.C:
			DiscoverServices(consul_addr, true, found_service)
		}
	}
}

func DiscoverServices(addr string, healthyOnly bool, service_name string) {
	consulConf := api.DefaultConfig()
	consulConf.Address = addr
	client, err := api.NewClient(consulConf)
	CheckErr(err)

	services, _, err := client.Catalog().Services(&api.QueryOptions{})
	CheckErr(err)

	fmt.Println("--do discover ---:", addr)

	var sers ServiceList
	for name := range services {
		servicesData, _, err := client.Health().Service(name, "", healthyOnly, &api.QueryOptions{})
		CheckErr(err)
		for _, entry := range servicesData {
			if service_name != entry.Service.Service {
				continue
			}
			for _, health := range entry.Checks {
				if health.ServiceName != service_name {
					continue
				}
				fmt.Println("health nodeid:", health.Node, " service_name:", health.ServiceName,
					" service_id:", health.ServiceID, " status:", health.Status,
					" ip:", entry.Service.Address, " port:", entry.Service.Port)

				var node ServiceInfo
				node.IP = entry.Service.Address
				node.Port = entry.Service.Port
				node.ServiceID = health.ServiceID

				// get the load data from the kv store
				s := GetKeyValue(service_name, node.IP, node.Port)
				if len(s) > 0 {
					var data KVData
					err = json.Unmarshal([]byte(s), &data)
					if err == nil {
						node.Load = data.Load
						node.Timestamp = data.Timestamp
					}
				}
				fmt.Println("service node updated ip:", node.IP, " port:", node.Port,
					" serviceid:", node.ServiceID, " load:", node.Load, " ts:", node.Timestamp)
				sers = append(sers, node)
			}
		}
	}

	service_locker.Lock()
	services_map[service_name] = sers
	service_locker.Unlock()
}

// DoUpdateKeyValue periodically writes this service's load data into the kv store.
func DoUpdateKeyValue(consul_addr string, service_name string, ip string, port int) {
	t := time.NewTicker(time.Second * 5)
	for {
		select {
		case <-t.C:
			StoreKeyValue(consul_addr, service_name, ip, port)
		}
	}
}

func StoreKeyValue(consul_addr string, service_name string, ip string, port int) {
	my_kv_key = my_service_name + "/" + ip + ":" + strconv.Itoa(port)

	var data KVData
	data.Load = rand.Intn(100) // simulated load value
	data.Timestamp = int(time.Now().Unix())
	bys, _ := json.Marshal(&data)

	kv := &api.KVPair{
		Key:   my_kv_key,
		Flags: 0,
		Value: bys,
	}
	_, err := consul_client.KV().Put(kv, nil)
	CheckErr(err)
	fmt.Println("store data key:", kv.Key, " value:", string(bys))
}

func GetKeyValue(service_name string, ip string, port int) string {
	key := service_name + "/" + ip + ":" + strconv.Itoa(port)
	kv, _, err := consul_client.KV().Get(key, nil)
	if kv == nil {
		return ""
	}
	CheckErr(err)
	return string(kv.Value)
}
The program uses command-line parameters to control which service role it starts as and which type of service it needs to discover. The consul_addr passed in is the address of the local Consul client agent, normally localhost:8500. Since Consul integrates health checking, the service must expose a check endpoint; here a small HTTP service is started to respond to the check.
Consul cluster startup
Start 3 Consul servers:
consul agent -server -bootstrap-expect 3 -data-dir /tmp/consul -node=server001 -bind=10.2.1.54
consul agent -server -data-dir /tmp/consul -node=server002 -bind=10.2.1.83 -join 10.2.1.54
consul agent -server -data-dir /tmp/consul -node=server003 -bind=10.2.1.80 -join 10.2.1.54
server001 through server003 form a Consul cluster of 3 server nodes. server001 is started first with -bootstrap-expect 3, declaring that 3 server nodes are expected to form the cluster; server002 and server003 are told to join (-join) server001 when they start. Cluster membership can then be checked with the command shown below.
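Once the three agents are up, membership can be verified from any node with the built-in members subcommand (node names and addresses will match the example above; the output format may vary by version):
consul members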
Start the manager:
consul agent -data-dir /tmp/consul -node=manager -bind=10.2.1.92 -join 10.2.1.54
./service -consul_addr=127.0.0.1:8500 -monitor_addr=127.0.0.1:54321 -service_name=manager -ip=10.2.1.92 -port=4300 -found_service=worker
Start 2 workers:
consul agent -data-dir /tmp/consul -node=worker001 -bind=10.2.1.93 -join 10.2.1.54
./service -consul_addr=127.0.0.1:8500 -monitor_addr=127.0.0.1:54321 -service_name=worker -ip=10.2.1.93 -port=4300 -found_service=manager
consul agent -data-dir /tmp/consul -node=worker002 -bind=10.2.1.94 -join 10.2.1.54
./service -consul_addr=127.0.0.1:8500 -monitor_addr=127.0.0.1:54321 -service_name=worker -ip=10.2.1.94 -port=4300 -found_service=manager
The ./service program is the test program compiled from the code above.
This builds a Consul cluster of 3 server nodes plus a distributed setup of 1 manager and 2 workers. The services can discover each other, and the manager can obtain each worker's load, so the two sides interoperate.
Summary
Using Consul's service registration and discovery mechanism together with its key/value store, we implemented service discovery and let the manager obtain the workers' load data. Because Consul's discovery mechanism itself cannot carry much extra data, data sharing has to go through the key/value store (in ZooKeeper, by contrast, data can be stored directly on the nodes). If the business has further requirements, the stored data structure can easily be extended, as sketched below.
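As a sketch only (the extra fields are hypothetical and not part of the test program above), extending the shared data is just a matter of adding fields to the struct that gets marshaled into the key's value:
type KVData struct {
	Load      int    `json:"load"`
	Timestamp int    `json:"ts"`
	QueueLen  int    `json:"queue_len"` // hypothetical: number of pending tasks
	Version   string `json:"version"`   // hypothetical: build version of the worker
}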
The test program above covers service registration, periodic updates of the stored data, service discovery, and data retrieval, yet the code is much shorter than an equivalent ZooKeeper-based implementation, because with ZooKeeper you must build and maintain the directory tree, register for and handle ZooKeeper events, and monitor the ZooKeeper connection, taking care of housekeeping tasks such as reconnecting and rebuilding state.
Overall, Consul is much easier to use than ZooKeeper. It is worth trying in new projects, especially Go projects, where it also keeps the technology stack more uniform.