Service Discovery System: Consul

1. What is Consul?

Consul is service management software.

It supports distributed, highly available service discovery and configuration sharing across multiple data centers.

Consul supports health checks and key/value storage.

It uses the Raft consensus algorithm to ensure consistency and high service availability.

The gossip protocol is used for membership management and message broadcasting, and ACL access control is supported.

ACLs are widely used in routers. An ACL is a packet-filter-based traffic control technique: the access control list matches packets on source address, destination address, and port number, and specifies whether matching packets are allowed to pass.

Gossip is a peer-to-peer protocol whose main purpose is decentralization.

The protocol mimics the way rumors spread among people. A seed node starts the process: each second it randomly sends its node list and messages to other nodes, which propagate them in turn. Under this mode of transmission, any newly added node quickly becomes known to the whole network.

What is service registration?

A service registers its location information with a central registry node. It generally registers its host IP address and port number, and sometimes also authentication information for access, the protocol it uses, a version number, and environment details.
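As a concrete illustration, a service can register itself against Consul's HTTP API roughly like this (the service name, port, and agent address are assumptions for the example, not values from the article):

$ curl -X PUT -d '{"Name": "web", "Port": 80, "Tags": ["rails"]}' \
    http://localhost:8500/v1/agent/service/register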

What is service discovery?

Service discovery allows an application or component to discover information about its runtime environment and about other applications or components. A service discovery tool can be configured to separate the actual container from its runtime configuration. Common configuration information includes IP addresses, port numbers, and names.

When a service runs on multiple host nodes, the client needs a way to obtain the correct IP address and port.

Traditionally, when a service existed on multiple host nodes, service information was registered through static configuration.

When a complex system needs strong scalability and services are replaced frequently, dynamic service registration and discovery become important for avoiding service interruption.

Related open-source projects include ZooKeeper, Doozer, and etcd; all are strongly consistent and are mainly used for coordination between services and for service registration.

What is a strongly consistent protocol?

Reads and writes to a storage object happen in a sequence. After the object is updated, all subsequent reads return the latest value: if process A updates the object, the storage system guarantees that subsequent reads by processes A, B, and C all return the new value. Common implementations of the strong consistency model include master-slave synchronous replication and quorum replication. For example, with quorum replication over N = 3 replicas, requiring W = 2 acknowledged writes and R = 2 reads satisfies W + R > N, so every read quorum overlaps the latest write.

2. Specific application scenarios of Consul

1. Docker and CoreOS instance registration and configuration sharing

2. Vitess clusters

3. Configuration sharing for SaaS applications

4. Integration with the confd service to dynamically generate configuration files for nginx and HAProxy, as sketched below
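For the confd integration, a minimal sketch of pointing confd at a local Consul agent; the flags follow confd's documented CLI, but the file paths, template names, and keys are hypothetical:

# template resource, e.g. /etc/confd/conf.d/nginx.toml (hypothetical paths)
[template]
src = "nginx.conf.tmpl"
dest = "/etc/nginx/nginx.conf"
keys = ["/web"]
reload_cmd = "service nginx reload"

# run confd once against the Consul backend
$ confd -onetime -backend consul -node 127.0.0.1:8500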

3. Advantages

1. Consul uses the Raft algorithm to ensure consistency, which is more straightforward than the Paxos-style algorithm used by ZooKeeper.

Raft divides the consensus process into three subproblems: leader election, log replication, and safety (committing entries).

Each server is in one of three states: leader, follower, or candidate. Under normal circumstances, exactly one server is the leader and all the others are followers. Servers communicate with each other via RPC messages. Followers never initiate RPCs; the leader, and candidates during leader election, actively initiate them.

A leader is elected to manage log replication. The leader receives log entries from clients, replicates them to the other machines in the cluster, and tells them when it is safe to apply the entries to their state machines. For example, the leader can decide where to place new entries without asking the other servers, and data always flows from the leader to the other machines. A leader may fail or become disconnected from the others; in that case a new leader is elected.

http://www.jdon.com/artichect/raft.html

http://blog.csdn.net/cszhouwei/article/details/38374603

 

2. Multiple data centers are supported. Intranet (LAN) and Internet (WAN) traffic listen on different ports, which avoids a single point of failure (SPOF).

ZooKeeper and similar tools do not provide multi-data-center support.
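For reference, Consul's default ports (for the 0.x releases this article describes; treat the exact numbers as version-dependent) separate these concerns: 8300 for server RPC, 8301 for LAN gossip, 8302 for WAN gossip, 8500 for the HTTP API, and 8600 for DNS.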

 

3. Support for health checks

4. Provides a web interface

5. Support for HTTP and DNS interfaces
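For example, the same catalog can be queried over either interface (assuming an agent on localhost with default ports):

$ curl http://localhost:8500/v1/catalog/services
$ dig @127.0.0.1 -p 8600 consul.service.consul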

 

4. Installation

On Mac OS X, Consul can be installed with Homebrew Cask:

$ brew cask install consul

Installing Homebrew Cask itself is also straightforward: http://brew.sh/#install

5. Test and run consul

Test the installation:

$ consul

Run Consul as a server:

$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul

View the nodes of the Consul cluster:

$ consul members

Send an HTTP request to the Consul server:

$ curl localhost:8500/v1/catalog/nodes
[{"Node":"Armons-MacBook-Air","Address":"10.1.10.38"}]

 

6. Register a service

1. Create the folder /etc/consul.d.

The .d suffix indicates that the directory holds a set of configuration files.

2. Write a service configuration file into the folder, for example:

$ echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' > /etc/consul.d/web.json

3. Restart Consul, passing it the path of the configuration directory:

$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -config-dir /etc/consul.d

4. Query the service's IP address and port.

DNS: dig @127.0.0.1 -p 8600 web.service.consul SRV

HTTP: curl http://localhost:8500/v1/catalog/service/web

5. Update

You can add, delete, modify, and query service configuration files through the HTTP API. After updating a file, send the agent a SIGHUP signal (or run consul reload) to make the change take effect.
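A sketch of such an update cycle, reusing the web service from the earlier example (the changed port is illustrative):

# edit the service definition, then tell the agent to reload it
$ echo '{"service": {"name": "web", "tags": ["rails"], "port": 8080}}' > /etc/consul.d/web.json
$ consul reload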

7. Create a cluster

A Consul agent is a standalone program that runs as a long-lived daemon on every node in the Consul cluster.

Starting a Consul agent by itself only starts an isolated node; to learn about the other nodes in the cluster, the agent must be joined to the cluster.

An agent runs in one of two modes: server or client. Servers take part in the consensus protocol: they ensure consistency and availability (tolerating the failure of some servers), respond to RPCs, and synchronize data to the other server nodes.

A client communicates with the servers, forwarding RPCs to them. It keeps only a small amount of state and is very lightweight; it is effectively stateless.

In addition to choosing server or client mode and a data directory, it is best to set a node name and bind IP address.
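For instance, a server and a client agent might be started like this (the node names and addresses are placeholders, not values from the article):

# server agent
$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul \
    -node=server1 -bind=10.0.0.10

# client agent, joining the server at startup
$ consul agent -data-dir /tmp/consul -node=client1 \
    -bind=10.0.0.11 -join 10.0.0.10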

A classic Consul architecture diagram shows two gossip pools:

The LAN gossip pool contains all nodes on the same LAN, both servers and clients; this is essentially one data center (DC).

The WAN gossip pool generally contains only servers; it communicates across data centers over the Internet or a wide area network.

The leader server is responsible for handling all RPC requests and queries, so when other servers receive an RPC request from a client, they forward it to the leader.
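To link data centers, a server in one DC joins the WAN pool of a server in another (the address is a placeholder):

$ consul join -wan 10.1.1.10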

 

Gossip brings several benefits. First, there is no need to configure client and server addresses; discovery happens automatically. Second, the work of detecting node failures is not placed on the servers but is distributed, which makes failure detection more scalable than naive heartbeat schemes. Third, it serves as a messaging layer for notifying about important events such as leader election.

Install Vagrant and run vagrant init to initialize the Vagrant environment.

vagrant up starts a virtual node.

vagrant status shows the state of the VMs, including their names.

vagrant ssh vm_name logs on to a VM node.

In bootstrap mode, a node can appoint itself leader without an election. Start one server in bootstrap mode, then start the other servers in non-bootstrap mode. Finally, stop the first server and restart it in non-bootstrap mode, so that the servers can elect a leader automatically.

Configure the Consul agent on both VMs as follows:

$ vagrant ssh n1

vagrant@n1:~$ consul agent -server -bootstrap-expect 1 \
    -data-dir /tmp/consul -node=agent-one -bind=172.20.20.10

$ vagrant ssh n2
vagrant@n2:~$ consul agent -data-dir /tmp/consul -node=agent-two \
    -bind=172.20.20.11

At this point, querying with consul members on each node shows that the two Consul nodes are independent and know nothing about each other.

Add the client to the server's cluster:

vagrant@n1:~$ consul join 172.20.20.11

Querying with consul members now shows that a node has been added.

Manually joining every new node is troublesome; a better method is to configure nodes to join the cluster automatically:

consul agent -atlas-join \
    -atlas=ATLAS_USERNAME/infrastructure \
    -atlas-token="YOUR_ATLAS_TOKEN"

Leave Cluster

Pressing Ctrl-C, or killing the specified agent process, removes the agent from the cluster.
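Consul also has a command for leaving gracefully, so the node is marked as having left rather than failed:

$ consul leave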

To run Consul in production, at least 3 to 5 servers are recommended. The startup procedure is the bootstrap sequence described above: start one server in bootstrap mode (so it can appoint itself leader without an election), start the other servers in non-bootstrap mode, then stop the first server and restart it in non-bootstrap mode so that the servers elect a leader automatically.

8. Query the health status

Use the HTTP interface to list checks in the critical state:

$ curl http://localhost:8500/v1/health/state/critical

A failed node will not be returned by DNS queries:

$ dig @127.0.0.1 -p 8600 web.service.consul
9. K/V storage

Consul also provides key/value storage. For example, query all K/V pairs:

$ curl -v http://localhost:8500/v1/kv/?recurse

Save a record whose key is web/key2, whose flags value is 42, and whose value is test:

$ curl -X PUT -d 'test' http://localhost:8500/v1/kv/web/key2?flags=42
true

Delete a record:

$ curl -X DELETE http://localhost:8500/v1/kv/web/sub?recurse

Update a value with check-and-set:

$ curl -X PUT -d 'newval' http://localhost:8500/v1/kv/web/key1?cas=97
true

Block on an index (wait for an update):

$ curl "http://localhost:8500/v1/kv/web/key2?index=101&wait=5s"
Result: [{"CreateIndex":98,"ModifyIndex":101,"Key":"web/key2","Flags":42,"Value":"dGVzdA=="}]
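The returned Value is base64-encoded; "dGVzdA==" decodes to "test":

$ echo 'dGVzdA==' | base64 --decode
test

A typical watch loop first reads the key to learn its current ModifyIndex, then issues the blocking query with that index; the request returns as soon as the value changes, or after the wait time elapses.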

For more details about the consul command:
http://m.oschina.net/blog/353392
10. Recovering from an outage

When a server becomes unavailable, the options are:

1. Restore the failed server and bring it back online.

2. Replace the failed server with a new one that uses the same IP address as the original.

3. Add a new server whose IP address need not match the original. Steps: stop all Consul servers, remove the failed server's address from raft/peers.json, restart the remaining servers, and join the new server to the cluster.
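raft/peers.json lives in the agent's data directory and is a JSON array of server addresses; a sketch with placeholder addresses and the default server port 8300:

["10.0.0.10:8300", "10.0.0.11:8300"]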

11. How do other service discovery tools work?

Each service discovery tool provides an API that lets components set or look up data. For each component, the service discovery address is either hard-coded inside the program or container, or provided as a parameter at runtime. Data is generally represented as key/value pairs, accessed over standard HTTP.

A service discovery tool works as follows: when a service comes online, it registers its own information with the tool, recording everything a dependent component needs in order to use it. For example, a MySQL database service would register the IP address and port it runs on, and, if necessary, the username and password needed to log in.

When a consumer of the service comes online, it queries a preconfigured endpoint for the service's information and then interacts with the components it needs based on what it finds. A load balancer is a good example: by querying the discovery service it can find the backend nodes available to receive traffic and adjust its configuration accordingly.

This moves configuration information out of the container. One advantage is that it makes component containers more flexible, not tied to specific configuration; another is that it makes it easy to interact with new service instances, since configuration can be adjusted dynamically by the management tool.
