I. Introduction
"A highly-available key value store for shared configuration and service discovery."
etcd is a distributed key-value store developed by CoreOS that uses the Raft protocol internally as its consensus algorithm. As a highly available key-value store for shared configuration and service discovery, etcd has the following features:
1) Simple: easy to install and configure, and it exposes an HTTP API for interaction (see the curl sketch after this list), which makes it easy to use
2) Secure: supports SSL certificate verification
3) Fast: according to the official benchmarks, a single instance handles 2k+ read operations per second and 1k write operations per second
4) Reliable: uses the Raft algorithm to provide availability and consistency for the distributed data
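To illustrate the HTTP API mentioned in point 1, a key can be written and read back with plain curl against etcd's v2 keys endpoint. This is just a sketch and assumes an etcd instance listening on the default client port on localhost; the key name is arbitrary.

# write a key through the v2 HTTP API
curl -s http://127.0.0.1:2379/v2/keys/message -XPUT -d value="hello"
# read it back
curl -s http://127.0.0.1:2379/v2/keys/message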
There are three main ways for etcd to build a highly available cluster (a sketch of the corresponding bootstrap flags follows this list):
1) Static discovery: the members of the etcd cluster are known ahead of time, and each etcd node is started with the peer addresses specified directly.
2) etcd dynamic discovery: an existing etcd cluster acts as the data interaction point; when the new cluster is brought up, its members discover each other through that existing cluster's discovery mechanism.
3) DNS dynamic discovery: member address information is obtained through DNS (SRV record) queries.
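For orientation, the three modes map roughly onto the following etcd bootstrap flags. This is only a sketch with placeholder values; this article uses the first form, expressed through /etc/etcd/etcd.conf rather than command-line flags.

# static discovery: all peer addresses are listed up front
etcd --name node1 --initial-cluster "node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380" --initial-cluster-state new
# etcd dynamic discovery: membership is negotiated through an existing cluster / discovery service URL
etcd --name node1 --discovery https://discovery.etcd.io/<token>
# DNS dynamic discovery: members are located via DNS SRV records under a domain
etcd --name node1 --discovery-srv example.com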
This article mainly covers the first method; the other two will be introduced in follow-up articles. (For a direct Docker installation, a cluster can also be built from the quay.io/coreos/etcd image.)
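As a rough illustration of that Docker route, a single member could be started roughly as follows. This is only a sketch: the image tag is assumed, whether the image's default entrypoint launches the etcd binary directly varies between image versions, and the IPs simply reuse the ones from the environment section below.

docker run -d --name etcd-node1 \
  -p 2379:2379 -p 2380:2380 \
  quay.io/coreos/etcd:v2.3.8 \
  --name node1 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --listen-client-urls http://0.0.0.0:2379 \
  --initial-advertise-peer-urls http://192.168.7.163:2380 \
  --advertise-client-urls http://192.168.7.163:2379 \
  --initial-cluster "node1=http://192.168.7.163:2380,node2=http://192.168.7.57:2380,etcd2=http://192.168.7.58:2380" \
  --initial-cluster-state new \
  --initial-cluster-token mritd-etcd-cluster

The same command would be repeated on the other two hosts, with --name and the advertise URLs adjusted for each member.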
II. Environment
Three virtual machines running CentOS 7 are used; the corresponding node names and IP addresses are as follows:
node1:192.168.7.163
node2:192.168.7.57
etcd2:192.168.7.58
First, add this information to the hosts file on all three hosts: edit /etc/hosts and append the following entries:
192.168.7.163 node1
192.168.7.57 node2
192.168.7.58 etcd2
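To confirm that the entries resolve as expected, a quick check such as the following can be run on each of the three hosts:

# getent hosts node1 node2 etcd2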
III. Installing and Configuring etcd
# yum install etcd -y
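Optionally, confirm which version the repository installed (versions differ between yum repositories):

# rpm -q etcd
# etcd --version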
The etcd package installed via yum places its default configuration file at /etc/etcd/etcd.conf. The configuration for each of the three nodes is shown below; pay attention to the values that differ between nodes (options that remain commented out do not need to be changed).
node1

# [member]
# Node name
ETCD_NAME=node1
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# URLs to listen on for traffic from the other etcd members
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# URLs to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
# Peer URLs advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node1:2380"
# if you use a different ETCD_NAME (e.g. test), set the corresponding entry in ETCD_INITIAL_CLUSTER to that name, i.e. "test=http://..."
# Initial cluster member list
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380"
# Initial cluster state; "new" means bootstrapping a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# Client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://node1:2379,http://node1:4001"
node2

# [member]
ETCD_NAME=node2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node2:2380"
# if you use a different ETCD_NAME (e.g. test), set the corresponding entry in ETCD_INITIAL_CLUSTER to that name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://node2:2379,http://node2:4001"
etcd2

# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd2:2380"
# if you use a different ETCD_NAME (e.g. test), set the corresponding entry in ETCD_INITIAL_CLUSTER to that name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd2:2379,http://etcd2:4001"
After the configuration has been changed, start the etcd service on each node:
# systemctl restart etcd
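It is usually also worth enabling the service so it starts on boot, and checking that it came up cleanly (standard systemd commands, nothing etcd-specific):

# systemctl enable etcd
# systemctl status etcd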
IV. Testing and Verification
# etcdctl set testdir/testkey0 0
0
# etcdctl set testdir/testkey1 1
1
# etcdctl set testdir/testkey2 2
2
# etcdctl ls /
/testdir
# etcdctl ls testdir
/testdir/testkey0
/testdir/testkey1
/testdir/testkey2
# etcdctl get testdir/testkey2
2
# etcdctl member list
377aa10974e8238d: name=node1 peerURLs=http://node1:2380 clientURLs=http://node1:2379,http://node1:4001 isLeader=true
9de2d4fdbbd835b6: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://etcd2:2379,http://etcd2:4001 isLeader=false
f75ed833c7cbbe65: name=node2 peerURLs=http://node2:2380 clientURLs=http://node2:2379,http://node2:4001 isLeader=false
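Besides listing the members, overall cluster health can be checked either with etcdctl or over the HTTP API; the hostname below is just one of the nodes configured above:

# etcdctl cluster-health
# curl -s http://node1:2379/health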