I tried out a few of the more popular SDN solutions and found flannel relatively easy to use, so here is a brief write-up.
Three VirtualBox machines are used:
Genesis: inet 192.168.99.103/24 brd 192.168.99.255 scope global dynamic enp0s3
Exodus: inet 192.168.99.105/24 brd 192.168.99.255 scope global dynamic enp0s3
Leviticus: inet 192.168.99.106/24 brd 192.168.99.255 scope global dynamic enp0s3
The virtual machine information is as follows:
[root@localhost yum.repos.d]# uname -mars
Linux localhost.localdomain 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost yum.repos.d]# cat /etc/*-release
CentOS Linux release 7.3.1611 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.3.1611 (Core)
CentOS Linux release 7.3.1611 (Core)
[root@localhost yum.repos.d]# docker version
Client:
 Version:      1.12.5
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   7392c3b
 Built:        Fri Dec 16 02:23:59 2016
 OS/Arch:      linux/amd64
Pick two of the machines, start a container on each, and run ifconfig inside:
[root@localhost ~]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1016 (1016.0 B)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The two containers turn out to have identical parameters (both get 172.17.0.2); in the default bridge mode there is no cross-host connectivity, and host mode is not recommended.
Install
First run yum install -y etcd flannel; if it completes without problems, so much the better.
etcd 3.x supports the --config-file parameter and can also be built from source if required (Golang 1.6+ is needed).
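For reference, a minimal sketch of a source build (the repository URL and the ./build script reflect the upstream etcd repo at the time of writing; verify against the release you actually want):

git clone https://github.com/coreos/etcd.git   # upstream etcd source
cd etcd
./build                                        # builds binaries into ./bin
./bin/etcd --version                           # sanity check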
Let's start with etcd, which is, put simply, a distributed key-value store.
There are three ways to bootstrap an etcd cluster:
Static
ETCD Discovery
DNS Discovery
DNS discovery relies on SRV records; since we are not setting up a DNS service here, only the static and etcd discovery approaches are explained briefly below.
Static
The parameters can be passed on the command line at startup or written to a configuration file; the default configuration file is /etc/etcd/etcd.conf.
The configuration on Genesis is as follows:
ETCD_NAME=genesis
ETCD_DATA_DIR="/var/lib/etcd/genesis"
ETCD_LISTEN_PEER_URLS="http://192.168.99.103:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.103:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.103:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etct-fantasy"
ETCD_INITIAL_CLUSTER="exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380"
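The same settings could equivalently be passed as command-line flags. A sketch (not taken from the original setup; the flag names follow the etcd documentation):

etcd --name genesis \
  --data-dir /var/lib/etcd/genesis \
  --listen-peer-urls http://192.168.99.103:2380 \
  --listen-client-urls http://192.168.99.103:2379,http://127.0.0.1:2379 \
  --initial-advertise-peer-urls http://192.168.99.103:2380 \
  --advertise-client-urls http://192.168.99.103:2379 \
  --initial-cluster-state new \
  --initial-cluster-token etct-fantasy \
  --initial-cluster exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380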
The configuration on Exodus is as follows:
ETCD_NAME=exodus
ETCD_DATA_DIR="/var/lib/etcd/exodus"
ETCD_LISTEN_PEER_URLS="http://192.168.99.105:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.105:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.105:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.105:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etct-fantasy"
ETCD_INITIAL_CLUSTER="exodus=http://192.168.99.105:2380,genesis=http://192.168.99.103:2380"
How you start etcd is a matter of preference; if you intend to start it with systemctl, note that the contents of /usr/lib/systemd/system/etcd.service may not be what you expect.
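For example, a quick way to inspect the unit before relying on it and then start the service (a minimal sketch; adjust to your own setup):

cat /usr/lib/systemd/system/etcd.service   # check which variables the unit actually passes to etcd
systemctl daemon-reload
systemctl start etcd
systemctl status etcd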
After startup, check the cluster health and, while we are at it, see which members there are:
[root@localhost etcd]# etcdctl cluster-health
member 7a4f27f78a05e755 is healthy: got healthy result from http://192.168.99.103:2379
failed to check the health of member 8e8718b335c6c9a2 on http://192.168.99.105:2379: Get http://192.168.99.105:2379/health: dial tcp 192.168.99.105:2379: i/o timeout
member 8e8718b335c6c9a2 is unreachable: [http://192.168.99.105:2379] are all unreachable
cluster is healthy
Hint "member unreachable", seems to be Exodus Firewall stopped, we first rough a bit.
[root@localhost etcd]# systemctl stop firewalld
[root@localhost etcd]# etcdctl cluster-health
member 7a4f27f78a05e755 is healthy: got healthy result from http://192.168.99.103:2379
member 8e8718b335c6c9a2 is healthy: got healthy result from http://192.168.99.105:2379
cluster is healthy
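To list the registered members explicitly, etcdctl also provides a separate command (output omitted here):

etcdctl member list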
Etcd discovery
Of course, this static configuration assumes you already know the details of every node in advance.
In practice the individual members may not be predictable, so we need to let etcd find them by itself (discovery).
etcd provides a public discovery service, discovery.etcd.io, which we use to generate a discovery token; we then create the corresponding directory in the etcd on Genesis:
[root@localhost etcd]# curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/6321c0706046c91f2b2598206ffa3272
[root@localhost etcd]# etcdctl set /discovery/6321c0706046c91f2b2598206ffa3272/_config/size 3
Modify the configuration on Exodus, replacing the earlier ETCD_INITIAL_CLUSTER settings with ETCD_DISCOVERY:
ETCD_NAME=exodus
ETCD_DATA_DIR="/var/lib/etcd/exodus"
ETCD_LISTEN_PEER_URLS="http://192.168.99.105:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.105:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.105:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.105:2379"
ETCD_DISCOVERY=http://192.168.99.103:2379/v2/keys/discovery/98a976dac265a218f1a1959eb8dde57f
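Then restart etcd on Exodus so it joins via discovery. A sketch (note that if the member was already initialized with the static configuration, etcd will keep using its existing data directory, so you may need to clear it first):

systemctl stop etcd
rm -rf /var/lib/etcd/exodus      # only if you really want to discard the old member state
systemctl start etcd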
If the following error keeps appearing after startup (see: Raft election):
rafthttp: the clock difference against peer ?????? is too high [??????s > 1s]
The simple workaround is to use NTP:
[root@localhost etcd]# yum install ntp -y
[root@localhost etcd]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@localhost etcd]# systemctl start ntpd
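To confirm the clocks are actually syncing, a quick check (a peer prefixed with * is the currently selected time source):

ntpq -p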
Flannel
Write the network configuration for flannel under its etcd path:
etcdctl set /coreos.com/network/config '{"Network": "10.1.0.0/16"}'
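The JSON accepts more fields than just Network; a sketch with an explicit subnet length and backend type (these values are illustrative, not what was used above):

etcdctl set /coreos.com/network/config '{"Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'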
If the following error occurs when starting with systemctl start flanneld:
network.go:53] Failed to retrieve network config: 100: Key not found (/coreos.net) [9]
Check /etc/sysconfig/flanneld: FLANNEL_ETCD_PREFIX is most likely set to /atomic.io/network; change it to /coreos.com/network.
Alternatively, the prefix can be specified with the -etcd-prefix flag.
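A sketch of the adjusted /etc/sysconfig/flanneld (apart from FLANNEL_ETCD_PREFIX, the variable names here follow the sample file shipped with the flannel package, so double-check them against your own copy):

# etcd endpoint(s) flanneld should talk to
FLANNEL_ETCD_ENDPOINTS="http://192.168.99.103:2379"
# must match the key written with etcdctl above
FLANNEL_ETCD_PREFIX="/coreos.com/network"
# extra flags, e.g. -iface=enp0s3, if needed
#FLANNEL_OPTIONS=""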
After flanneld starts successfully, view the subnets:
[root@localhost etcd]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.90.0-24
/coreos.com/network/subnets/10.1.30.0-24
/coreos.com/network/subnets/10.1.18.0-24
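Each subnet key records which host holds that lease; to inspect one of them (the exact shape of the value depends on the backend, so treat the output as illustrative):

etcdctl get /coreos.com/network/subnets/10.1.90.0-24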
After a successful start, flannel also generates /run/flannel/docker with the following content:
DOCKER_OPT_BIP="--bip=10.1.30.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS="--bip=10.1.30.1/24 --ip-masq=true --mtu=1450"
Then start Docker like this:
[root@localhost etcd]# source /run/flannel/docker
[root@localhost etcd]# docker daemon ${DOCKER_NETWORK_OPTIONS} >> /dev/null 2>&1 &
So where did /run/flannel/docker come from?
See flanneld's two startup parameters, -subnet-dir and -subnet-file.
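If you would rather keep Docker under systemd instead of launching the daemon by hand, one option (a sketch, not part of the original setup; the binary path may differ on your system) is a drop-in that reads the flannel-generated file, e.g. /etc/systemd/system/docker.service.d/flannel.conf:

[Service]
# pick up the --bip/--ip-masq/--mtu options written by flannel
EnvironmentFile=/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

followed by systemctl daemon-reload and systemctl restart docker.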
Enter a container on Genesis to see the effect:
[root@localhost etcd]# docker run -it busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:01:5A:02
          inet addr:10.1.90.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe01:5a02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:508 (508.0 B)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Do the same on Exodus, then ping 10.1.90.2 from Exodus; it gets through.
Finally, ping from inside the containers on each side to check cross-host connectivity.
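For example, from inside the busybox container on Exodus (10.1.90.2 is the Genesis container's address shown above; your own addresses will differ):

/ # ping -c 3 10.1.90.2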