Kubernetes 1.3 Installation and cluster environment deployment


Brief introduction:

Docker: An open-source application container engine that can create a lightweight, portable, self-sufficient container for your application.

Kubernetes: A Docker container cluster management system, open-sourced by Google, that provides resource scheduling, deployment, service discovery, and elastic scaling for containerized applications.

ETCD: A highly available key-value storage system developed and maintained by CoreOS, primarily for shared configuration and service discovery.

Flannel: An overlay network tool designed by the CoreOS team for Kubernetes; it gives every Kubernetes host its own complete subnet for container traffic.

Objective: This article mainly covers building a Kubernetes (hereinafter k8s) cluster. It includes:
    1. Building the etcd cluster;
    2. Docker installation and configuration (briefly);
    3. Flannel installation and configuration (briefly);
    4. k8s cluster deployment.
Preparatory work:

Host                         Services running                                          Role
172.20.30.19 (CentOS 7.1)    etcd, Docker, flannel, kube-apiserver,                    k8s master
                             kube-controller-manager, kube-scheduler
172.20.30.21 (CentOS 7.1)    etcd, Docker, flannel, kubelet, kube-proxy                minion
172.20.30.18 (CentOS 7.1)    etcd, Docker, flannel, kubelet, kube-proxy                minion
172.20.30.20 (CentOS 7.1)    etcd, Docker, flannel, kubelet, kube-proxy                minion


Installation:

Download the RPM installation packages for etcd, Docker, and flannel; for example:

Etcd

etcd-2.2.5-2.el7.0.1.x86_64.rpm

Flannel:

flannel-0.5.3-9.el7.x86_64.rpm

Docker

device-mapper-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-event-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-libs-1.02.107-5.el7_2.5.x86_64.rpm
device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm
docker-1.10.3-44.el7.centos.x86_64.rpm
docker-common-1.10.3-44.el7.centos.x86_64.rpm
docker-forward-journald-1.10.3-44.el7.centos.x86_64.rpm
docker-selinux-1.10.3-44.el7.centos.x86_64.rpm
libseccomp-2.2.1-1.el7.x86_64.rpm
lvm2-2.02.130-5.el7_2.5.x86_64.rpm
lvm2-libs-2.02.130-5.el7_2.5.x86_64.rpm
oci-register-machine-1.10.3-44.el7.centos.x86_64.rpm
oci-systemd-hook-1.10.3-44.el7.centos.x86_64.rpm
yajl-2.0.4-4.el7.x86_64.rpm

Installing etcd and flannel is straightforward, since they have no extra dependencies. Docker does have dependencies, so its dependency packages must be installed before the Docker installation can succeed. This is not the focus of this article, so it is not covered in detail.
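If all of the packages above sit in one directory, yum can resolve the installation order in a single transaction. A minimal sketch, assuming the RPMs were downloaded to the current directory with the filenames listed above:

# yum localinstall -y etcd-2.2.5-2.el7.0.1.x86_64.rpm flannel-0.5.3-9.el7.x86_64.rpm
# yum localinstall -y docker-*.rpm device-mapper-*.rpm lvm2-*.rpm libseccomp-*.rpm oci-*.rpm yajl-*.rpm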

etcd, Docker, and flannel must all be installed on all four machines.

Download the Kubernetes 1.3 binary release package.

Once the download is complete, unpack and distribute it; take 172.20.30.19 as the example:

# tar zxvf kubernetes1.3.tar.gz                    # unpack the binary package
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz    # unpack the packages the master needs
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin    # copy the programs the master needs to /usr/bin (setting PATH works just as well)
# scp kubelet kube-proxy root@172.20.30.21:~       # send the programs the minions need to each minion via scp
# scp kubelet kube-proxy root@172.20.30.18:~
# scp kubelet kube-proxy root@172.20.30.20:~
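Before wiring anything up, a quick sanity check that the copied binaries execute is cheap; every Kubernetes component supports --version:

# kube-apiserver --version       # on the master; should print the v1.3.x release
# ~/kubelet --version            # on a minion, where the binaries landed in the home directory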

Configuration and deployment:

1. etcd configuration and deployment. Modify the etcd configuration (/etc/etcd/etcd.conf) on all four machines. The example below is for etcd-2 on 172.20.30.21; on the other hosts, adjust ETCD_NAME, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS accordingly:
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"      # default commented out
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:7001"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"    # default commented out
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.20.30.21:7001"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
ETCD_INITIAL_CLUSTER="etcd-1=http://172.20.30.19:7001,etcd-2=http://172.20.30.21:7001,etcd-3=http://172.20.30.18:7001,etcd-4=http://172.20.30.20:7001"
# this configures an etcd cluster containing four machines
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.20.30.21:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""

Modify the etcd service configuration on all four machines (/usr/lib/systemd/system/etcd.service). After modification, the file contains:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Execute on each machine (run systemctl daemon-reload first so systemd picks up the edited unit):

# systemctl enable etcd.service
# systemctl start etcd.service

Then pick one of the machines and run:

# etcdctl set /cluster "example-k8s"

Then, on another machine, run:

# etcdctl get /cluster

If "example-k8s" is returned, the etcd cluster is deployed successfully.

2. Docker configuration and deployment. The Docker configuration changes are simple, mainly adding the local registry address. In the Docker configuration file on each machine (/etc/sysconfig/docker), add the following items:
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"

These items give the address and service port of the local registry; they are referenced by the Docker service startup configuration below. For how to build the registry itself, please refer to the previous article.

Modify the service startup configuration for Docker on all four machines (/usr/lib/systemd/system/docker.service), changing the value of ExecStart under [Service]. After modification, the unit contains:

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target

Note: there is a pitfall here on CentOS. When Docker starts, systemd can fail to acquire Docker's PID, which may later prevent the flannel service from starting. The exec -a docker prefix in ExecStart above must be added so that systemd can track the Docker PID.

Execute on each machine separately (again preceded by systemctl daemon-reload, since the unit file changed):

# systemctl enable docker.service
# systemctl start docker

Checking Docker's running state is simple; execute:

# docker ps

and see whether the metadata columns for running containers are listed normally (no container is running at this point, so only the column headers appear):

# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
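With Docker up, the registry settings can be exercised. Since --add-registry puts docker.midea.registry.hub:10050 at the front of the registry search list in this Red Hat Docker build, an unqualified image name should be resolved against the local registry first; busybox below is just a hypothetical image assumed to exist in that registry:

# docker pull busybox        # resolved via docker.midea.registry.hub:10050 first
# docker images              # the pulled image should now be listed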

3. Flannel configuration and deployment. Modify the flannel configuration file /etc/sysconfig/flanneld, adding the etcd service address and port, the etcd key under which flannel keeps its subnet configuration, and the log path. Because etcd runs on every machine, the local machine's etcd address and port can be used; etcd automatically syncs the data to the other nodes in the cluster. After the modification, the file contains:
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://172.20.30.21:4001"

# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_KEY="/k8s/network"    # this is a directory in etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://172.20.30.21:4001"

Then execute:

# etcdctl mkdir /k8s/network

This creates the directory in etcd. Then execute:

# etcdctl set /k8s/network/config '{ "Network": "172.100.0.0/16" }'

This declares that the container instances Docker runs are expected to get addresses within the 172.100.0.0/16 segment.
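The config value may carry more than the Network key; flannel also understands options such as SubnetLen (the size of each host's subnet) and Backend. The command below is an optional, more explicit variant of the same configuration, pinning a /24 per host and the default udp backend:

# etcdctl set /k8s/network/config '{ "Network": "172.100.0.0/16", "SubnetLen": 24, "Backend": { "Type": "udp" } }'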

Flanneld reads the config value from the /k8s/network directory, then takes over address assignment for Docker and bridges the traffic between containers and the host network. The flannel service startup configuration needs no changes. Execute:

# systemctl enable flanneld.service
# systemctl stop docker       # stop Docker for now; starting flanneld pulls Docker back up automatically
# systemctl start flanneld.service

If the commands complete without error, Docker is pulled back up smoothly.
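Flanneld records the subnet lease it acquired both in etcd and in a local environment file used to configure Docker's bridge; checking either confirms the hand-over happened. The values below are examples from one host and will differ per machine (the exact fields can vary slightly with the flannel version):

# etcdctl ls /k8s/network/subnets       # one lease entry per host
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=172.100.28.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false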

Using ifconfig to view the network devices on the system, you will find that, besides the usual interfaces such as eth0 and lo, docker0 and flannel0 devices have appeared:
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.100.28.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:86ff:fe81:6892  prefixlen 64  scopeid 0x20<link>
        ether 02:42:86:81:68:92  txqueuelen 0  (Ethernet)
        RX packets ...  bytes ... (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets ...  bytes 1994 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.20.30.21  netmask 255.255.255.0  broadcast 172.20.30.255
        inet6 fe80::f816:3eff:fe43:21ac  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:43:21:ac  txqueuelen 1000  (Ethernet)
        RX packets 13790001  bytes 3573763877 (3.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13919888  bytes 1320674626 (1.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.100.28.0  netmask 255.255.0.0  destination 172.100.28.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 120 (120.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
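Each host now owns a distinct /24 inside 172.100.0.0/16, so a ping from one host to another host's docker0 address has to travel through the flannel0 tunnel. A quick cross-host test, run from any other node (172.100.28.1 is the docker0 address of 172.20.30.21, as shown above):

# ping -c 3 172.100.28.1     # replies confirm the overlay forwards between hosts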

As described above, the basic environment is now deployed; the next step is to deploy and start the Kubernetes services.

4. Kubernetes deployment. Master: write the following script and save it as start_k8s_master.sh:
#!/bin/sh

# Firstly, start etcd
systemctl restart etcd

# Secondly, start flanneld
systemctl restart flanneld

# Then, start docker
systemctl restart docker

# Start the main servers of the k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://172.20.30.19:4001 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver --service-cluster-ip-range=172.100.0.0/16 &

nohup kube-controller-manager --master=172.20.30.19:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &

nohup kube-scheduler --master=172.20.30.19:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &

Then make it executable:

# chmod u+x start_k8s_master.sh

Because of the installation steps earlier, kubelet and kube-proxy have already been sent to the minion machines (we quietly laid out the k8s cluster back then).

So, write the following script and save it as start_k8s_minion.sh (this example is for 172.20.30.21; change --hostname_override accordingly on each minion):
#!/bin/sh

# Firstly, start etcd
systemctl restart etcd

# Secondly, start flanneld
systemctl restart flanneld

# Then, start docker
systemctl restart docker

# Start the minion
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=172.20.30.21 --api_servers=http://172.20.30.19:8080 --logtostderr=false &

nohup kube-proxy --master=172.20.30.19:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &

Then make it executable:

# chmod u+x start_k8s_minion.sh

Send the script to every host acting as a minion.

To run k8s: on the master host, execute:

# ./start_k8s_master.sh
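Before starting the minions, it is worth confirming the apiserver actually came up. Its insecure port answers plain HTTP; /healthz returns ok and /version reports the running release:

# curl http://172.20.30.19:8080/healthz
ok
# curl http://172.20.30.19:8080/version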

On each host acting as a minion, execute:

# ./start_k8s_minion.sh

On the master host, check the nodes:

# kubectl get node
NAME           STATUS    AGE
172.20.30.18   Ready     5h
172.20.30.20   Ready     5h
172.20.30.21   Ready     5h

If the above information is listed, the k8s cluster was deployed successfully.
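As a final smoke test, schedule something. A minimal sketch using kubectl run; nginx is assumed here to be an image the minions can pull (for example, from the local registry configured earlier):

# kubectl run nginx --image=nginx --replicas=2     # creates a deployment with two pods
# kubectl get pods -o wide                         # pods should reach Running, spread across the minions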

