Kubernetes Study and Practice (II): Installation of Kubernetes 1.5 and Deployment of the Cluster Environment


Kubernetes 1.5 installation and cluster environment deployment

Article reprinted from: http://www.cnblogs.com/tynia/p/k8s-cluster.html

Brief introduction:

Docker: An open-source application container engine that can create a lightweight, portable, self-sufficient container for your application.

Kubernetes: a Docker container cluster management system, open-sourced by Google, that provides resource scheduling, deployment, service discovery, and capacity scaling for containerized applications.

ETCD: A highly available key-value storage system developed and maintained by CoreOS, primarily for shared configuration and service discovery.

Flannel: an overlay network tool designed by the CoreOS team for Kubernetes, which gives every Kubernetes host a complete subnet of its own.

Objective: this article mainly describes how to build a Kubernetes (hereinafter K8S) cluster. It covers:
    1. Building the etcd cluster;
    2. Installing and configuring Docker (briefly);
    3. Installing and configuring flannel (briefly);
    4. Deploying the k8s cluster.
Preparatory work:

Host                          Running services                                Role
192.168.39.40 (CentOS 7.1)    etcd, Docker, Flannel, kube-apiserver,          k8s-master
                              kube-controller-manager, kube-scheduler
192.168.39.42 (CentOS 7.1)    etcd, Docker, Flannel, kubelet, kube-proxy      minion1
192.168.39.43 (CentOS 7.1)    etcd, Docker, Flannel, kubelet, kube-proxy      minion2


Installation:

Install the etcd, flannel, and Docker RPM packages on CentOS using yum, for example:

# yum install etcd flannel docker -y

The installation of etcd and flannel is straightforward, with no extra dependencies. Docker does have dependencies, so its dependency packages must be installed before the installation succeeds. That is not the focus of this article and is not covered in detail here.
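If yum reports unresolved dependencies for Docker, you can inspect them first; a minimal sketch, assuming the standard CentOS repositories:

# yum deplist docker          # list the packages docker depends on
# yum install -y docker       # lets yum pull in those dependencies automatically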

etcd, Docker, and flannel must be installed on all three machines.

Download the binary release package for Kubernetes 1.5.

Once the download is complete, take 192.168.39.40 as an example:

# tar zxvf kubernetes1.5.tar.gz                    # unpack the binary package
# cd kubernetes/server
# tar zxvf kubernetes-server-linux-amd64.tar.gz    # unpack the server package the master needs
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/bin    # copy the programs the master needs to /usr/bin (or set environment variables to the same effect)
# scp kubelet kube-proxy root@192.168.39.42:~      # scp the programs the minions need to each minion
# scp kubelet kube-proxy root@192.168.39.43:~

Note that kubernetes-server-linux-amd64.tar.gz may be missing from the kubernetes/server directory; in that case, download that package separately and unpack it in the current directory.
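To sanity-check that the binaries landed in place and are executable, you can ask each for its version (the exact version string depends on the build you downloaded):

# kube-apiserver --version    # on the master
# kubelet --version           # on a minion, after the scp step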

Configuration and deployment:

1. Configuration and deployment of etcd

Modify the etcd configuration (/etc/etcd/etcd.conf) on all three machines. The listing below is the configuration for 192.168.39.42 (etcd-2); on the other two machines, adjust ETCD_NAME and the advertise URLs accordingly:
# [member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/data/etcd/"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"        # disable the default setting
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"      # disable the default setting
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.39.42:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
ETCD_INITIAL_CLUSTER="etcd-1=http://192.168.39.40:2380,etcd-2=http://192.168.39.42:2380,etcd-3=http://192.168.39.43:2380"
# this means: configure an etcd cluster containing all three machines
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster1"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.39.42:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
ETCD_DEBUG="true"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
ETCD_LOG_PACKAGE_LEVELS="etcdserver=WARNING"

Modify the etcd service configuration on all three machines: /usr/lib/systemd/system/etcd.service. After modification the file contains:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Execute on each machine:

# systemctl enable etcd.service
# systemctl start etcd.service

Then pick one machine and run:

# etcdctl set /cluster "example-k8s"

Then, on another machine, run:

# etcdctl get /cluster

If "example-k8s" is returned, the ETCD cluster is successfully deployed.

2. Configuration and deployment of Docker

Modifying the Docker configuration is simple, mainly adding the local registry address. In the Docker configuration on each machine (the path is /etc/sysconfig/docker), add the following items:
ADD_REGISTRY="--add-registry docker.midea.registry.hub:10050"
DOCKER_OPTS="--insecure-registry docker.midea.registry.hub:10050"
INSECURE_REGISTRY="--insecure-registry docker.midea.registry.hub:10050"

The configuration items above give the address and service port of the local registry and are referenced in the Docker service start-up options. For how to build the registry itself, refer to the previous article.
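To confirm the registry is reachable from each machine before Docker tries to use it, a simple curl probe works; this assumes the registry from the previous article speaks the v2 protocol, so adjust the path if it is a v1 registry:

# curl http://docker.midea.registry.hub:10050/v2/    # a v2 registry answers with {}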

Modify the Docker service start-up configuration /usr/lib/systemd/system/docker.service on all three machines, changing the value of ExecStart under the [Service] section. After the modification, the service configuration reads:
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
# Note: on CentOS there is a pitfall here. When Docker starts, systemd cannot
# get hold of the Docker PID, which may keep the flannel service from starting
# later. The "exec -a docker ... | forward-journald" wrapper below (the part
# highlighted in red in the original article) lets systemd track the Docker PID.
ExecStart=/bin/sh -c 'exec -a docker /usr/bin/docker-current daemon \
          --exec-opt native.cgroupdriver=systemd \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          2>&1 | /usr/bin/forward-journald -tag docker'
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
MountFlags=slave
Restart=on-abnormal
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target

Execute on each machine separately:

# systemctl enable docker.service
# systemctl start docker

Checking Docker's running state is simple; execute:

# docker ps

and check whether it lists the metadata columns of running containers normally (no container is running at this point, so only the column headers are shown):

# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
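As a further smoke test you can run a throwaway container; this sketch assumes a busybox image is pullable from the configured registry or from Docker Hub:

# docker run --rm busybox echo hello    # prints "hello", then removes the container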

3. Configuration and deployment of flannel

Modify the flannel configuration file /etc/sysconfig/flanneld, adding the etcd service address and port, the etcd key under which flannel's subnet configuration lives, and the log path. Because etcd runs on every machine, the local machine's etcd address and port can be used; etcd automatically syncs the data to the other nodes of the cluster. After the modification, the file contains:
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.39.40:2379"

# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_KEY="/k8s/network"    # this is a directory inside etcd

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://192.168.39.40:2379"

Then execute:

# etcdctl mkdir /k8s/network

This command creates a directory in etcd. Then execute:

# etcdctl set /k8s/network/config '{"Network": "172.100.0.0/16"}'

This command declares that the container instances Docker runs should receive addresses in the 172.100.0.0/16 subnet.

Flanneld reads the config value under the /k8s/network directory, takes over address assignment for Docker, and bridges the network between Docker and the host machine. The flannel service start-up configuration needs no modification. Execute:
# systemctl enable flanneld.service
# systemctl stop docker          # stop Docker temporarily; starting flanneld pulls the Docker service back up
# systemctl start flanneld.service
# systemctl start docker

If the commands complete without errors, Docker is pulled up smoothly. Note: flanneld must be started first, and Docker after it.

Use ifconfig to view the network devices on the system. Besides the existing interfaces such as eth0 and lo, you will find that docker0 and flannel0 network devices have appeared:
# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.100.28.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::42:86ff:fe81:6892  prefixlen 64  scopeid 0x20<link>
        ether 02:42:86:81:68:92  txqueuelen 0  (Ethernet)
        ...

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.39.40  netmask 255.255.255.0  broadcast 172.20.30.255
        inet6 fe80::f816:3eff:fe43:21ac  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:43:21:ac  (Ethernet)
        RX packets 13790001  bytes 3573763877 (3.3 GiB)
        TX packets 13919888  bytes 1320674626 (1.2 GiB)
        ...

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.100.28.0  netmask 255.255.0.0  destination 172.100.28.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        TX packets 2  bytes 120 (120.0 B)
        ...

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
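You can cross-check where these addresses come from: flannel records each host's lease under the etcd key configured above, and writes the subnet it obtained to a local environment file that Docker picks up on start (the file path below is flannel's default on CentOS):

# etcdctl ls /k8s/network/subnets    # one entry per host, e.g. /k8s/network/subnets/172.100.28.0-24
# cat /run/flannel/subnet.env        # FLANNEL_SUBNET and FLANNEL_MTU consumed by Docker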

As described above, the basic environment is now deployed; the next step is to deploy and start the Kubernetes services.

4. Deployment of Kubernetes

Master: write the following script and save it as start_k8s_master.sh. (Note: the 172.20.30.x addresses in these scripts and outputs are left over from the original author's environment; for the cluster described here, substitute your own addresses, e.g. 192.168.39.40 for the master.)

#!/bin/sh

# Firstly, start etcd
systemctl restart etcd

# Secondly, start flanneld
systemctl restart flanneld

# Then, start docker
systemctl restart docker

# Start the main services of the k8s master
nohup kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --cors_allowed_origins=.* --etcd_servers=http://172.20.30.19:4001 --v=1 --logtostderr=false --log_dir=/var/log/k8s/apiserver --service-cluster-ip-range=172.100.0.0/16 &

nohup kube-controller-manager --master=172.20.30.19:8080 --enable-hostpath-provisioner=false --v=1 --logtostderr=false --log_dir=/var/log/k8s/controller-manager &

nohup kube-scheduler --master=172.20.30.19:8080 --v=1 --logtostderr=false --log_dir=/var/log/k8s/scheduler &

Then make the script executable:

# chmod u+x start_k8s_master.sh

During the installation step, kubelet and kube-proxy were already copied to the minion machines (we had quietly defined the shape of the k8s cluster there). So write the following script and save it as start_k8s_minion.sh:

#!/bin/sh

# Firstly, start etcd
systemctl restart etcd

# Secondly, start flanneld
systemctl restart flanneld

# Then, start docker
systemctl restart docker

# Start the minion services
nohup kubelet --address=0.0.0.0 --port=10250 --v=1 --log_dir=/var/log/k8s/kubelet --hostname_override=172.20.30.21 --api_servers=http://172.20.30.19:8080 --logtostderr=false &

nohup kube-proxy --master=172.20.30.19:8080 --log_dir=/var/log/k8s/proxy --v=1 --logtostderr=false &

Then make the script executable:

# chmod u+x start_k8s_minion.sh

Send the script to each host acting as a minion.
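For example, reusing scp as in the installation step:

# scp start_k8s_minion.sh root@192.168.39.42:~
# scp start_k8s_minion.sh root@192.168.39.43:~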

On the host acting as master, start k8s by executing:

# ./start_k8s_master.sh
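Before starting the minions, you can verify that the control plane is answering. kubectl defaults to http://localhost:8080, which matches the insecure port configured above; adjust the address if you run it from another host:

# curl http://localhost:8080/version    # apiserver build information
# kubectl get componentstatuses         # scheduler, controller-manager and etcd should report Healthy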

On each host acting as a minion, execute:

# ./start_k8s_minion.sh
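On each minion you can confirm that both daemons survived the nohup launch:

# ps -ef | egrep 'kubelet|kube-proxy' | grep -v grep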

On the master host, perform the following:

# kubectl get node
NAME            STATUS    AGE
192.168.39.42   Ready     5h
192.168.39.43   Ready     5h
172.20.30.21    Ready     5h

If the nodes are listed as above, the k8s cluster has been deployed successfully.
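As a final smoke test, you can schedule a pod onto the cluster; this sketch assumes an nginx image is pullable from your registry or from Docker Hub (in Kubernetes 1.5, kubectl run creates a Deployment):

# kubectl run nginx --image=nginx --port=80
# kubectl get pods -o wide    # the pod should land on a minion with a 172.100.x.x pod address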

References: http://kubernetes.io/docs/user-guide/ ; Docker Container and Container Cloud.

