How to build a Kubernetes (k8s) Environment



The official k8s releases change quickly, so many articles on the Internet are already outdated, many tools and interfaces have changed, and the official website is not easy to follow. This article only records the process of setting up a k8s environment and does not explain the many k8s concepts. We therefore recommend first getting familiar with the various concepts, then building the environment, and then using the working environment to deepen your understanding of those concepts; that is a good way to learn.

Download k8s

According to some articles on the Internet, k8s now ships packaged builds and even a yum source, but no real installation is necessary: just decompress the release package and it is ready to use.

Address: https://github.com/kubernetes/kubernetes

You can download the source package and compile it yourself, but that requires Go, and the source cannot be compiled from behind the Great Firewall because the required images are blocked. So it is easier to download a release directly. Address: https://github.com/kubernetes/kubernetes/releases

The author uses release v1.2.0-alpha.6, which is already 496 MB, while the earlier v1.1.4 release was only 182 MB, which shows how fast things are moving. I have also used version 1.0.1 before, and some interfaces and parameters have changed since then; for example, the old kubectl expose parameter public-ip has been replaced by external-ip. In practice, always follow the documentation for your own version.
 
Environment Description:

Two machines, 167 and 168 (referred to by the last octet of their IP addresses). Both run CentOS 6.5.

167 runs etcd, flannel, kube-apiserver, kube-controller-manager, and kube-scheduler, and also acts as a minion, so it additionally runs kube-proxy and kubelet.

168 only needs to run etcd, flannel, kube-proxy, and kubelet; etcd and flannel are what connect the networks of the two machines.

K8s is built on top of Docker, so Docker is required on both machines.
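For completeness, a minimal sketch of installing and checking Docker on CentOS 6 (this assumes the EPEL repository is enabled; docker-io is the EPEL package name on this platform):

yum install -y docker-io
service docker start
docker version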
 
Environment Construction
 
Interconnect Networks

K8s also requires etcd and Flannel for support. Download these two packages first; note that both machines need to download and run them.

wget https://github.com/coreos/etcd/releases/download/v2.2.4/etcd-v2.2.4-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz
 
Decompress each package and copy the binaries into a directory on the PATH:

cd etcd-v2.2.4-linux-amd64/
cp etcd etcdctl /usr/bin/
cd ../flannel-0.5.5/
cp flanneld mk-docker-opts.sh /usr/bin/

Start etcd:

# Run on 167
etcd -name infra0 -initial-advertise-peer-urls http://172.16.48.167:2380 -listen-peer-urls http://172.16.48.167:2380 -listen-client-urls http://172.16.48.167:2379,http://127.0.0.1:2379 -advertise-client-urls http://172.16.48.167:2379 -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add -data-dir /usr/local/kubernete_test/flanneldata > /usr/local/kubernete_test/logs/etcd.log 2>&1 &

# Run on 168
etcd -name infra1 -initial-advertise-peer-urls http://203.130.48.168:2380 -listen-peer-urls http://203.130.48.168:2380 -listen-client-urls http://203.130.48.168:2379,http://127.0.0.1:2379 -advertise-client-urls http://203.130.48.168:2379 -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add -data-dir /usr/local/kubernete_test/flanneldata > /usr/local/kubernete_test/logs/etcd.log 2>&1 &
 
Note the -discovery parameter. It is a URL obtained from https://discovery.etcd.io/new?size=2, where size is the number of members (2 here); both machines must use the same URL. Opening that URL returns a JSON string describing the cluster. You can also run such a discovery server yourself.
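For example, to generate a fresh discovery URL for your own two-node cluster (the returned token will differ from the one used above):

curl https://discovery.etcd.io/new?size=2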
Once both daemons are up, we can run the following on either machine:

etcdctl ls
etcdctl cluster-health
 
to check whether startup succeeded. If there are errors, check the log file:

tail -n 1000 -f /usr/local/kubernete_test/logs/etcd.log
 
Next, write the flannel network configuration into etcd:

etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'

Then start flanneld on each machine:

flanneld > /usr/local/kubernete_test/logs/flanneld.log 2>&1 &

Once flanneld has registered on both machines, you can inspect the subnets it allocated:

[root@w ~]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.4.0-24
/coreos.com/network/subnets/172.17.13.0-24
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.4.0-24
{"PublicIP":"203.130.48.168"}
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.13.0-24
{"PublicIP":"203.130.48.167"}

So the subnet allocated to 167 is 172.17.13.0/24 and the one allocated to 168 is 172.17.4.0/24; the IP addresses of the Docker containers created later will fall within these two segments respectively.
 
Then run the following commands on each machine:

mk-docker-opts.sh -i
source /run/flannel/subnet.env
rm /var/run/docker.pid
ifconfig docker0 ${FLANNEL_SUBNET}
 
Restart docker

service docker restart
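To sanity-check that Docker picked up the flannel subnet, inspect the flannel environment file and the bridge address (FLANNEL_SUBNET and so on are the variables flannel typically emits; your values will differ):

cat /run/flannel/subnet.env    # shows FLANNEL_SUBNET, FLANNEL_MTU, ...
ifconfig docker0               # the inet addr should fall inside FLANNEL_SUBNET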
 
In this way, the container networks of the two machines are connected; we will verify this below.
Install and start k8s

wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0-alpha.6/kubernetes.tar.gz
 
Then decompress:

tar zxvf kubernetes.tar.gz
cd kubernetes/server
tar zxvf kubernetes-server-linux-amd64.tar.gz   # this package contains the server binaries we need
cd kubernetes/server/bin/

Copy the commands into a directory on the PATH; here I copied only kubectl:

cp kubectl /usr/bin/

Then start the master components:

./kube-apiserver --address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range='172.16.48.167/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --v=0 --logtostderr=false --etcd_servers=http://127.0.0.1:2379 --allow_privileged=false > /usr/local/kubernete_test/logs/kube-apiserver.log 2>&1 &

./kube-controller-manager --v=0 --logtostderr=false --log_dir=/usr/local/kubernete_test/logs/kube --master=172.16.48.167:8080 > /usr/local/kubernete_test/logs/kube-controller-manager.log 2>&1 &

./kube-scheduler --master='172.16.48.167:8080' --v=0 --log_dir=/usr/local/kubernete_test/logs/kube > /usr/local/kubernete_test/logs/kube-scheduler.log 2>&1 &
 
With that, the master is up:

[root@w ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
 
We can see that all components are healthy and running.
Now we can start the minion daemons on the two machines (remember that 167 also acts as a minion):

# On 167
./kube-proxy --logtostderr=false --v=0 --master=http://172.16.48.167:8080 > /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &

./kubelet --logtostderr=false --v=0 --allow-privileged=false --log_dir=/usr/local/kubernete_test/logs/kube --address=0.0.0.0 --port=10250 --hostname_override=172.16.48.167 --api_servers=http://172.16.48.167:8080 > /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &

# On 168
./kube-proxy --logtostderr=false --v=0 --master=http://172.16.48.167:8080 > /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &

./kubelet --logtostderr=false --v=0 --allow-privileged=false --log_dir=/usr/local/kubernete_test/logs/kube --address=0.0.0.0 --port=10250 --hostname_override=172.16.48.168 --api_servers=http://172.16.48.167:8080 > /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &
 
Confirm that they registered successfully:

[root@w ~]# kubectl get nodes
NAME            LABELS                                 STATUS    AGE
172.16.48.167   kubernetes.io/hostname=172.16.48.167   Ready     1d
172.16.48.168   kubernetes.io/hostname=172.16.48.168   Ready     18h
 
Both minions are Ready.
Submit commands

K8s supports two ways of submitting work: command parameters and configuration files; both JSON and YAML are supported for configuration files. The following uses command parameters (a config-file equivalent is sketched after the next example).
 
Create rc and pod

kubectl run nginx --image=nginx --port=80 --replicas=5
 
This creates one rc and five pods.
Run the following command to view them:

kubectl get rc,pods
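For reference, here is a minimal sketch of the config-file equivalent of the kubectl run command above (the filename nginx-rc.yaml and the label app=nginx are choices of mine; the fields follow the v1 ReplicationController schema):

cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx-rc.yaml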
 
If we manually delete one of the created pods, k8s automatically starts a replacement, always keeping the pod count at 5.
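You can watch the self-healing directly (the pod name below is hypothetical; substitute one from your own kubectl get pods output):

kubectl get pods                 # pick a pod name, e.g. nginx-xxxxx
kubectl delete pod nginx-xxxxx
kubectl get pods                 # a replacement appears and the count returns to 5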
Cross-machine communication

Using docker ps on 167 and 168 respectively, we can see the nginx containers running on both machines. Pick a container on each machine, enter it, and check its address with ip addr: one will be in 172.17.13.0/24 and the other in 172.17.4.0/24, and each can ping the other's IP. This shows the network is connected. If the host can reach the Internet, the containers can too.
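A concrete sketch of that check (the container ID and IP addresses are illustrative):

# on 167: find a container's IP from the host
docker ps
docker inspect -f '{{.NetworkSettings.IPAddress}}' <container_id>   # e.g. 172.17.13.2
# on 168: ping that IP from the host or from inside a container there
ping 172.17.13.2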

If we start a container directly through Docker rather than through k8s, its IP will also fall within the two segments above, and its network will be interconnected with the containers started by k8s.

Of course, randomly assigned intranet IP addresses can still cause us some problems.

For example, we often do the following: start a container through Docker and assign it a fixed IP address with pipework, which can be an intranet or a public IP. In that case, can the containers started by k8s communicate with those containers?

The answer: containers started through k8s can reach both the intranet and the public IP that pipework assigned, but the pipework-configured containers cannot reach the containers started by k8s. Even so, this does not affect the usual requirement, because the containers started by k8s are web applications while databases and the like get fixed IPs through pipework; one-way access from web application to database is all we need.
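For context, a minimal sketch of pinning an address with pipework (syntax as in the jpetazzo/pipework README; the interface, container name, IP, and gateway below are placeholders):

# give container "mydb" a fixed address on the host's eth0 segment
pipework eth0 mydb 203.130.48.200/24@203.130.48.1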
 
Expose service

kubectl expose rc nginx --port=80 --container-port=9090 --external-ip=x.x.x.168
 
The --port parameter is the service port; since nginx serves on 80, it must be 80 here.
--container-port and --target-port both refer to the port on the container that traffic is forwarded to; you can specify either one or omit them.

--external-ip is the IP address exposed to the outside, generally a public IP. After executing the command, the service can be accessed from the Internet. The limitation is that this IP must belong to one of the machines running k8s; an IP that k8s cannot bind to is simply unreachable, which can be inconvenient.

View service

kubectl get svc

The output includes the CLUSTER-IP and EXTERNAL-IP columns.
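With the EXTERNAL-IP in hand, you can verify the service from outside (x.x.x.168 stands for the real public IP):

curl http://x.x.x.168/    # should return the nginx welcome page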
 
Subsequent Problems

How efficient is load balancing through k8s, and how can sessions be kept?

Since k8s is not yet very stable, it may not be well suited to production environments.

