Kubernetes (hereinafter "k8s") is now widely regarded as the most advanced container cluster management tool. Since the 1.0 release its development has accelerated, and it enjoys broad support from vendors across the container ecosystem, including CoreOS and Rancher. Many public cloud providers also offer container services built on k8s, doing secondary development on top of it to support their infrastructure layer. It is fair to say that k8s is Docker's strongest competitor in the field of container cluster management and service orchestration.
Users of Red Hat / CentOS 7 can now install k8s directly with the familiar yum, but actually getting it running still means stepping into quite a few pits. This article is mainly here to help you fill those big pits!!!
k8s involves a lot of components. There are plenty of tutorials about them online, but I still recommend learning from the official site: http://kubernetes.io/docs/tutorials/
First, Environment Construction
In k8s a host has only two roles, master and node. I prepared three CentOS 7 hosts: one acts as both master and node, and the other two are nodes only.
The components installed on the master are:
- Docker
- etcd: can be understood as the k8s database; it stores all node, pod and network information
- kube-proxy: the basic component that implements the Service abstraction
- kubelet: the component that manages a k8s node; since this master also acts as a node, it is installed here as well
- kube-apiserver: provides the k8s API and is the core of the whole cluster
- kube-controller-manager: the management component that allocates resources
- kube-scheduler: the component that schedules resources
- flanneld: the network component for the whole k8s cluster
The components installed on the nodes are:
- Docker
- kube-proxy
- kubelet
- flanneld
Next comes the network setup for each node. Modify /etc/hosts and make sure the hosts table is identical on every host, as follows:
Echo "192.168.128.160 Centos-master
192.168.128.161 centos-minion-1
192.168.128.162 centos-minion-2 ">>/etc/hosts
Finally, remember to disable SELinux on every node to avoid unnecessary problems, and shut down the firewall as well so that it does not conflict with the networking inside the Docker containers.
systemctl stop firewalld
systemctl disable firewalld
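The SELinux step itself is not shown above; as a minimal sketch, one common way to turn it off on CentOS 7 (run on every node) is:

setenforce 0                                                         # set SELinux to permissive for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # keep it disabled after a reboot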
Second, Start the Installation
1. First, on every node, add a yum repository containing the k8s components, as follows:
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
Then run yum -y update.
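Optionally, before updating, you can confirm that the new repository is actually visible to yum; one quick check would be:

yum repolist enabled | grep virt7-docker-common-release   # the repo added above should show up here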
2. The next step is to install the k8s components on each node.
Master:
yum install -y docker etcd kube-proxy kubelet kube-apiserver kube-controller-manager kube-scheduler flanneld
Nodes:
yum install -y docker kube-proxy kubelet flanneld
After installation you can find the corresponding configuration files under /etc/kubernetes, and next we will start editing them. There is a pit here: we will install the k8s UI later, and the UI must use certificate authentication, but with the default /etc/kubernetes/apiserver configuration file I could not get that feature enabled properly (I tried many times without success; if you manage it, please tell me). For that reason I start the kube-apiserver and kube-controller-manager components directly from the command line instead.
kube-apiserver:
/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://centos-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount --insecure-bind-address=0.0.0.0 --client-ca-file=/root/security/ca.crt --tls-cert-file=/root/security/server.crt --tls-private-key-file=/root/security/server.key --basic-auth-file=/root/security/basic_auth.csv --secure-port=443 &>> /var/log/kubernetes/kube-apiserver.log &
kube-controller-manager:
/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://centos-master:8080 --root-ca-file=/root/security/ca.crt --service-account-private-key-file=/root/security/server.key &>> /var/log/kubernetes/kube-controller-manager.log &
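Both commands append their output to log files under /var/log/kubernetes; if that directory does not exist yet the redirection will fail, so (assuming you keep these log paths) create it first:

mkdir -p /var/log/kubernetes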
You can see that I point to a private key and certificates under /root/security here. We have not generated them yet, so do not run the two commands above just yet; let us configure the other configuration files first.
etcd:
Edit /etc/etcd/etcd.conf
ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
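Once etcd has been started in step 3 below, a quick way to sanity-check this configuration from any node would be something like:

etcdctl --endpoints=http://centos-master:2379 cluster-health   # the member should be reported as healthy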
flanneld:
Edit /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"
FLANNEL_ETCD_PREFIX="/kube-centos/network" (this is the etcd key prefix under which the flannel network configuration lives; it must be identical on every node)
FLANNEL_OPTIONS="--iface=eno16777736" (fill in the name of your actual physical NIC here, which you can find with the ip a command; mine is eno16777736)
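One step that is easy to miss: flannel reads its network layout from etcd, so the network configuration has to be written under the prefix above before flanneld is started. A minimal sketch, assuming a 172.30.0.0/16 pod network (which matches the pod IP shown later in this post), would be to run the following on the master once etcd is up:

etcdctl --endpoints=http://centos-master:2379 set /kube-centos/network/config '{ "Network": "172.30.0.0/16", "SubnetLen": 24 }'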
kubelet:
Edit /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=centos-master"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
config:
Edit /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
Depending on which of the components listed above each host runs, modify the corresponding configuration files on the master and on the nodes; the /etc/kubernetes/config file in particular is best modified on every host.
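For example, on centos-minion-1 the kubelet file differs from the master's only in the hostname override; a sketch of what /etc/kubernetes/kubelet would look like there:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=centos-minion-1"
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"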
Then we generate the keys and digital certificates mentioned above. Create the security directory on the master node:
mkdir -p /root/security
cd /root/security
The relevant certificate files are described as follows:
- ca.key: the private key of the CA that you generate yourself in order to act as your own CA
- ca.crt: the CA certificate, self-signed with that private key
- server.key: the API server's private key, used to configure HTTPS for the API server
- server.csr: the API server's certificate signing request, used to request the API server's certificate
- server.crt: the API server's certificate, issued by your improvised CA and used to configure HTTPS for the API server
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=ph_ccnp" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
echo subjectAltName=IP:10.254.0.1 > extfile.cnf
## 10.254.0.1 is the cluster IP of the kubernetes API service (the first address in the --service-cluster-ip-range configured above); it can be viewed with:
## kubectl get services --all-namespaces | grep 'default' | grep 'kubernetes' | grep '443' (look at the CLUSTER-IP column)
## Since the cluster has not started yet you cannot see this address now; you can put in a placeholder first, then start the cluster and regenerate the certificate.
openssl req -new -key server.key -subj "/CN=ph_ccnp" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile extfile.cnf -out server.crt -days 10000
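Two small follow-ups that the kube-apiserver command above relies on but that are not shown here: checking that the IP really made it into server.crt as a subjectAltName, and creating the basic_auth.csv file referenced by --basic-auth-file. The latter is a plain CSV of password, user name and user id; the admin/admin_password entry below is only a placeholder of mine, so choose your own:

openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"   # should list IP Address:10.254.0.1
echo "admin_password,admin,1" > /root/security/basic_auth.csv                    # columns: password,user,uid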
3. Start the services
Master node:
/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://centos-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount --insecure-bind-address=0.0.0.0 --client-ca-file=/root/security/ca.crt --tls-cert-file=/root/security/server.crt --tls-private-key-file=/root/security/server.key --basic-auth-file=/root/security/basic_auth.csv --secure-port=443 &>> /var/log/kubernetes/kube-apiserver.log &
/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://centos-master:8080 --root-ca-file=/root/security/ca.crt --service-account-private-key-file=/root/security/server.key &>> /var/log/kubernetes/kube-controller-manager.log &
for SERVICES in etcd kube-proxy kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
Nodes:
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
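Once the loops above have run, a quick way to confirm that flannel handed its subnet over to Docker (assuming flannel's default subnet file location) is:

cat /run/flannel/subnet.env   # FLANNEL_SUBNET should fall inside the network configured in etcd
ip a show docker0             # docker0 should sit inside that same flannel subnet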
When the services are up, run kubectl get nodes on the master:
kubectl get nodes
NAME              STATUS    AGE
centos-master     Ready     5d
centos-minion-1   Ready     12d
centos-minion-2   Ready     12d
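If a node shows NotReady, or simply as an extra sanity check, you can also ask the API server about the health of the control plane components:

kubectl get componentstatuses   # etcd, the scheduler and the controller-manager should all report Healthy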
Now the entire cluster has been installed successfully!!
4. Installing Dashboard
First we need to download the dashboard image. Thanks to my location behind the great firewall of China, the official Google kubernetes-dashboard-amd64:v1.5.0 image cannot be downloaded (what a pit!!), so we first pull a v1.4.0 build of the image from docker.io:
docker pull docker.io/sailsxu/kubernetes-dashboard-amd64:v1.4.0
Then download the official automatic deployment file:
curl https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -o kubernetes-dashboard.yaml
Three changes need to be made in it:
1. Change the image line to image: docker.io/sailsxu/kubernetes-dashboard-amd64:v1.4.0 (the image we just pulled)
2. Remove the imagePullPolicy: Always line, otherwise it will always try to pull the image from the registry
3. Uncomment # - --apiserver-host=http://my-address:port and change it to - --apiserver-host=http://centos-master:8080
Then we execute:
kubectl create -f kubernetes-dashboard.yaml
kubectl get po --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-2963774231-uzsul   1/1       Running   1          15h
kubectl --namespace=kube-system get po -o wide
NAME                                    READY     STATUS    RESTARTS   AGE   IP           NODE
kubernetes-dashboard-2963774231-uzsul   1/1       Running   1          15h   172.30.9.2   centos-minion-2
We can see that the dashboard is now running on the centos-minion-2 node, and we can view it by entering the following address in the browser:
https://192.168.128.160/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=_all
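Because this URL goes through the secure port (443) configured earlier, the API server will ask for the credentials from basic_auth.csv and the browser will warn about the self-signed certificate. A rough command-line equivalent, using the placeholder admin account suggested earlier, would be:

curl -k -u admin:admin_password https://192.168.128.160/api/v1/namespaces/kube-system/services   # -k skips verification of the self-signed certificate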
With that, the dashboard is installed.
This tutorial does not cover the basics of k8s; if I have time I will write up some of that foundational knowledge separately. As for how to use the k8s UI, I will cover that in a future tutorial!!
This article is from the "Hankou people in Hanyang" blog; please keep this source when reposting: http://phccnp.blog.51cto.com/4352504/1890494