A previous blog post introduced installing Kubernetes with kubeadm, but because each component runs as a container there, that approach hides most of the configuration details. To better understand the role each component plays in Kubernetes, this post installs the cluster from binaries, with detailed instructions for configuring every component.
In version 1.10 the insecure port (default 8080) is being phased out, so this deployment uses CA certificates to authenticate communication within the cluster, which makes the configuration somewhat more involved.
Environment description
1. Two CentOS 7 hosts, with hostname resolution configured, the firewall and SELinux disabled, and system time synchronized:
10.0.0.1 node-1 Master
10.0.0.2 node-2 Node
Deployed on the Master:
- Etcd
- Kube-apiserver
- Kube-controller-manager
- Kube-scheduler
Deployed on the Node:
- Docker
- Kubelet
- Kube-proxy
2. Download the official release packages from https://github.com/kubernetes/kubernetes/; here we use the v1.10.2 binaries:
- kubernetes-server-linux-amd64.tar.gz
- kubernetes-node-linux-amd64.tar.gz
Master Deployment
Since the binary package is used, the corresponding files are copied directly to the execution directory after decompression:
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp `ls | egrep -v "\.tar|_tag"` /usr/bin/
The following is a description of the specific service configuration.
1. etcd
The etcd service is the core database of the Kubernetes cluster and must be installed and started before the other services. This demonstration deploys a single etcd node; you can of course configure a 3-node cluster instead. If you want an easier installation, etcd can also be installed directly with yum.
# wget https://github.com/coreos/etcd/releases/download/v3.2.20/etcd-v3.2.20-linux-amd64.tar.gz
# tar xf etcd-v3.2.20-linux-amd64.tar.gz
# cd etcd-v3.2.20-linux-amd64
# cp etcd etcdctl /usr/bin/
# mkdir /var/lib/etcd
# mkdir /etc/etcd
Edit the systemd unit file:
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
Start the service:
systemctl daemon-reload
systemctl start etcd
systemctl status etcd.service
Check the service status:
[root@node-1 ~]# netstat -lntp | grep etcd
tcp        0      0 127.0.0.1:2379      0.0.0.0:*      LISTEN      18794/etcd
tcp        0      0 127.0.0.1:2380      0.0.0.0:*      LISTEN      18794/etcd
[root@node-1 ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy
Note: etcd listens on two ports; 2379 serves client requests and 2380 is used for peer communication between cluster members. If you are configuring an etcd cluster, edit the configuration file to set the listen IPs and ports.
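If you do build a 3-node cluster, the EnvironmentFile referenced by the unit above (/etc/etcd/etcd.conf) could look roughly like this sketch for the first member (the etcd-2/etcd-3 names and the 10.0.0.x addresses are illustrative, not part of this post's two-host setup):

```shell
# /etc/etcd/etcd.conf -- sketch for member 1 of a hypothetical 3-node cluster
ETCD_NAME=etcd-1
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_PEER_URLS=http://10.0.0.1:2380
ETCD_LISTEN_CLIENT_URLS=http://10.0.0.1:2379,http://127.0.0.1:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.1:2380
ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:2379
ETCD_INITIAL_CLUSTER=etcd-1=http://10.0.0.1:2380,etcd-2=http://10.0.0.2:2380,etcd-3=http://10.0.0.3:2380
ETCD_INITIAL_CLUSTER_STATE=new
```

etcd reads these `ETCD_*` environment variables directly, so no extra command-line flags are needed in the unit's ExecStart.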
2. kube-apiserver
1. Edit the Systemd startup file:
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configuration parameter file (need to create configuration directory first):
# cat /etc/kubernetes/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --bind-address=0.0.0.0 --secure-port=6443 --service-cluster-ip-range=10.222.0.0/16 --service-node-port-range=1-65535 --client-ca-file=/etc/kubernetes/ssl/ca.crt --tls-private-key-file=/etc/kubernetes/ssl/server.key --tls-cert-file=/etc/kubernetes/ssl/server.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
- --service-cluster-ip-range is the virtual IP range used for Services. It can be chosen freely, but must not overlap the host network segment.
- --bind-address is the address the apiserver listens on; the corresponding secure port is 6443, served over HTTPS.
- --client-ca-file points to the CA certificate used to authenticate clients. The path is defined here first; the certificate files themselves are created later and placed in this location.
3. Create the log and certificate directories (and the configuration directory, if it does not exist yet):
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
mkdir /etc/kubernetes/ssl
3. kube-controller-manager
1. Configure the Systemd startup file:
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configure the startup parameters file:
# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=https://10.0.0.1:6443 --service-account-private-key-file=/etc/kubernetes/ssl/server.key --root-ca-file=/etc/kubernetes/ssl/ca.crt --kubeconfig=/etc/kubernetes/kubeconfig"
4. kube-scheduler
1. Configure the Systemd startup file:
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Configuration parameter file:
# cat /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig"
5. Create a kubeconfig file
# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/cs_client.crt
    client-key: /etc/kubernetes/ssl/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context
6. Create a CA Certificate
1. Generate the CA certificate and the private key for kube-apiserver:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out ca.key 2048
# openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt    # CN is the Master's IP address
# openssl genrsa -out server.key 2048
2. Create master_ssl.cnf file:
# cat master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s_master
IP.1 = 10.222.0.1    # ClusterIP of the kubernetes Service
IP.2 = 10.0.0.1      # Master IP address
3. Based on the file above, create the server.csr and server.crt files by running the following commands:
# openssl req -new -key server.key -subj "/CN=node-1" -config master_ssl.cnf -out server.csr    # CN is the hostname
# openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
Tip: after executing the commands above, six files will have been generated: ca.crt, ca.key, ca.srl, server.crt, server.csr, server.key.
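It is worth confirming that the SAN extension actually made it into the signed certificate (a common failure mode when -extensions/-extfile is forgotten). The following self-contained sketch repeats the signing steps with a throwaway CA in a temporary directory and then runs the two checks; run the same two openssl checks against your real files in /etc/kubernetes/ssl/:

```shell
# Self-contained sketch: throwaway CA + SAN cert in a temp dir, then verify.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Minimal version of master_ssl.cnf (one DNS and one IP entry for brevity)
cat > master_ssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
IP.1 = 10.222.0.1
EOF

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=node-1" -config master_ssl.cnf -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt

# Check 1: the certificate must chain back to the CA
openssl verify -CAfile ca.crt server.crt    # prints: server.crt: OK
# Check 2: the SAN entries must be present in the signed certificate
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the second check prints nothing, the apiserver certificate was signed without the v3_req extensions and clients will reject it for the in-cluster names.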
4. Generate the kube-controller-manager client certificates:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out cs_client.key 2048
# openssl req -new -key cs_client.key -subj "/CN=node-1" -out cs_client.csr    # CN is the hostname
# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000
5. Make sure the /etc/kubernetes/ssl/ directory now contains the following files:
[root@node-1 ssl]# ll
total 36
-rw-r--r-- 1 root root 1090 May 25 15:34 ca.crt
-rw-r--r-- 1 root root 1675 May 25 15:33 ca.key
-rw-r--r-- 1 root root   17 May 25 15:41 ca.srl
-rw-r--r-- 1 root root  973 May 25 15:41 cs_client.crt
-rw-r--r-- 1 root root  887 May 25 15:41 cs_client.csr
-rw-r--r-- 1 root root 1675 May 25 15:40 cs_client.key
-rw-r--r-- 1 root root 1192 May 25 15:37 server.crt
-rw-r--r-- 1 root root 1123 May 25 15:36 server.csr
-rw-r--r-- 1 root root 1675 May 25 15:34 server.key
7. Start the services
1. Start Kube-apiserver:
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
Note: by default kube-apiserver listens on two ports (8080 and 6443). Port 8080 is the insecure port used for communication between components and is rarely used in newer versions; the host running kube-apiserver is generally called the Master. Port 6443 is the HTTPS port that provides authentication and authorization.
2. Start Kube-controller-manager:
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
Note: this service listens on port 10252.
3. Start Kube-scheduler
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
Note: this service listens on port 10251.
4. After starting each service, check its logs and status to confirm there are no errors:
# systemctl status <kube-service-name>
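With all three control-plane services up, their health can also be queried through the apiserver itself (a cluster-dependent check; it assumes kubectl on the Master can reach the local apiserver):

```shell
# Query control-plane component health via the apiserver.
# scheduler, controller-manager and etcd-0 should all report Healthy.
kubectl get componentstatuses
```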
Node deployment
The services deployed on the Node are simpler: just Docker, kubelet, and kube-proxy.
First configure the following files:
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
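Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; a sketch of applying the settings (requires root):

```shell
modprobe br_netfilter               # the net.bridge.* sysctl keys appear only after this module loads
sysctl -p /etc/sysctl.d/k8s.conf    # apply the settings from the file above
```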
Upload the Kubernetes node binary package to the Node, then execute the following commands:
tar xf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin
cp kubectl kubelet kube-proxy /usr/bin/
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
1. Docker
1. Install Docker 17.03:
yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y
2. Configure the startup parameters:
vim /usr/lib/systemd/system/docker.service
...
ExecStart=/usr/bin/dockerd --registry-mirror https://qxx96o44.mirror.aliyuncs.com
...
3. Start:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
2. Create the kubelet certificate
You need to configure Kubelet client certificates on each node.
Copy ca.crt and ca.key from the Master to the ssl directory on the Node, then execute the following commands to generate the kubelet_client.key, kubelet_client.csr, and kubelet_client.crt files:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out kubelet_client.key 2048
# openssl req -new -key kubelet_client.key -subj "/CN=10.0.0.2" -out kubelet_client.csr    # CN is the Node's IP address
# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
3. kubelet
1. Configure the startup file:
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://kubernetes.io/doc
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig.yaml --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
2. Configuration file:
# cat /etc/kubernetes/kubeconfig.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet_client.crt
    client-key: /etc/kubernetes/ssl/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://10.0.0.1:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
3. Start the service:
# systemctl daemon-reload
# systemctl start kubelet
# systemctl enable kubelet
4. Verify on Master:
[root@node-1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node-2    Ready     <none>    36m       v1.10.2
Note: the kubelet acts as the node agent; once it is installed and running, node information can be viewed on the Master. The kubelet configuration file is in YAML format, and the Master's address must be specified in it (the server field above). By default the kubelet listens on ports 10248, 10250, 10255, and 4194.
4. kube-proxy
1. Create Systemd Startup file:
# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy
Documentation=https://kubernetes.io/doc
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the parameter file:
# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig.yaml"
3. Start the service:
# systemctl daemon-reload
# systemctl start kube-proxy
# systemctl enable kube-proxy
Note: after the service starts, it listens on ports 10249 and 10256 by default.
Create an App
Once the deployment above is complete you can create an application, but before you start, each node must have the pause image locally; otherwise creation will fail because the Google registry (k8s.gcr.io) is unreachable.
Work around the image problem by executing the following commands on the Node(s):
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
Here's a simple application to verify that our cluster is working properly.
Create an Nginx application
1. Edit the nginx.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
2. Execution:
# kubectl create -f nginx.yaml
3. View Status:
[root@node-1 ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         2         3h
[root@node-1 ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
myweb-qtgrv   1/1       Running   0          1h
myweb-z9d2c   1/1       Running   0          1h
[root@node-2 ~]# docker ps | grep nginx
067db96d0c97   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-qtgrv_default_3213ec67-5fef-11e8-9e43-000c295f81fb_0
dd8f7458e410   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-z9d2c_default_3214600e-5fef-11e8-9e43-000c295f81fb_0
4. Create a service that maps to the local port:
# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort    # expose the Service outside the cluster
  ports:
  - port: 80
    nodePort: 30001    # externally accessible port, mapped on the host
  selector:
    app: myweb

# create the Service
# kubectl create -f nginx-service.yaml

# verify:
[root@node-1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.222.0.1     <none>        443/TCP        1d
myweb        NodePort    10.222.35.97   <none>        80:30001/TCP   1h
5. Port 30001 is now mapped on every node where kube-proxy is running, and accessing that port returns the default Nginx start page.
# netstat -lntp | grep 30001
tcp6       0      0 :::30001      :::*      LISTEN      7713/kube-proxy
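As a final end-to-end check, the Nginx page can be fetched through the NodePort (10.0.0.2 is this post's Node address; any node running kube-proxy should work):

```shell
# Fetch only the response headers; an "HTTP/1.1 200 OK" confirms the
# Service -> kube-proxy -> Pod path is working.
curl -I http://10.0.0.2:30001
```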
Kubernetes 1.10 binary cluster deployment