As we all know, Kubernetes (k8s) is used to manage Docker container clusters. I have spent quite some time recently setting up the environment, so I am writing this post to help beginners like me avoid the same detours.
1. Environment
Cluster environment
| Role | IP address | k8s version | Docker version | OS version |
| --- | --- | --- | --- | --- |
| Master | 192.63.63.1/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| Node1 | 192.63.63.10/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| Node2 | 192.63.63.20/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
Components required on the Master node
| Component | Role | Version |
| --- | --- | --- |
| etcd | Distributed key-value store holding all cluster data (not itself a k8s component) | 3.2.11 |
| kube-apiserver | Core component through which all other components communicate; exposes the HTTP REST interface | v1.9.1 |
| kube-controller-manager | Internal management center of the cluster, responsible for managing resources such as RCs, Pods, and namespaces | v1.9.1 |
| kube-scheduler | Scheduling component, responsible for assigning Pods to nodes | v1.9.1 |
Components required on each Node
| Component | Role | Version |
| --- | --- | --- |
| kubelet | Core component on each Node, responsible for carrying out the tasks issued by the master | v1.9.1 |
| kube-proxy | Network proxy; effectively a load balancer, forwarding Service requests to the back-end Pods | v1.9.1 |
2. Installation
For installation via yum, see "Kubernetes: The Definitive Guide". As of this writing (2018-02-27) the version installed by yum is 1.5.2, while the latest release is 1.9.1. The differences between the two versions are considerable; a major one is that kubelet no longer supports the --api-servers parameter in its configuration file.
Although yum does not install the latest version, we can still learn from it, such as the systemd service scripts and the various k8s configuration files.
2.0 Installing etcd
etcd is the database that stores k8s cluster data. It is not itself a k8s component, so it must be installed separately. Installation is easy and can be done with yum; the latest version currently available from yum is 3.2.11.
[root@localhost ~]# yum install etcd
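After installation, start the service and run a quick sanity check (etcdctl ships with the etcd package):

[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl cluster-health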
2.1 Download and install
Download the latest release from the official release page; only the server binaries tarball is needed, because it also contains the components required on the Node nodes.
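For example, v1.9.1 can be fetched like this (the dl.k8s.io URL pattern is an assumption based on the standard release mirror; adjust the version as needed):

[root@localhost packet]# wget https://dl.k8s.io/v1.9.1/kubernetes-server-linux-amd64.tar.gz

After downloading, unpack the archive and copy the executables into a system directory: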
[root@localhost packet]#
[root@localhost packet]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@localhost packet]# ls
kubernetes  kubernetes-server-linux-amd64.tar.gz  opensrc
[root@localhost packet]#
[root@localhost packet]# cd kubernetes/server/bin
[root@localhost bin]# cp apiextensions-apiserver cloud-controller-manager hyperkube kubeadm kube-aggregator kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler mounter /usr/bin
[root@localhost bin]#
2.2 Configuring the systemd services
The following unit files come from the kubernetes-1.5.2 RPM package and are stored in /usr/lib/systemd/system.
[root@localhost system]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@localhost system]#
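After placing the unit files, reload systemd so that it picks up the new services:

[root@localhost system]# systemctl daemon-reload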
2.3 Configuring k8s
Create the /etc/kubernetes directory and the configuration files that the systemd services above reference.
[root@localhost kubernetes]# ls
apiserver  config  controller-manager  kubelet  proxy  scheduler
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@localhost kubernetes]#
In the apiserver file, --insecure-bind-address needs to be changed to 0.0.0.0 (or to the host's external IP address) so that connections from any address are accepted.
[root@localhost kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
[root@localhost kubernetes]#
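Once kube-apiserver is running (we start it in a later step), the insecure port configured above can be sanity-checked with curl; /healthz is the standard apiserver health endpoint:

[root@localhost kubernetes]# curl http://127.0.0.1:8080/healthz
ok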
In the kubelet configuration file, the most important setting used to be the apiserver address, but since v1.8 the --api-servers flag is no longer supported and must be commented out. That raises the question: how does kubelet specify the apiserver address now? (Answered in section 3 below.)
[root@localhost kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# location of the api-server (no longer supported since v1.8, hence commented out)
##KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/kubernetes/pause"

# Add your own!
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
The remaining configuration files are essentially empty.
[root@localhost kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS=""
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--loglevel=0"
All of the above is configured on the master node, which makes this k8s a single-node cluster running both the master and node roles. There is still an open question on the node side, described below.
3. HTTP mode
As described earlier, since v1.8 kubelet no longer supports the --api-servers parameter. So how does the new kubelet communicate with the apiserver? Through the --kubeconfig parameter, which points to a configuration file (this was a big pitfall; it held me up for a long time).
There is a corresponding entry in the /etc/kubernetes/kubelet configuration file:
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
It specifies where the kubeconfig file lives. The file looks like this:
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat /var/lib/kubelet/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
[root@localhost kubernetes]#
A few notes on the fields above:
1) clusters: the list of clusters; multiple clusters are supported. For each cluster you must set server, the address of the apiserver. HTTPS is also supported here and is described in detail later.
2) contexts: the cluster contexts; multiple contexts are supported.
3) current-context: the context currently in use.
The other fields will be described together with the HTTPS method.
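Instead of writing the file by hand, it can also be generated with kubectl config (a minimal sketch using the same cluster and context names as above):

[root@localhost ~]# kubectl config set-cluster myk8s --server=http://127.0.0.1:8080 --kubeconfig=/var/lib/kubelet/kubeconfig
[root@localhost ~]# kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig=/var/lib/kubelet/kubeconfig
[root@localhost ~]# kubectl config use-context myk8s-context --kubeconfig=/var/lib/kubelet/kubeconfig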
At this point the single-node cluster deployment is complete; now we need to start each service.
[root@localhost k8s]# systemctl start docker
[root@localhost k8s]# systemctl start etcd
[root@localhost k8s]# systemctl start kube-apiserver
[root@localhost k8s]# systemctl start kube-controller-manager
[root@localhost k8s]# systemctl start kube-scheduler
[root@localhost k8s]# systemctl start kubelet
[root@localhost k8s]# systemctl start kube-proxy
[root@localhost k8s]#
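To have everything come back after a reboot, the services can also be enabled (standard systemctl usage):

[root@localhost k8s]# systemctl enable docker etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy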
Verify that the environment is normal:
[root@localhost k8s]#
[root@localhost k8s]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    16d       v1.9.1
[root@localhost k8s]#
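The control-plane components themselves can be checked as well; all of them should report Healthy:

[root@localhost k8s]# kubectl get componentstatuses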
So far everything has been configured on the master node. Next we configure Node1 to access the apiserver over HTTP.
First, copy the kubelet and kube-proxy binaries together with the related unit and configuration files to Node1, then move the executables and configuration files into the corresponding directories:
[root@node1 k8s_node]#
[root@node1 k8s_node]# ls bin-file/ config-file/
bin-file/:
kubelet  kube-proxy
config-file/:
config  kubeconfig  kubelet  kubelet.service  kube-proxy.service  proxy
[root@node1 k8s_node]#
[root@node1 k8s_node]# mv bin-file/kubelet bin-file/kube-proxy /usr/bin
[root@node1 k8s_node]# mkdir /etc/kubernetes
[root@node1 k8s_node]# mv config-file/config config-file/kubelet config-file/proxy /etc/kubernetes/
[root@node1 k8s_node]# mv config-file/kubelet.service config-file/kube-proxy.service /usr/lib/systemd/system
[root@node1 k8s_node]#
[root@node1 k8s_node]# mkdir /var/lib/kubelet
[root@node1 k8s_node]# mv config-file/kubeconfig /var/lib/kubelet/
[root@node1 k8s_node]#
Key points (both edits are shown in the sketch after this list):
1) In the /var/lib/kubelet/kubeconfig file, change the server IP address to http://192.63.63.1:8080.
2) In the /etc/kubernetes/kubelet file, change KUBELET_HOSTNAME to "--hostname-override=node1".
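Both edits can be applied with sed (a minimal sketch; the patterns assume the files exactly as shown earlier, so double-check the result):

[root@node1 ~]# sed -i 's#http://127.0.0.1:8080#http://192.63.63.1:8080#' /var/lib/kubelet/kubeconfig
[root@node1 ~]# sed -i 's#--hostname-override=127.0.0.1#--hostname-override=node1#' /etc/kubernetes/kubelet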
Start docker, kubelet, and kube-proxy on Node1, in that order.
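A minimal sketch of the start-up sequence on Node1, assuming the unit files copied above are in place:

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl start kube-proxy

Then verify on the master: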
[root@localhost ~]#
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    17d       v1.9.1
node1       Ready     <none>    5m        v1.9.1
[root@localhost ~]#
When node1 appears in the list with status Ready, the deployment succeeded.
At this point the k8s deployment over HTTP is complete; the next post will cover the HTTPS approach.