Deploy a Kubernetes cluster on CentOS 7.0
I. Deployment environment and architecture
Role     Hostname   IP Address
Master   Master     10.0.222.2
Node     Node1      10.0.222.3
Node     Node2      10.0.222.4
Node     Node3      10.0.222.5
Node     Node4      10.0.222.6
The master node runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each node runs two components: kube-proxy and kubelet.
1. kube-apiserver: runs on the master node and accepts user requests.
2. kube-scheduler: runs on the master node and is responsible for resource scheduling, i.e. deciding which node a pod is created on.
3. kube-controller-manager: runs on the master node and includes the ReplicationManager, EndpointsController, NamespaceController, and NodeController.
4. etcd: a distributed key-value store that holds the resource object information shared by the entire cluster.
5. kubelet: runs on each node and is responsible for maintaining the pods running on that host.
6. kube-proxy: runs on each node and acts as the proxy for Services.
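With the CentOS packages installed in the steps below, each of these components corresponds to a systemd unit of the same name, so a quick sanity check of the role split (once everything is installed) might look like this:
# On the master: the four master-side units
$ systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler
# On a node: the two node-side units, plus Docker
$ systemctl status kubelet kube-proxy docker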
II. Installation Steps
Preparations
Disable Firewall
To avoid conflicts with Docker's iptables rules, we need to disable the firewall on each node:
$ systemctl stop firewalld
$ systemctl disable firewalld
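A quick check that the firewall is really off:
$ systemctl is-active firewalld    # should print "inactive"
$ systemctl is-enabled firewalld   # should print "disabled"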
Install NTP
To keep the time consistent across all servers, you also need to install NTP on each of them:
$ yum -y install ntp
$ systemctl start ntpd
$ systemctl enable ntpd
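After ntpd has been running for a minute or two, you can verify it is actually synchronizing:
$ ntpq -p    # lists upstream servers; a '*' in the first column marks the selected time source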
Deploy the Master
Install etcd and Kubernetes
$ yum -y install etcd kubernetes
Configure etcd
Modify the etcd configuration file /etc/etcd/etcd.conf:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
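After editing the file, start (or restart) etcd so that the etcdctl command in the next step can reach it, and check that it is healthy (using the etcd v2 tooling shipped with this package):
$ systemctl restart etcd
$ etcdctl cluster-health    # should report "cluster is healthy"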
Configure the network in etcd
Define the network configuration in etcd. The flannel service on each node pulls this configuration.
$ etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
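You can read the key back to verify it was stored as expected:
$ etcdctl get /coreos.com/network/config
# {"Network":"172.17.0.0/16"}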
Configure the Kubernetes API server
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.222.2:2379"
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
Note that KUBE_ADMISSION_CONTROL includes ServiceAccount by default; delete it, otherwise an error is reported when the API server starts.
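On the CentOS packages these settings typically live in /etc/kubernetes/apiserver; assuming that path, removing ServiceAccount from the default admission-control list can be done with a one-liner like this (a sketch, check the file on your system first):
$ sed -i 's/,ServiceAccount//' /etc/kubernetes/apiserver
$ grep KUBE_ADMISSION_CONTROL /etc/kubernetes/apiserver   # confirm ServiceAccount is gone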
Start the service
Next, start the following services on the Master:
$ for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
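Once the services are up, a quick way to confirm the API server and its backing components are healthy (run on the master):
$ curl http://127.0.0.1:8080/version        # the API server answers with its version info
$ kubectl get componentstatuses             # scheduler, controller-manager and etcd should be Healthy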
Deploy the Nodes
Install Kubernetes and Flannel
$ yum -y install flannel kubernetes
Configure Flannel
Modify the Flannel configuration file /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://10.0.222.2:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="--iface=ens3"
Note that the value of --iface in FLANNEL_OPTIONS must be the network interface of your own server; it may well differ from the ens3 used here.
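If you are not sure which interface name to use, list the interfaces and their addresses first:
$ ip -o -4 addr show    # pick the interface that carries the node's 10.0.222.x address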
Start Flannel
$ systemctl restart flanneld
$ systemctl enable flanneld
$ systemctl status flanneld
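If flanneld started correctly it leases a per-node subnet from etcd and records it locally; a quick check (paths as used by the packaged flannel):
$ cat /run/flannel/subnet.env    # FLANNEL_SUBNET is the subnet leased for this node
$ ip -4 addr show                # look for the flannel0 / flannel.1 interface created by flanneld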
Upload Network Configuration
Create config.json with the following content:
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 7890
  }
}
Then upload the configuration to the etcd Server:
$ curl -L http://10.0.222.2:2379/v2/keys/coreos.com/network/config -XPUT --data-urlencode value@config.json
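To confirm the configuration landed in etcd, read the key back over the same API:
$ curl -L http://10.0.222.2:2379/v2/keys/coreos.com/network/config
# the "value" field should contain the JSON from config.json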
Modify Kubernetes configurations
Modify the default Kubernetes configuration file /etc/kubernetes/config:
KUBE_MASTER="--master=http://10.0.222.2:8080"
Modify kubelet configurations
Modify the kubelet service configuration file /etc/kubernetes/kubelet:
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to minion IP address
KUBELET_HOSTNAME="--hostname_override=node1"
KUBELET_API_SERVER="--api_servers=http://10.0.222.2:8080"
KUBELET_ARGS=""
For the other nodes, you only need to change KUBELET_HOSTNAME to the hostname of that node, as shown in the sketch below.
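For instance, on node2 (hostname taken from the table above) the change might look like this:
$ sed -i 's/--hostname_override=node1/--hostname_override=node2/' /etc/kubernetes/kubelet
$ grep hostname_override /etc/kubernetes/kubelet   # confirm the new value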
Start the node services
$ for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
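After Docker restarts, its docker0 bridge should have picked up an address inside the flannel subnet (the packaged flannel normally wires this in through a systemd drop-in for Docker); a quick check, assuming everything came up cleanly:
$ ip -4 addr show docker0    # the address should fall inside FLANNEL_SUBNET from /run/flannel/subnet.env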
Create a snapshot and use it to set up the remaining nodes (modifying the corresponding hostname and KUBELET_HOSTNAME on each).
View the cluster nodes
After the deployment is complete, you can run the kubectl command to view the status of the entire cluster:
$ kubectl -s "http://10.0.222.2:8080" get nodes
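As a further smoke test you can schedule a simple workload from the master and see it land on one of the nodes (a sketch using the classic kubectl run syntax of this Kubernetes generation; the nginx image is only an example):
$ kubectl -s http://10.0.222.2:8080 run nginx --image=nginx --replicas=2
$ kubectl -s http://10.0.222.2:8080 get pods -o wide    # the NODE column shows where each pod was scheduled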