Deploying a Kubernetes Cluster on CentOS 7

Source: Internet
Author: User
Tags: etcd, k8s

1. Environment introduction and preparation

1.1 Physical machine operating system

The physical machines run CentOS 7.3 64-bit; the details are as follows.

[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 13:06:36 UTC x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
1.2 Host information

This article prepares three machines for the Kubernetes runtime environment, as follows:

Nodes and functions       Host name     IP
Master, etcd, registry    k8s-master    10.0.251.148
Node1                     k8s-node-1    10.0.251.153
Node2                     k8s-node-2    10.0.251.155

Set the host name on each of the three machines:

Execute on Master:

[root@localhost ~]# hostnamectl --static set-hostname k8s-master

Execute on Node1:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

Execute on Node2:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

To set up /etc/hosts on all three machines, execute the following command:

echo '10.0.251.148 k8s-master
10.0.251.148 etcd
10.0.251.148 registry
10.0.251.153 k8s-node-1
10.0.251.155 k8s-node-2' >> /etc/hosts
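Before touching the real /etc/hosts, the entries above can be staged and checked with a short sketch. This is an illustrative safety check, not part of the original guide: it writes to a temporary file instead of /etc/hosts so it can be run anywhere, and verifies each hostname appears exactly once.

```shell
# Sketch: stage the hosts entries in a temp file and verify every
# hostname from this guide appears exactly once.
# Assumption: a temp file stands in for /etc/hosts for safety.
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
10.0.251.148 k8s-master
10.0.251.148 etcd
10.0.251.148 registry
10.0.251.153 k8s-node-1
10.0.251.155 k8s-node-2
EOF
for name in k8s-master etcd registry k8s-node-1 k8s-node-2; do
  grep -cw "$name" "$HOSTS"   # 1 for each name
done
```

Once the entries look right, replace the temp file with `>> /etc/hosts` as shown above.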
1.3 Shut down the firewall on all three machines
systemctl disable firewalld.service
systemctl stop firewalld.service
2. Deploy ETCD

Kubernetes depends on etcd, so etcd must be deployed first. This article installs it with yum:

[root@k8s-master ~]# yum install etcd -y

Yum installs etcd's default configuration file at /etc/etcd/etcd.conf. Edit the configuration file; the items to change are the uncommented lines below:

[root@k8s-master ~]# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL=""
#ETCD_ELECTION_TIMEOUT=""
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use a different ETCD_NAME (e.g. test), set the ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status:

[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# etcdctl set testdir/testkey0 0
0
[root@k8s-master ~]# etcdctl get testdir/testkey0
0
[root@k8s-master ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@k8s-master ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

Extension: for multi-node etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

3. Deploy the master

3.1 Install Docker
[root@k8s-master ~]# yum install docker

Configure the Docker configuration file to allow images to be pulled from the local registry.

[root@k8s-master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'

Enable the service at boot and start it:

[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start
3.2 Installing Kubernetes
[root@k8s-master ~]# yum install kubernetes

3.3 Configuring and starting Kubernetes

The following components need to run on the Kubernetes master:

Kubernetes API Server

Kubernetes Controller Manager

Kubernetes Scheduler

Edit the following configuration files; the items to change are the uncommented lines shown below:

3.3.1 /etc/kubernetes/apiserver
[root@k8s-master ~]# vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
3.3.2 /etc/kubernetes/config
[root@k8s-master ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot:

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
4. Deploy the nodes

4.1 Install Docker

See 3.1

4.2 Installing Kubernetes

See 3.2

4.3 Configuring and starting Kubernetes

The following components need to run on each Kubernetes node:

Kubelet

Kubernetes Proxy

Edit the following configuration files; the items to change are the uncommented lines shown below:

4.3.1 /etc/kubernetes/config
[root@k8s-node-1 ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
4.3.2 /etc/kubernetes/kubelet
[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the services and enable them at boot:

[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
4.4 Viewing status

On the master, view the nodes in the cluster and their status:

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s
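If you want to script a readiness check on top of this output, you can filter the STATUS column. A minimal sketch follows; the here-doc stands in for a live `kubectl get nodes` call so the sketch runs without a cluster, and the column layout is assumed to match the output shown above.

```shell
# Sketch: count nodes whose STATUS column is not "Ready".
# Assumption: the here-doc below stands in for `kubectl get nodes`.
nodes_output=$(cat <<'EOF'
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
EOF
)
# Skip the header row (NR > 1) and print rows where column 2 is not Ready.
not_ready=$(echo "$nodes_output" | awk 'NR > 1 && $2 != "Ready"' | wc -l)
echo "$not_ready"   # 0 when every node is Ready
```

Against a live cluster, the here-doc would be replaced with `kubectl get nodes`, and a non-zero count signals a node that needs attention.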

At this point, a Kubernetes cluster has been set up, but it does not yet work properly; please continue with the following steps.

5. Create an overlay network: Flannel

5.1 Install Flannel

Execute the following command on the master and on every node to install Flannel:

yum install flannel

The installed version is 0.0.5.

5.2 Configuring Flannel

Edit /etc/sysconfig/flanneld on the master and on every node; the items to change are the uncommented lines below:

[root@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
5.3 Configuring the key for flannel in ETCD

Flannel reads its configuration from etcd, which keeps the configuration consistent across multiple flannel instances, so the following key must be created in etcd. (The '/atomic.io/network/config' key must correspond to the FLANNEL_ETCD_PREFIX configuration item in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start with an error.)

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
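With this Network value, flannel carves the /16 into one per-host subnet for each machine; the default per-host subnet length of /24 is an assumption about flannel's defaults, not something read from this cluster. A quick back-of-the-envelope check of how many host subnets that allows:

```shell
# Sketch: number of /24 host subnets flannel can allocate out of 10.0.0.0/16.
NETWORK_PREFIX=16   # from the Network value written to etcd above
SUBNET_PREFIX=24    # assumed flannel default per-host subnet length
echo $(( 1 << (SUBNET_PREFIX - NETWORK_PREFIX) ))   # 256
```

256 host subnets is far more than the three machines in this guide need, so the /16 leaves plenty of headroom for growth.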
5.4 Start

After starting flannel, you need to restart Docker and the Kubernetes services, in that order.

Execute on the master:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

Execute on each node:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
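The ordering in the two blocks above can be captured in a small helper. `restart_order` is a hypothetical function name invented here, and it only prints the order rather than calling systemctl, so the sketch is safe to run on any machine:

```shell
# Hypothetical helper: print the service restart order for a given role.
# flanneld must come first so that Docker and the Kubernetes services
# pick up the new overlay network when they restart.
restart_order() {
  case "$1" in
    master) echo "flanneld docker kube-apiserver kube-controller-manager kube-scheduler" ;;
    node)   echo "flanneld docker kubelet kube-proxy" ;;
  esac
}
restart_order node   # flanneld docker kubelet kube-proxy
```

To actually apply it, you would loop over the printed names with `systemctl restart` (or `service docker restart` for Docker, as above).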
