Detailed Guide for Manual Installation and Deployment of Kubernetes on Ubuntu

Source: Internet
Author: User
Tags: etcd, k8s


Background

Two Ubuntu 16.04 servers: 192.168.56.160 and 192.168.56.161.

Kubernetes version: 1.5.5

Docker version: 1.12.6

Etcd version: 2.2.1

Flannel version: 0.5.6

The 192.168.56.160 server is both the Kubernetes master and a node; 192.168.56.161 is a node only.

The master node must run the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd services.

Each node must run the kubelet, kube-proxy, docker, and flannel services.

Download

Download Kubernetes

Client Binary Download: https://dl.k8s.io/v1.5.5/kubernetes-client-linux-amd64.tar.gz

Server Binary Download: https://dl.k8s.io/v1.5.5/kubernetes-server-linux-amd64.tar.gz

These builds are for Linux on amd64; binaries for other platforms can be downloaded from the release page.

After unpacking, copy the kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and kubectl executables from the server and client archives' kubernetes directories into the /usr/bin/ directory.
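The copy step can be sketched as a small loop. The paths below are simulated under /tmp with placeholder files so the loop can be run anywhere; in a real install, run the same loop from the directory where the tarballs were unpacked, copying into /usr/bin instead.

```shell
# Simulated layout: SRC stands in for the unpacked kubernetes/server/bin
# directory, DEST stands in for /usr/bin.
SRC=/tmp/k8s-install-demo/kubernetes/server/bin
DEST=/tmp/k8s-install-demo/usr-bin
mkdir -p "$SRC" "$DEST"

BINARIES="kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl"
for bin in $BINARIES; do
  touch "$SRC/$bin"        # placeholder standing in for the real binary
done

# The actual install step: copy each binary and make it executable.
for bin in $BINARIES; do
  cp "$SRC/$bin" "$DEST/"
  chmod +x "$DEST/$bin"
done

echo "installed $(ls "$DEST" | wc -l) binaries"
```

In a real run, replace SRC/DEST with the unpacked tarball path and /usr/bin, and prefix the copy loop with sudo.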

Download etcd

The etcd GitHub release binaries are hosted on AWS S3, which can be slow or unreachable from some networks.

Alternatively, you can build etcd from source to obtain the etcd executables.

Copy the etcd and etcdctl executables to the /usr/bin/ directory.


Download flannel

Flannel and etcd are both CoreOS projects, so flannel's GitHub release binaries are also hosted on AWS S3. Fortunately, flannel is easy to build: clone it from GitHub and compile it directly. The flanneld executable is then generated in flannel's bin or dist directory (the location varies between versions).

$ git clone -b v0.5.6 https://github.com/coreos/flannel.git
$ cd flannel
$ ./build

The build procedure may vary between versions; for details, refer to the README.md file in the flannel directory.

Copy the flanneld executable to the /usr/bin/ directory.

Create the /usr/bin/flannel directory and copy the mk-docker-opts.sh file from the dist directory into /usr/bin/flannel/.

Kubernetes master Configuration

Etcd Configuration

Create a data directory

$ sudo mkdir -p /var/lib/etcd/

Create configuration directories and files

$ sudo mkdir -p /etc/etcd/
$ sudo vim /etc/etcd/etcd.conf

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.160:2379"

Create a systemd unit file

$ sudo vim /lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=root
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Start the service

$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd
$ sudo systemctl start etcd

Check the service status

$ sudo systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:19:35 CST; 7s ago

...

Check that port 2379 is listening.

$ netstat -apn | grep 2379
tcp6       0      0 :::2379       :::*       LISTEN      7211/etcd

Create an etcd Network

$ etcdctl set /coreos.com/network/config '{"Network": "192.168.4.0/24"}'
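Flannel reads this JSON at startup and allocates each node a smaller subnet from the pool. In this guide the nodes end up with /25 subnets (visible later in docker's --bip flag), so a quick arithmetic sketch shows how many node subnets the /24 pool can hold (the prefix values are from this guide; the computation is illustrative):

```shell
# Pool and per-node prefix lengths from this guide's setup.
POOL_PREFIX=24   # from '{"Network": "192.168.4.0/24"}'
NODE_PREFIX=25   # per-node subnet size flannel assigned here
# Each extra prefix bit doubles the number of per-node subnets.
count=$(( 1 << (NODE_PREFIX - POOL_PREFIX) ))
echo "per-node subnets available: $count"
```

With only two subnets available, this pool is just big enough for the two-node cluster described here; a larger cluster would need a wider Network range.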

If you deploy an etcd cluster, perform the preceding steps on every etcd server. This guide uses a standalone instance, so the etcd setup is now complete.

General configurations of Kubernetes

Create a Kubernetes configuration directory

$ sudo mkdir /etc/kubernetes

Common Kubernetes configuration file

The /etc/kubernetes/config file stores settings shared by all Kubernetes components.

$ sudo vim /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.56.160:8080"

Configure the kube-apiserver Service

Perform the following steps on the Kubernetes master host.

Create a kube-apiserver configuration file

The dedicated configuration file for kube-apiserver is /etc/kubernetes/apiserver.

$ sudo vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.160:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.4.0/24"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Create a systemd unit file

$ sudo vim /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS \
    $KUBE_API_PORT \
    $KUBELET_PORT \
    $KUBE_ALLOW_PRIV \
    $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL \
    $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager Service

Create a kube-controller-manager Configuration File

The dedicated configuration file for kube-controller-manager is /etc/kubernetes/controller-manager.

$ sudo vim /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS=""

Create a systemd unit file

$ sudo vim /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler Service

Create a kube-scheduler configuration file

The dedicated configuration file for kube-scheduler is /etc/kubernetes/scheduler.

$ sudo vim /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS=""

Create a systemd unit file

$ sudo vim /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
    $KUBE_LOGTOSTDERR \
    $KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the service of the Kubernetes master node

$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
$ sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Kubernetes node configuration

The /etc/kubernetes/config file must also be created on each Kubernetes node, with the same content as on the master.

Flannel Configuration

Create configuration directories and files

$ sudo vim /etc/default/flanneld.conf

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.160:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

The FLANNEL_ETCD_PREFIX option must match the etcd key under which the network configuration was stored earlier.

Create a systemd unit file

$ sudo vim /lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=etcd.service
Before=docker.service

[Service]
User=root
EnvironmentFile=/etc/default/flanneld.conf
ExecStart=/usr/bin/flanneld \
    -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
    -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
    $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Start the service

$ sudo systemctl daemon-reload
$ sudo systemctl enable flanneld
$ sudo systemctl start flanneld

Check whether the service is started

$ sudo systemctl status flanneld
● flanneld.service - Flanneld
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-27 11:59:00 CST; 6min ago

...

Docker Configuration

Docker Installation

Use apt to install docker.

$ sudo apt -y install docker.io

Apply flannel to the docker Network

Modify the systemd configuration file of docker.

$ sudo mkdir /lib/systemd/system/docker.service.d
$ sudo vim /lib/systemd/system/docker.service.d/flannel.conf

[Service]
EnvironmentFile=-/run/flannel/docker
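To see how this drop-in takes effect: flanneld's ExecStartPost runs mk-docker-opts.sh, which writes a DOCKER_OPTS= line into /run/flannel/docker, and the EnvironmentFile directive hands that variable to dockerd. The sketch below mocks the file under /tmp with example values from this guide, so the mechanism can be demonstrated without a live flannel:

```shell
# Mock of /run/flannel/docker as written by mk-docker-opts.sh
# (values are examples from this guide's subnet, not read from a live system).
mkdir -p /tmp/flannel-demo
cat > /tmp/flannel-demo/docker <<'EOF'
DOCKER_OPTS=" --bip=192.168.4.129/25 --ip-masq=true --mtu=1472"
EOF

# Sourcing the file (as systemd's EnvironmentFile effectively does)
# exposes DOCKER_OPTS to the docker service:
. /tmp/flannel-demo/docker
echo "dockerd would start with:$DOCKER_OPTS"
```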

Restart the docker service.

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

Check whether docker has a flannel network.

$ sudo ps -ef | grep docker
root     11285     1  ?        00:00:01 /usr/bin/dockerd -H fd:// --bip=192.168.4.129/25 --ip-masq=true --mtu=1472

...

Configure the kubelet Service

Create a kubelet data directory

$ sudo mkdir /var/lib/kubelet

Create a kubelet configuration file

The dedicated kubelet configuration file is /etc/kubernetes/kubelet.

$ sudo vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.56.161"
KUBELET_API_SERVER="--api-servers=http://192.168.56.160:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"

On each node, set --hostname-override to that node's own IP address (the value above is for the 192.168.56.161 node).

Create a systemd unit file

$ sudo vim /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBELET_API_SERVER \
    $KUBELET_ADDRESS \
    $KUBELET_PORT \
    $KUBELET_HOSTNAME \
    $KUBE_ALLOW_PRIV \
    $KUBELET_POD_INFRA_CONTAINER \
    $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the kubelet Service

$ sudo systemctl daemon-reload
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet

Configure the kube-proxy service

Create a kube-proxy configuration file

The dedicated configuration file for kube-proxy is /etc/kubernetes/proxy.

$ sudo vim /etc/kubernetes/proxy

# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

Create a systemd unit file

$ sudo vim /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start kube-proxy service

$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-proxy
$ sudo systemctl start kube-proxy

Query node status

Run the kubectl get node command to view node status. A node whose status is Ready has successfully registered with the master. Otherwise, troubleshoot on the node itself; the journalctl -u kubelet.service command shows the kubelet service logs.

$ kubectl get node
NAME             STATUS    AGE
192.168.56.160   Ready     d
192.168.56.161   Ready     d
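A small hypothetical helper for scripted checks: count the nodes reporting Ready. The sample text below mirrors the output above (ages illustrative); on a live cluster, pipe `kubectl get node` into the same awk instead:

```shell
# Sample `kubectl get node` output (hypothetical stand-in for a live call).
sample='NAME             STATUS    AGE
192.168.56.160   Ready     1d
192.168.56.161   Ready     1d'

# Skip the header row, keep rows whose STATUS column is Ready, count them.
ready=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "nodes Ready: $ready"
```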

Kubernetes Test

Test whether Kubernetes is successfully installed.

Write a YAML file

Create rc_nginx.yaml on the Kubernetes master to define an nginx ReplicationController.

$ vim rc_nginx.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

Create a pod

Run the kubectl create command to create the ReplicationController. It is configured with two replicas, and our environment has two Kubernetes nodes, so one Pod should run on each node.

Note: This process may take a while, since it pulls the nginx image from the Internet, along with the pod-infrastructure image that every pod depends on.

$ kubectl create -f rc_nginx.yaml

Query status

Run the kubectl get pod and kubectl get rc commands to view the pod and rc status. A pod may show ContainerCreating at first; once the required images have been downloaded, the containers are created and the pod status changes to Running.

$ kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     2         2         2         m

$ kubectl get pod -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP              NODE
nginx-j5x4   1/1       Running   0          m         192.168.4.130   192.168.56.160
nginx-bd28   1/1       Running   0          m         192.168.4.130   192.168.56.161
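A similar hypothetical check confirms every pod has reached Running. The sample mirrors the `kubectl get pod -o wide` output above (ages illustrative); on a live cluster, pipe the real kubectl output into the same awk:

```shell
# Sample `kubectl get pod` output (hypothetical stand-in for a live call).
sample='NAME         READY     STATUS    RESTARTS   AGE
nginx-j5x4   1/1       Running   0          2m
nginx-bd28   1/1       Running   0          2m'

# Skip the header row, count rows whose STATUS column is not Running.
stuck=$(printf '%s\n' "$sample" | awk 'NR > 1 && $3 != "Running"' | wc -l)
if [ "$stuck" -eq 0 ]; then echo "all pods Running"; fi
```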

Success!
