Kubernetes Cluster deployment

Source: Internet
Author: User
Tags: docker hub, etcd, cadvisor

Given the popularity of Docker, Google launched Kubernetes to manage Docker clusters, and many people want to try it. Kubernetes is backed by a large number of companies, and its deployment tooling integrates IaaS platforms such as GCE, CoreOS, and AWS, where deployment is very convenient. Since much of the material online is based on older versions, this article briefly describes how to deploy the latest Kubernetes and the components it depends on. Following this article will get your Kubernetes cluster running, if roughly; making it elegant will take more work. Deployment consists of three main steps:

1. Prepare the machines and set up the network

To deploy a Kubernetes cluster you need at least 3 machines: one as master and two as minions. With 4 machines, one can be dedicated to the etcd service; with more, you can deploy an etcd cluster and additional minions. This article uses 4 machines as an example. They can be physical machines or KVM virtual machines. Machine list:

master: 10.180.64.6
etcd: 10.180.64.7
minion1: 10.180.64.8
minion2: 10.180.64.9

As for the network, you can use flannel or Open vSwitch; there is plenty of material about both online.

2. Deploy related components

Kubernetes installation is divided into 3 parts: the etcd cluster, the master node, and the minions.

For convenience, this example builds the Kubernetes cluster from 4 cloud hosts, allocated as follows:

IP            Role
----------    ----------
10.180.64.6   Kubernetes master
10.180.64.7   etcd node
10.180.64.8   Kubernetes minion1
10.180.64.9   Kubernetes minion2

2.1. etcd Cluster

This example uses a single cloud host as the etcd node; if you want to build an etcd cluster, refer to the etcd configuration introduction below.

root@etcd:~# curl -L https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.tar.gz -o etcd-v2.0.0-rc.1-linux-amd64.tar.gz

root@etcd:~# tar xzvf etcd-v2.0.0-rc.1-linux-amd64.tar.gz

root@etcd:~# cd etcd-v2.0.0-rc.1-linux-amd64

Copy all the executable files under the etcd directory to /bin.
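For example (assuming the tarball ships the etcd and etcdctl binaries at its top level, as the release archives do):

root@etcd:~# cp etcd etcdctl /bin/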

2.2. Master Node

Only Kubernetes itself needs to be installed on the master node. First download Kubernetes by executing the following commands:

root@master:~# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.8.0/kubernetes.tar.gz

root@master:~# tar -zxvf kubernetes.tar.gz

root@master:~# cd kubernetes/server

root@master:~# tar -zxvf kubernetes-server-linux-amd64.tar.gz

root@master:~# cd kubernetes/server/bin

Copy kube-apiserver, kube-controller-manager, kube-scheduler, kubecfg, and kubectl to /bin on the master node.
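For example (a one-line sketch, run from the server/bin directory above):

root@master:~# cp kube-apiserver kube-controller-manager kube-scheduler kubecfg kubectl /bin/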

2.3. Minion Node

The minion nodes require the installation of Kubernetes, cAdvisor, and Docker. Kubernetes was already downloaded during the master installation; copy the extracted kubelet and kube-proxy to all minions, as shown below.

Copy kubelet and kube-proxy to /bin on each minion node.

(PS: they do not have to go to /bin; adding the path of these executables to $PATH is enough.)
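For example, from the master's server/bin directory (assuming root SSH access from the master to the minions):

root@master:~# scp kubelet kube-proxy root@10.180.64.8:/bin/
root@master:~# scp kubelet kube-proxy root@10.180.64.9:/bin/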

Install cAdvisor:

root@minion1:~# wget https://github.com/google/cadvisor/releases/download/0.7.1/cadvisor

This is directly an executable file; no unpacking is needed, just copy it to /bin.
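For example (wget does not set the executable bit, so add it first):

root@minion1:~# chmod +x cadvisor
root@minion1:~# cp cadvisor /bin/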

Installing Docker:

Kubernetes on the minions calls the Docker API to create pods as worker containers, and Kubernetes' own agent processes can also run inside Docker, which makes upgrading Kubernetes easier.

On Debian 7, Docker can be installed from the Ubuntu packages; run the following commands:

root@minion1:~# echo "deb http://get.docker.io/ubuntu docker main" | sudo tee /etc/apt/sources.list.d/docker.list
root@minion1:~# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
root@minion1:~# apt-get update
root@minion1:~# apt-get install -y lxc-docker

Run docker version to check that the installation works.
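For example:

root@minion1:~# docker version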


3. Running the Kubernetes cluster

3.1. Kubernetes configuration files

The configuration files covered in this section do not necessarily match the ones on GCE or in a Kubernetes installed through yum; they are a temporary, fully manual solution. For platforms integrated under the Kubernetes project's cluster tooling, a one-command deployment driven by Kubernetes' own Salt scripts can bring up the whole cluster without manual action, so these configuration files are only for deployment on platforms that are not yet supported. All the required configuration files and startup scripts are packaged as kube-start.tar.gz.

3.1.1. etcd configuration file

The etcd configuration file is cfg-etcd:

ETCD_NAME="-name etcd-1"

The etcd node name. If the etcd cluster has only one node, this item can be commented out; the default name is default, and that name is used later.

ETCD_PEER_ADDRESS="-initial-advertise-peer-urls http://hostip:7001"

The address used for communication between etcd nodes in the cluster, usually on port 7001 or 2380. The etcd node IP here is 10.180.64.7, so this is set to http://10.180.64.7:7001.

ETCD_CLIENT_ADDRESS="-advertise-client-urls http://hostip:4001"

The address the etcd node advertises for client traffic, usually on port 4001 or 2379. Here it is set to http://10.180.64.7:4001.

ETCD_DATA_DIR="-data-dir /home/data/etcd"

The directory where etcd stores its data. The same configuration with a different data directory leads to a different etcd cluster being created.

ETCD_LISTEN_PEER_ADDRESS="-listen-peer-urls http://0.0.0.0:7001"

The address the etcd node listens on for peers; 0.0.0.0 means all interfaces. Here it is configured as http://0.0.0.0:7001.

ETCD_LISTEN_CLIENT_ADDRESS="-listen-client-urls http://0.0.0.0:4001"

The listen address for client traffic, configured as http://0.0.0.0:4001.

ETCD_CLUSTER_MEMBERS="-initial-cluster etcd-1=http://ip_etcd-1:7001 etcd-2=http://ip_etcd-2:7001"

The list of etcd cluster member addresses. This is cluster-internal communication, so specify port 7001 or 2380. There is only one node here and ETCD_NAME is not configured, so the default name default applies, and this is set to default=http://10.180.64.7:7001.

ETCD_CLUSTER_STATE="-initial-cluster-state new"

The etcd cluster state: new means a new cluster, existing means the cluster already exists.

ETCD_ARGS=""

Additional parameters you may add yourself; all etcd parameters can be viewed with etcd -h.
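Putting the values above together, the cfg-etcd for the single-node deployment in this article would look roughly like this (a sketch; the exact file shipped in kube-start.tar.gz may differ):

# ETCD_NAME is commented out, so the default name "default" is used
ETCD_PEER_ADDRESS="-initial-advertise-peer-urls http://10.180.64.7:7001"
ETCD_CLIENT_ADDRESS="-advertise-client-urls http://10.180.64.7:4001"
ETCD_DATA_DIR="-data-dir /home/data/etcd"
ETCD_LISTEN_PEER_ADDRESS="-listen-peer-urls http://0.0.0.0:7001"
ETCD_LISTEN_CLIENT_ADDRESS="-listen-client-urls http://0.0.0.0:4001"
ETCD_CLUSTER_MEMBERS="-initial-cluster default=http://10.180.64.7:7001"
ETCD_CLUSTER_STATE="-initial-cluster-state new"
ETCD_ARGS=""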

3.1.2. Kubernetes cluster configuration file

cfg-common:

KUBE_ETCD_SERVERS="--etcd_servers=http://10.180.64.7:4001"

The etcd service address. The etcd service was started above, so this is set to http://10.180.64.7:4001.

KUBE_LOGTOSTDERR="--logtostderr=true"

Whether to log errors to a file or output them to stderr.

KUBE_LOG_LEVEL="--v=0"

Log level.

KUBE_ALLOW_PRIV="--allow_privileged=false"

Whether to allow privileged containers to run.

3.1.3. apiserver configuration file

cfg-apiserver:

KUBE_API_ADDRESS="--address=0.0.0.0"

The listen address. If set to 127.0.0.1 it listens only on localhost; 0.0.0.0 listens on all interfaces, and it is configured as 0.0.0.0 here.

KUBE_API_PORT="--port=8080"

The apiserver listen port, 8080 by default; no modification needed.

KUBE_MASTER="--master=10.180.64.6:8080"

The apiserver's service address. controller-manager, scheduler, and kubelet all use this value, configured here as 10.180.64.6:8080.

KUBELET_PORT="--kubelet_port=10250"

The port the kubelet listens on on the minions, 10250 by default; no modification needed.

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

The range of IPs Kubernetes can allocate; every service Kubernetes starts is assigned an IP address from this range.

KUBE_API_ARGS=""

Additional configuration items; nothing needs to be added just to bring up a cluster.

3.1.4. Controller configuration file

cfg-controller-manager:

KUBELET_ADDRESSES="--machines=10.180.64.8,10.180.64.9"

The list of minions in the Kubernetes cluster, configured here as 10.180.64.8,10.180.64.9.

KUBE_CONTROLLER_MANAGER_ARGS=""

Additional parameters, if required.

3.1.5. Scheduler configuration file

cfg-schedule:

Empty by default; add extra parameters yourself if needed.

3.1.6. Kubelet configuration file

cfg-kubelet:

KUBELET_ADDRESS="--address=10.180.64.8"

The address the kubelet listens on. Configure each minion with its actual IP: 10.180.64.8 on minion1 and 10.180.64.9 on minion2.

KUBELET_PORT="--port=10250"

The listen port. Do not modify it; if you do, you must also modify the corresponding configuration item in the files on the master.

KUBELET_HOSTNAME="--hostname_override=10.180.64.8"

The minion name as seen by Kubernetes. When you run kubecfg list minions you will see this name rather than the hostname; setting it to the IP address makes nodes easy to identify.

KUBELET_ARGS=""

Additional parameters to add.
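For example, on minion2 the same file would read as follows (a sketch using the values given above; cfg-common is shared unchanged):

KUBELET_ADDRESS="--address=10.180.64.9"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=10.180.64.9"
KUBELET_ARGS=""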

3.1.7. Proxy configuration file

cfg-proxy:

Empty by default; configure extra parameters yourself if needed.

3.2. Kubernetes Startup

Extract kube-start.tar.gz. Copy cfg-etcd and kube-etcd to the etcd node and make kube-etcd executable. Copy cfg-common, cfg-apiserver, cfg-controller-manager, cfg-schedule, apiserver, controller, and schedule to the master and make apiserver, controller, and schedule executable. Copy cfg-common, cfg-kubelet, cfg-proxy, cadv, kube, and proxy to all minion hosts, make sure cfg-kubelet is modified correctly on each minion, and make cadv, kube, and proxy executable.
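The start scripts are thin wrappers that source the configuration files and pass the variables to the corresponding binary. As a rough sketch of what the apiserver script might contain (the actual contents of kube-start.tar.gz may differ):

#!/bin/bash
# load the shared and apiserver-specific settings (assumed layout)
. ./cfg-common
. ./cfg-apiserver
# start kube-apiserver with the configured flags
kube-apiserver ${KUBE_ETCD_SERVERS} ${KUBE_LOGTOSTDERR} ${KUBE_LOG_LEVEL} ${KUBE_ALLOW_PRIV} \
    ${KUBE_API_ADDRESS} ${KUBE_API_PORT} ${KUBELET_PORT} ${KUBE_SERVICE_ADDRESSES} ${KUBE_API_ARGS}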

First run the etcd service; on the etcd node execute:

root@etcd:~# ./kube-etcd &

Verify that etcd is working; on the master execute:

root@master:~# curl -L http://10.180.64.7:4001/version

etcd 2.0.0-rc.1

Then execute the following on the master, in order:

root@master:~# ./apiserver &

root@master:~# ./controller &

root@master:~# ./schedule &
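If you want to confirm the apiserver is answering, you can query its version endpoint (an optional check):

root@master:~# curl http://10.180.64.6:8080/version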

Finally, execute the following on all minion nodes, in order:

root@minion:~# ./cadv &

root@minion:~# ./kube &

root@minion:~# ./proxy &

After all the components are running, check the status from the master.

View the cluster status:

root@master:~# kubecfg list minions

Minion identifier   Labels
----------          ----------
10.180.64.9
10.180.64.8

You can see the two nodes 10.180.64.8 and 10.180.64.9 in the cluster; these are the 2 minions deployed earlier.

View the pods in the current cluster:

root@master:~# kubecfg list pods

Name                                   Image(s)           Host           Labels       Status
----------                             ----------         ----------     ----------   ----------
e473c35e-961d-11e4-bc28-fa163e8b5289   dockerfile/redis   10.180.64.9/   name=redis   Running

You will not see this redis entry yourself; if you have just created the cluster there are no pods yet. Of course, if you created the cluster with one click on GCE or AWS, you may see a pod named kubernetes, which is the status monitoring started by default.

The cluster has been created, so let's create a Tomcat ReplicationController to play with. There are several interface styles to choose from; JSON is used here. You need to write a tomcat-controller.json file to tell Kubernetes how to create the controller. The file name can be anything, as long as you understand it. tomcat-controller.json looks roughly like this:

{
  "id": "tomcatController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "tomcatCluster"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "tomcat",
          "containers": [{
            "name": "tomcat",
            "image": "tutum/tomcat",
            "ports": [{
              "containerPort": 8080, "hostPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "tomcatCluster"}
    }
  },
  "labels": {
    "name": "tomcatCluster"
  }
}

The meaning of the values inside will become clear after reading a Kubernetes implementation analysis. After writing the file, have Kubernetes execute it:

root@master:/home/pod# kubecfg -c tomcat-controller.json create replicationControllers

If it reports success, you can then view the controllers in the cluster:

root@master:/home/pod# kubecfg list replicationControllers

Name               Image(s)           Selector             Replicas
----------         ----------         ----------           ----------
redisController    dockerfile/redis   name=redis           1
tomcatController   tutum/tomcat       name=tomcatCluster   2

Please ignore the redis entry. You can see that the Tomcat ReplicationController is up: replicas=2 means 2 Docker containers run in the cluster, using the image tutum/tomcat. If a minion does not have this image, Kubernetes downloads it from the Docker Hub for you; if the image already exists locally, Kubernetes runs the 2 Tomcat containers (pods) directly on the minions.
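If you want to avoid the download on first start, you can pre-pull the image on each minion (an optional step):

root@minion1:~# docker pull tutum/tomcat

Now check whether all of this is true: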

root@master:/home/pod# kubecfg list pods

Name                                   Image(s)           Host           Labels               Status
----------                             ----------         ----------     ----------           ----------
643582db-97d1-11e4-aefa-fa163e8b5289   tutum/tomcat       10.180.64.9/   name=tomcatCluster   Running
e473c35e-961d-11e4-bc28-fa163e8b5289   dockerfile/redis   10.180.64.9/   name=redis           Running
64348fde-97d1-11e4-aefa-fa163e8b5289   tutum/tomcat       10.180.64.8/   name=tomcatCluster   Running

For more usage, see the interface section.


