General overview
The following illustration gives a general overview from my initial reading of the documentation and source code; it looks at Kubernetes from the following three dimensions.
Manipulating objects
Kubernetes exposes its interface in RESTful form, and there are three REST objects that users can manipulate:
Pod: the most basic deployment and scheduling unit in Kubernetes. A pod can contain multiple containers and logically represents one instance of an application. For example, a website application built from a front end, a back end, and a database would run these three components in their respective containers, so we can create a single pod containing the three containers.
Service: a routing-proxy abstraction over pods, which solves the problem of service discovery between pods. Because a pod's runtime state can change dynamically (for example, it is moved to another machine, or scaled in and its process terminated), clients cannot use a hard-coded IP to reach the service a pod provides. The service is introduced so that these dynamic changes are transparent to the client: the client only needs to know the service's address, and the service proxies requests to the pods behind it.
ReplicationController: a replication abstraction over pods, which solves the problem of scaling pods out and in. Typically, a distributed application replicates components for performance or high availability, and scales dynamically according to load. With a ReplicationController we can specify how many replicas an application needs; Kubernetes creates a pod for each replica and ensures that the actual number of pods always equals the number of replicas (for example, when a pod goes down, a new pod is automatically created to replace it).
As you can see, service and ReplicationController are just abstractions built on top of pods, ultimately acting on pods, so how are they associated with pods? This introduces the concept of labels. A label is easy to understand: it is a set of key/value tags attached to a pod that can be used for searching or association, and service and ReplicationController are associated with pods through labels. As shown in the following illustration, three pods carry the label "app=backend"; when creating the service and the ReplicationController we can specify the same label, "app=backend", and through the label-selector mechanism they become associated with these three pods. For example, when another frontend pod accesses the service, the request is automatically forwarded to one of the backend pods.
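The label-selector association described above can be sketched as a simple matching rule: a selector matches a pod when every key/value pair in the selector appears in the pod's labels. The following is a minimal illustration (the pod list and helper function are made up for this sketch, not Kubernetes' actual implementation):

```python
def selector_matches(selector, labels):
    """A selector matches a pod when every key/value pair in the
    selector is also present in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "backend-1", "labels": {"app": "backend"}},
    {"name": "backend-2", "labels": {"app": "backend"}},
    {"name": "frontend-1", "labels": {"app": "frontend"}},
]

# A service created with selector app=backend picks out the two backend pods.
selector = {"app": "backend"}
matched = [p["name"] for p in pods if selector_matches(selector, p["labels"])]
print(matched)  # ['backend-1', 'backend-2']
```

The same matching rule associates a ReplicationController with the pods it manages.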
Functional components
The following illustration shows the cluster architecture diagram from the official documentation, a typical master/slave model.
The master runs three components:
Apiserver: the entry point of the Kubernetes system. It encapsulates the create/delete/update/query operations on the core objects and exposes them as a RESTful interface to external clients and internal components. The REST objects it maintains are persisted to etcd (a distributed, strongly consistent key/value store).
Scheduler: responsible for cluster resource scheduling, assigning a machine to each new pod. Splitting this work into a separate component makes it convenient to plug in a different scheduler.
Controller-manager: responsible for running the various controllers, of which there are two:
Endpoint-controller: periodically associates services with pods (the association information is maintained by endpoint objects), ensuring that the mapping from service to pods is always up to date.
Replication-controller: periodically associates ReplicationControllers with pods, ensuring that the number of replicas a ReplicationController defines always matches the number of pods actually running.
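The periodic reconciliation done by the replication-controller can be sketched as follows (a simplified illustration, not the real controller code; the function and callback names are made up):

```python
def reconcile(desired_replicas, running_pods, create_pod, delete_pod):
    """Compare the desired replica count with the pods actually running,
    then create or delete pods until the two numbers agree."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        for _ in range(diff):           # too few pods: create more
            running_pods.append(create_pod())
    elif diff < 0:
        for _ in range(-diff):          # too many pods: delete the extras
            delete_pod(running_pods.pop())
    return running_pods

# Example: the ReplicationController defines 3 replicas but only 1 pod runs.
pods = ["redis-1"]
counter = [1]
def create_pod():
    counter[0] += 1
    return f"redis-{counter[0]}"
pods = reconcile(3, pods, create_pod, lambda p: None)
print(pods)  # ['redis-1', 'redis-2', 'redis-3']
```

Running this loop periodically is what makes a deleted pod reappear, as the example at the end of this article demonstrates.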
The slave (called a minion) runs two components:
Kubelet: responsible for controlling the Docker containers, e.g. starting/stopping them and monitoring their running state. It periodically fetches the pods assigned to its own machine from etcd and starts or stops the corresponding containers according to the pod information. It also accepts HTTP requests from apiserver and reports a pod's running state.
Proxy: provides the proxy for pods. It periodically fetches all services from etcd and creates proxies according to the service information. When a client pod accesses another pod through a service, the request is forwarded by the local proxy.
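The proxy's forwarding can be sketched as a round-robin choice among a service's backend pods. This is only an illustrative sketch under the assumption of round-robin balancing (the class name and addresses are invented; the real proxy does considerably more, such as watching etcd for service changes):

```python
import itertools

class ServiceProxy:
    """Forward each incoming request to the next backend pod in turn."""

    def __init__(self, backends):
        # Cycle endlessly through the list of backend pod addresses.
        self._cycle = itertools.cycle(backends)

    def pick_backend(self):
        return next(self._cycle)

proxy = ServiceProxy(["10.0.0.1:6379", "10.0.0.2:6379"])
picks = [proxy.pick_backend() for _ in range(4)]
print(picks)
# ['10.0.0.1:6379', '10.0.0.2:6379', '10.0.0.1:6379', '10.0.0.2:6379']
```

Because clients only ever talk to the proxy, the set of backend pods can change without the client noticing, which is exactly the transparency the service abstraction promises.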
Workflows
The three most basic operating objects in Kubernetes were introduced above: pod, ReplicationController, and service. The following sequence diagram describes the interaction between the Kubernetes components and their workflow, starting from the creation of these objects.
Using the sample
Finally, let's get hands-on and run the simplest stand-alone example (all components running on a single machine), with the aim of walking through the basic workflow.
Build the Environment
In the first step, we need the binary executables of each Kubernetes component. They are available in the following two ways:
Download the source code and compile it yourself:
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
build/release.sh
Download a tar file that someone else has already compiled and packaged:
wget https://storage.googleapis.com/kubernetes/binaries.tar.gz
To compile from source you first need to install Golang; after compiling, the packaged files can be found in the kubernetes/_output/release-tars folder. Downloading directly requires no additional software, but may not give you the latest version.
In the second step, we also need the etcd binary executable, which is obtained by:
wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
tar xvf etcd-v0.4.6-linux-amd64.tar.gz
In the third step, start each component:
Etcd
cd etcd-v0.4.6-linux-amd64
./etcd
Apiserver
./apiserver \
    -address=127.0.0.1 \
    -port=8080 \
    -portal_net="172.0.0.0/16" \
    -etcd_servers=http://127.0.0.1:4001 \
    -machines=127.0.0.1 \
    -v=3 \
    -logtostderr=false \
    -log_dir=./log
Scheduler
./scheduler \
    -master 127.0.0.1:8080 \
    -v=3 \
    -logtostderr=false \
    -log_dir=./log
Controller-manager
./controller-manager \
    -master 127.0.0.1:8080 \
    -v=3 \
    -logtostderr=false \
    -log_dir=./log
Kubelet
./kubelet \
    -address=127.0.0.1 \
    -port=10250 \
    -hostname_override=127.0.0.1 \
    -etcd_servers=http://127.0.0.1:4001 \
    -v=3 \
    -logtostderr=false \
    -log_dir=./log
Create pod
Once the runtime environment is up, you can submit a pod. First, write the pod description file and save it as Redis.json:
{
  "id": "redis",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis",
      "containers": [{
        "name": "redis",
        "image": "dockerfile/redis",
        "imagePullPolicy": "PullIfNotPresent",
        "ports": [{
          "containerPort": 6379,
          "hostPort": 6379
        }]
      }]
    }
  },
  "labels": {
    "name": "redis"
  }
}
Then, submit it with the kubecfg command-line tool:
./kubecfg -c Redis.json create pods
After submitting, check the pod status through kubecfg:
# ./kubecfg list pods
ID                  Image(s)            Host                Labels              Status
----------          ----------          ----------          ----------          ----------
redis               dockerfile/redis    127.0.0.1/          name=redis          Running
A status of Running means the pod's container is already up, and you can use the docker ps command to view the container information:
# docker ps
CONTAINER ID   IMAGE                     COMMAND                CREATED       STATUS       PORTS   NAMES
ae83d1e4b1ec   dockerfile/redis:latest   "redis-server /etc/r   seconds ago   Up seconds           k8s_redis.caa18858_redis.default.etcd_1414684622_1b43fe35
Create ReplicationController
Similarly, write the ReplicationController description file and save it as Rediscontroller.json:
{
  "id": "redisController",
  "apiVersion": "v1beta1",
  "kind": "ReplicationController",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "redis"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "redisController",
          "containers": [{
            "name": "redis",
            "image": "dockerfile/redis",
            "imagePullPolicy": "PullIfNotPresent",
            "ports": [{
              "containerPort": 6379,
              "hostPort": 6379
            }]
          }]
        }
      },
      "labels": {"name": "redis"}
    }
  },
  "labels": {"name": "redis"}
}
Then, submit it with the kubecfg command-line tool:
./kubecfg -c Rediscontroller.json create replicationControllers
After submitting, check the ReplicationController status through kubecfg:
# ./kubecfg list replicationControllers
ID                  Image(s)            Selector            Replicas
----------          ----------          ----------          ----------
redisController     dockerfile/redis    name=redis          1
At the same time, one pod is created automatically. Even if we deliberately delete that pod, the ReplicationController guarantees that a new pod is created to replace it, keeping the replica count at 1.