Kubernetes Container Cluster Management System: Basic Explanation

Kubernetes Overview
Kubernetes is a container cluster management system open-sourced by Google; it is the open-source descendant of Google's large-scale internal container management technology, Borg. It includes the following features: container-based application deployment, maintenance, and rolling upgrades; load balancing and service discovery; cross-machine and cross-region cluster scheduling; and automatic scaling.
1. Introduction
2. Environment
Machine name    | Manage IP  | Service IP  | Note
Hctjk8smaster01 | 10.30.2.41 | 10.30.2.141 | Kubernetes MASTER/ETCD
Hctjk8sslave01  | 10.30.2.42 | 10.30.2.142 | Kubernetes SLAVE/ETCD
                |            |             | Kubernetes SLAVE/ETCD

Features and Components
apiserver: the entry point of the Kubernetes system. It encapsulates the create, read, update, and delete operations on core objects and exposes them as a RESTful interface to external clients and internal components. The REST objects it maintains are persisted to etcd, a distributed, strongly consistent key/value store.
Scheduler: responsible for resource scheduling in the cluster, assigning a host to each new pod and writing that decision to etcd. This process is far from simple: many decision factors must be considered, such as spreading pods of the same replication controller across different hosts, so that a node going down has limited impact on the business, and balancing resources to improve the utilization of the whole cluster.

Scheduling Process
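The spreading decision described above can be sketched in a few lines of Go. This is not the real kube-scheduler code, only an illustration of the idea under simplified assumptions: among candidate hosts, prefer the one already running the fewest pods of the same replication controller (the host names and scoring rule here are invented for the example).

```go
package main

import "fmt"

// pickHost chooses the candidate host running the fewest pods that
// belong to the given replication controller, so replicas spread out
// and a single node failure has limited business impact.
func pickHost(hosts []string, podsByHost map[string][]string, rc string) string {
	best, bestCount := "", int(^uint(0)>>1) // start with max int
	for _, h := range hosts {
		count := 0
		for _, owner := range podsByHost[h] {
			if owner == rc {
				count++
			}
		}
		if count < bestCount {
			best, bestCount = h, count
		}
	}
	return best
}

func main() {
	hosts := []string{"node-a", "node-b"}
	pods := map[string][]string{
		"node-a": {"web-rc", "web-rc"}, // already runs two web-rc replicas
		"node-b": {"db-rc"},
	}
	fmt.Println(pickHost(hosts, pods, "web-rc")) // node-b: fewest web-rc pods
}
```

A real scheduler combines many such scoring functions (resource balance, affinity, and so on); this sketch shows only the spreading factor.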
Analysis of Pod States in k8s
From creation to completion, a pod passes through different stages; in the source code these stages are represented by PodPhase:
PodPending   PodPhase = "Pending"
PodRunning   PodPhase = "Running"
PodSucceeded PodPhase = "Succeeded"
PodFailed    PodPhase = "Failed"
PodUnknown   PodPhase = "Unknown"
The complete creation of a pod is usually accompanied by various events; k8s has only 4 event types in total:
Added    EventType = "Added"
Modified EventType = "Modified"
Deleted  EventType = "Deleted"
Error    EventType = "Error"
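As a minimal sketch (plain Go that mirrors the constant names above without importing the real k8s packages), the phases can be modeled together with a small helper that reports whether a phase is terminal. The IsTerminal helper is an illustration added here, not a function from the Kubernetes source:

```go
package main

import "fmt"

type PodPhase string
type EventType string

// Pod phases, mirroring the names used in the k8s source.
const (
	PodPending   PodPhase = "Pending"
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
	PodUnknown   PodPhase = "Unknown"
)

// The four watch event types.
const (
	Added    EventType = "Added"
	Modified EventType = "Modified"
	Deleted  EventType = "Deleted"
	Error    EventType = "Error"
)

// IsTerminal reports whether a pod in this phase will never run again.
func IsTerminal(p PodPhase) bool {
	return p == PodSucceeded || p == PodFailed
}

func main() {
	fmt.Println(IsTerminal(PodRunning))   // false
	fmt.Println(IsTerminal(PodSucceeded)) // true
}
```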
This article analyzes, at the source level, how Swarm implements the features above.
The first part is the overall architecture diagram (the diagram is from DaoCloud): http://blog.daocloud.io/wp-content/uploads/2015/01/swarmarchitecture.jpg

Registration and Discovery of Worker Nodes
When a worker node starts, it registers itself in the backend KV store, at a path like etcd://<ip>/docker/swarm/<node-ip>:2376, registering the worker's current cluster eth0 IP there.
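The registration step can be illustrated with a stdlib-only Go sketch: an in-memory key/value map stands in for etcd, and a worker writes its eth0 IP under a swarm-style discovery path. The path layout and TTL handling here are simplified assumptions for illustration, not Swarm source code:

```go
package main

import (
	"fmt"
	"time"
)

// entry is a registered node value with an expiry, mimicking a KV TTL.
type entry struct {
	value   string
	expires time.Time
}

// register writes the worker's address under a swarm-style discovery path
// and returns the key it wrote.
func register(kv map[string]entry, clusterPath, eth0IP string, ttl time.Duration) string {
	key := fmt.Sprintf("%s/nodes/%s:2376", clusterPath, eth0IP)
	kv[key] = entry{value: eth0IP, expires: time.Now().Add(ttl)}
	return key
}

func main() {
	kv := map[string]entry{}
	key := register(kv, "/docker/swarm", "10.30.2.42", 30*time.Second)
	fmt.Println(key) // /docker/swarm/nodes/10.30.2.42:2376
}
```

In the real system the worker re-registers periodically so the TTL never lapses while the node is alive; the manager discovers live nodes by listing keys under the cluster path.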
Original address: https://docs.docker.com/swarm/discovery/
Docker Swarm supports node discovery in three ways: distributed key/value storage, a static node list, and Docker Hub.
Note: in what follows, "host discovery" is equivalent to "node discovery".

Host Discovery Using Distributed Key/Value Storage
It is recommended to use the libkv project for Swarm node discovery; libkv is an abstraction layer over existing distributed key/value stores. The key/value stores it currently supports include Consul, etcd, and ZooKeeper.
Overlay Networks for Container Communication

Docker 1.12 still inherits the overlay network model and provides a strong network guarantee for its service registration and discovery. Docker's registration/discovery principle actually uses a distributed key/value store as the storage abstraction layer. Docker 1.12 provides a built-in discovery service, so the cluster no longer needs to rely on external discovery services such as Consul or etcd.
Key points: this article introduces a small case of improving system performance. From taking over the task to putting the system online took 12 hours in total, turning a system that could not run into a working one.
This series of documents describes all the steps to deploy a Kubernetes cluster from binaries, rather than deploying the cluster with automated tools such as kubeadm. During deployment, the startup parameters of each component are listed in detail, along with their meanings and the problems you may encounter. Once deployment is complete, you will understand how the components of the system interact.
For example, locally you can use the same kubeconfig as kubectl to configure the client. If you're on a cloud provider such as GKE, you'll need to import an auth plugin.
The Clientset is generated with client-gen. If you open pkg/api/v1/types.go, there is a comment line above the Pod definition, "+genclient=true", which means a client needs to be generated for this type. If you do your own API type extensions, the corresponding clients can be generated in the same way.
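To show the shape of what client-gen produces, here is a stdlib-only Go sketch of the typed-client pattern: a namespaced PodInterface backed by an in-memory store. The names mirror client-go's style (clientset.Pods(ns).Get(...)), but this is an illustration written for this article, not the generated code:

```go
package main

import "fmt"

type Pod struct {
	Name, Namespace string
}

// PodInterface is the typed surface a generated client exposes for pods.
type PodInterface interface {
	Get(name string) (Pod, error)
	List() []Pod
}

// fakePods is an in-memory, per-namespace implementation of PodInterface.
type fakePods struct {
	namespace string
	store     map[string]Pod // keyed by "namespace/name"
}

func (p fakePods) Get(name string) (Pod, error) {
	pod, ok := p.store[p.namespace+"/"+name]
	if !ok {
		return Pod{}, fmt.Errorf("pod %q not found", name)
	}
	return pod, nil
}

func (p fakePods) List() []Pod {
	var out []Pod
	for _, pod := range p.store {
		if pod.Namespace == p.namespace {
			out = append(out, pod)
		}
	}
	return out
}

// Clientset groups typed clients; Pods(ns) mirrors the chaining style
// of clientset.CoreV1().Pods(ns) in client-go.
type Clientset struct{ store map[string]Pod }

func (c Clientset) Pods(namespace string) PodInterface {
	return fakePods{namespace: namespace, store: c.store}
}

func main() {
	cs := Clientset{store: map[string]Pod{
		"default/web": {Name: "web", Namespace: "default"},
	}}
	pod, err := cs.Pods("default").Get("web")
	fmt.Println(pod.Name, err) // web <nil>
}
```

The real generated clients talk to the apiserver over REST instead of a map, but the typed, namespaced interface they expose has exactly this shape.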
Clientset
Rancher Server (rancher/server): the Rancher management server, which runs the web front end and the API.
Rancher Agent (rancher/agent): each node runs a relatively independent agent that administers that node.
Rancher Kubernetes Agent (rancher/kubernetes-agent): the agent responsible for handling communication between Rancher and Kubernetes.
Rancher Agent Instance (rancher/agent-instance): the image for Rancher's agent instances.
Kubernetes E
organized into groups, with load balancing provided between containers.
Scheduling: decides which machine a container runs on.
Components:
kubectl: the client command-line tool, the entry point of the whole system.
kube-apiserver: provides the REST API service as the control entry of the whole system.
kube-controller-manager: performs the background tasks of the whole system, including node status, pod counts, the association of pods and services, and so on.
kube-s
1. Foreword
I first heard about k8s in 2016, when Docker was very hot; I studied it for a while and got to know Docker, but had no usage scenario afterward, so I did not continue to study it in depth. As microservice architecture has become more and more common, the application scenarios for k8s fit better and better. My company recently decided to use k8s for its microservice architecture; k8s technology has matured and many companies already use it at scale in production, so I intend to dig in.
Configuring the Flannel Service

Repeat the flanneld-related steps from the k8s installation section.
Step 1:
nohup ./flanneld --listen=0.0.0.0:8888 >> /opt/kubernetes/logs/flanneld.log 2>&1 &   # start the server process on host 110
nohup ./flanneld -etcd-endpoints=http://192.168.161.110:2379 -remote=192.168.161.110:8888 >> flannel.log 2>&1 &   # start flanneld on each minion node
# set up the subnet on the etcd server
etcdctl set /coreos.com/netw
Ansible's lookup plugins can read information from external data sources and assign it to a variable. The kinds of external data they can fetch include file contents, randomly generated passwords, shell command output, Redis key values, and so on. Note that all lookup operations run on the Ansible control machine, not on the remote target machine.
Example:
---
- hosts: test_server
  remote_user: root
  tasks:
    - name: Get normal file content (files are present