Kubernetes is a Docker-based, open-source container cluster management system initiated and maintained by the Google team. It supports not only common cloud platforms but also internal data centers.
Built on top of Docker, Kubernetes provides a container scheduling service that lets users manage cloud container clusters without performing complex setup work: the system automatically selects an appropriate worker node to carry out the scheduling. The core concept is the pod (container group). A pod consists of a set of containers that run on the same physical worker node; these containers share a network namespace/IP and storage quotas, and each pod can have its ports mapped as needed. In addition, Kubernetes worker nodes are managed by the master and run the services needed to host Docker containers.
About the Kubernetes Project
Kubernetes is an open-source project launched by the Google team. It aims to manage containers across multiple hosts and provides basic mechanisms for deployment, maintenance, and scaling of applications. It is implemented primarily in Go. Kubernetes is:
- Easy to learn: lightweight, simple, easy to understand
- Portable: Supports public cloud, private cloud, hybrid cloud, and multiple cloud platforms
- Scalable: Modular, pluggable, supports hooks, composable in any combination
- Self-healing: Automatic rescheduling, automatic restart, automatic replication
Kubernetes builds on Google's years of experience running production workloads at scale, combined with the best ideas and practices of the community.
In distributed systems, deployment, scheduling, and scaling have always been the most important and fundamental capabilities, and Kubernetes aims to solve exactly this series of problems.
Kubernetes is currently maintained at github.com/googlecloudplatform/kubernetes; the latest version as of this writing is 0.7.2.
Kubernetes can run anywhere!
Although Kubernetes was originally built for GCE, subsequent releases added support for other cloud platforms as well as on-premises data centers.
Kubernetes Architecture Design: Basic Architecture and Concepts
Any good project is inseparable from a sound architecture and design blueprint, and in this section we look at how Kubernetes plans its architecture. To understand and use Kubernetes, we need to understand its basic concepts and roles.
Architecture Design
- Node: A node is a host that runs Kubernetes.
- Container group: A pod corresponds to a group of several containers; the containers in the same group share a storage volume (volume).
- Container group life cycle: Describes the full set of container states, including container group status types, the container group life cycle, events, restart policies, and replication controllers.
- Replication controllers: Primarily responsible for maintaining the specified number of pods running at the same time.
- Service: A Kubernetes service is a high-level abstraction over a logical group of pods, and it also provides a policy for accessing those pods from outside.
- Volume: A volume is a directory to which containers have access.
- Labels: Labels are used to connect a group of objects, such as a container group. Labels can be used to organize and select subsets of objects.
- Interface permissions: Firewall rules for ports, IP addresses, and proxies.
- Web interface: Users can operate Kubernetes through the web interface.
- Command line: Operations are performed with the `kubecfg` command.
Node
What is a node
In Kubernetes, a node is where the actual work happens; nodes were formerly known as minions. A node can be either a virtual machine or a physical machine, depending on the cluster environment. Each node runs the services necessary to host container groups, and all nodes can be managed from the master. The essential services include Docker, the kubelet, and the network proxy.
Node status
The node status describes the current state of a node. At present it contains three pieces of information:
Host IP
The host IP is queried from the cloud platform and saved by Kubernetes as part of the node status. If Kubernetes is not running on a cloud platform, the node ID is used instead. An IP address can change, and a node can have several types of IP addresses, such as a public IP, a private IP, a dynamic IP, IPv6, and so on.
Node phase
A node typically passes through three phases: `Pending`, `Running`, and `Terminated`. When Kubernetes discovers a node and it is available, Kubernetes marks it as `Pending`. Then, at some point, Kubernetes marks it as `Running`. The end of a node's life is called `Terminated`; a terminated node no longer accepts or schedules any requests, and the container groups already running on it are deleted.
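The phase progression above can be sketched as a tiny state machine. This is an illustrative sketch only; the `TRANSITIONS` table and `advance()` helper are assumptions for demonstration, not Kubernetes code — only the phase names come from the text.

```python
# Node phases as described in the text.
PENDING, RUNNING, TERMINATED = "Pending", "Running", "Terminated"

# Allowed transitions: a newly discovered node starts as Pending,
# becomes Running once available, and ends as Terminated.
TRANSITIONS = {
    None: {PENDING},                  # newly discovered node
    PENDING: {RUNNING, TERMINATED},
    RUNNING: {TERMINATED},
    TERMINATED: set(),                # terminal: nothing is scheduled here anymore
}

def advance(current, target):
    """Return the new phase, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal node transition {current!r} -> {target!r}")
    return target
```

The empty transition set for `Terminated` captures the rule that a terminated node accepts no further requests.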
Node condition
The node condition describes the condition of a node in the `Running` phase. The conditions currently available are `NodeReachable` and `NodeReady`; additional conditions may be added later. `NodeReachable` indicates that the node can be reached from the cluster. `NodeReady` indicates that the kubelet returns StatusOK, i.e. its HTTP health check is healthy.
Node Management
Nodes are not created by Kubernetes; they are created by the cloud platform, or they are physical or virtual machines. In Kubernetes, a node is simply a record: after a node is created, Kubernetes checks whether it is available. Kubernetes stores a node with the following structure:
```json
{
  "id": "10.1.2.3",
  "kind": "Minion",
  "apiVersion": "v1beta1",
  "resources": {
    "capacity": {
      "cpu": 1000,
      "memory": 1073741824
    }
  },
  "labels": {
    "name": "my-first-k8s-node"
  }
}
```
Kubernetes checks whether a node is available based on its id. In the current version, there are two components that can be used to manage nodes: the node controller and kube administration.
Node control
In the Kubernetes master, the node controller is the component that manages nodes. Its work mainly includes:
- Cluster-wide node synchronization
- Single-node life cycle management
The node controller runs a synchronization loop that watches the cloud platform's virtual instances and creates or deletes node records based on their state. The polling interval can be controlled with the --node_sync_period flag. If an instance has been created, the node controller creates a record for it; likewise, if an instance is deleted, the node controller deletes the record. Nodes specified with the --machines flag when Kubernetes starts are added the same way as nodes added one at a time with kubectl; the two are equivalent. Setting the --sync_nodes=false flag disables cluster node synchronization, and nodes can also be deleted through the API or the kubectl command line.
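The cluster-wide synchronization described above is a reconciliation loop. The following is a minimal illustrative sketch (not the real node controller): it assumes the cloud platform reports a set of instance ids, compares them with the node records Kubernetes keeps, and creates or deletes records accordingly.

```python
def sync_nodes(cloud_instances, node_records):
    """Reconcile stored node records with the cloud platform's instance list.

    cloud_instances: set of instance ids reported by the cloud platform
    node_records:    dict mapping node id -> node record
    Returns the updated node_records dict.
    """
    # Create a record for each instance that has no record yet.
    for instance_id in cloud_instances - node_records.keys():
        node_records[instance_id] = {"id": instance_id, "kind": "Minion"}
    # Delete records whose backing instance no longer exists.
    for node_id in node_records.keys() - cloud_instances:
        del node_records[node_id]
    return node_records
```

In the real controller this body runs once per --node_sync_period; here the loop is reduced to a single pure reconciliation step.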
Container Group
In Kubernetes, the smallest unit of work is the container group: it is the smallest unit that is created, scheduled, and managed.
What is a container group
The containers in a container group run on the same Docker host and share storage volumes (mount points). A container group is a collection packaged for a specific use, containing one or more containers.
Like an individual running container, a container group is expected to have only a short lifetime. A container group is scheduled onto a node and runs there until its containers' life cycle ends or the group is deleted. If the node dies, the container groups running on it are deleted rather than rescheduled. (Moving container groups may be added in a future release.)
Why container groups were designed: resource sharing and communication
Container groups exist primarily to enable data sharing and communication between their containers.
Within a container group, containers share the same network address and ports and can communicate with one another over the local network. Each container group has its own IP, which can be used to communicate with other physical hosts or containers over the network.
A container group has a set of storage volumes (mount points), mainly so that containers do not lose data after a restart.
Container Group Management
A container group is a high-level abstraction used for management and deployment, and it is also the interface to a group of containers. A container group is the smallest unit of deployment and horizontal scaling.
Use of container groups
Container groups can be combined to build complex applications. Typical uses include:
- Content management, file and data loading, and local cache management.
- Log and checkpoint backup, compression, snapshots, etc.
- Monitor data changes, track logs, log and monitor agents, publish messages, and more.
- Proxies and network bridges
- Controllers, management, configuration, and updates
Alternative Solutions
Why not just run multiple programs in a single container?
- 1. Transparency. Keeping the containers in a group visible to the infrastructure lets the infrastructure provide services to them, such as process management and resource monitoring. This design is convenient for users.
- 2. Decoupling software dependencies. Each container can be rebuilt and released independently; Kubernetes may even support hot releases and hot updates in the future.
- 3. Ease of use. Users do not have to run their own process managers or worry about the exit status of each application.
- 4. Efficiency. Because the infrastructure takes on more responsibility, the containers themselves can be lighter weight.
Container group life cycle
This section briefly describes container status types, the container group life cycle, events, restart policies, and the replication controller.
Status values
- `Pending`: The container group has been accepted by the node, but one or more containers have not yet been started. This includes the time a node spends downloading images, which depends on network conditions.
- `Running`: The container group has been scheduled onto a node and all of its containers have been started, with at least one container in the running state (or restarting).
- `Succeeded`: All containers exited normally.
- `Failed`: All containers in the container group have terminated abnormally.
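One way to make the four status values above concrete is to derive them from the states of a group's containers. The `derive_phase()` helper and its input format below are assumptions for illustration, not Kubernetes API code; only the four status names come from the text.

```python
def derive_phase(containers):
    """Derive the container group's status from its containers.

    containers: list of dicts like
      {"state": "waiting" | "running" | "terminated", "exit_code": int}
    where exit_code is meaningful only for terminated containers.
    """
    states = [c["state"] for c in containers]
    if any(s == "waiting" for s in states):
        return "Pending"      # at least one container has not yet started
    if any(s == "running" for s in states):
        return "Running"      # all started, at least one still running
    # All containers have terminated: success only if every exit code is 0.
    if all(c.get("exit_code", 1) == 0 for c in containers):
        return "Succeeded"
    return "Failed"
```

For example, a group with one running container and one waiting container is `Pending`, while a group whose only container exited with code 0 is `Succeeded`.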
Container group life cycle
In general, once created, a container group is not destroyed automatically; destruction is triggered by some action, either human or by the replication controller. The only exceptions are container groups that exit successfully into the succeeded state, or that fail after being retried several times within a certain period.
If a node dies or becomes unreachable, the node controller marks the container groups on it as failed.
Examples
- Container group in `running` status, 1 container; the container exits gracefully.
  - Log a completion event.
  - If the restart policy is:
    - Always: restart the container; the container group stays `running`.
    - OnFailure: the container group becomes `succeeded`.
    - Never: the container group becomes `succeeded`.
- Container group in `running` status, 1 container; the container exits abnormally.
  - Log a failure event.
  - If the restart policy is:
    - Always: restart the container; the container group stays `running`.
    - OnFailure: restart the container; the container group stays `running`.
    - Never: the container group becomes `failed`.
- Container group in `running` status, 2 containers; one container exits abnormally.
  - Log a failure event.
  - If the restart policy is:
    - Always: restart the container; the container group stays `running`.
    - OnFailure: restart the container; the container group stays `running`.
    - Never: the container group stays `running`.
  - When the second container also exits:
    - Log a failure event.
    - If the restart policy is:
      - Always: restart the container; the container group stays `running`.
      - OnFailure: restart the container; the container group stays `running`.
      - Never: the container group becomes `failed`.
- Container group in `running` status; a container runs out of memory.
  - Mark the container as terminated in error.
  - Log an out-of-memory event.
  - If the restart policy is:
    - Always: restart the container; the container group stays `running`.
    - OnFailure: restart the container; the container group stays `running`.
    - Never: log an error event; the container group becomes `failed`.
- Container group in `running` status; a disk on the node dies.
  - Kill all containers.
  - Log an event.
  - The container group becomes `failed`.
  - If the container group is running under a controller, it is recreated elsewhere.
- Container group in `running` status; its node is partitioned from the cluster.
  - The node controller waits for a timeout.
  - The node controller marks the container group as `failed`.
  - If the container group is running under a controller, it is recreated elsewhere.
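The per-container part of the examples above is a decision table over the restart policy. The following sketch encodes it as a single function; the `on_container_exit()` name and its return convention are illustrative assumptions, not a real Kubernetes API.

```python
def on_container_exit(policy, exited_ok, remaining_running):
    """Decide what happens when a container in a running group exits.

    policy:            "Always" | "OnFailure" | "Never"
    exited_ok:         True if the container exited gracefully
    remaining_running: number of other containers in the group still running
    Returns (action, group_status).
    """
    if policy == "Always":
        # Always restart, regardless of how the container exited.
        return ("restart", "running")
    if policy == "OnFailure":
        if exited_ok:
            # A graceful exit is not a failure: do not restart.
            return ("none", "succeeded" if remaining_running == 0 else "running")
        return ("restart", "running")
    # policy == "Never": never restart; the group's fate depends on the exits.
    if remaining_running > 0:
        return ("none", "running")
    return ("none", "succeeded" if exited_ok else "failed")
```

Tracing the examples: under Never, a graceful single-container exit yields `succeeded`, an abnormal one yields `failed`, and with a second container still running the group stays `running` until that container exits too.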