Summary: Kubernetes is Google's open-source container cluster management system. Built on Docker, it provides resource scheduling, deployment, service discovery, scaling, and other functions for containerized applications. Pods are the smallest units that are created, scheduled, and managed, and this article describes in detail how communication and scheduling work between these pods.
Overview
Pods in Kubernetes are not immortal. They can be replaced over time, especially when managed by a ReplicationController. Although each pod has its own IP address, there is no guarantee that a pod's IP address will remain unchanged over time. This leads to a question: if a set of pods in the Kubernetes cluster (which we call the backend) provides functionality to other pods (called the frontend), how does the frontend find the backend?
Service
A service in Kubernetes is an abstraction that defines a logical collection of pods and a policy for accessing them; it is sometimes referred to as a micro-service. The goal of services is to provide a bridge that lets non-Kubernetes-native applications reach the backend easily, without having to write Kubernetes-specific code. A service gives the user a single IP address and port pair that redirects to an appropriate backend on access. The set of pods backing a service is chosen by a label selector.
For example, consider an "image processing" backend that runs three replicas. These replicas are stateless, and the frontend does not care which replica of the backend it happens to use. Therefore, although the actual pods that make up the backend set may change, frontend users do not need to know about these changes at all. The service abstraction decouples frontend access from the backend implementation.
Defining services
Here is an example of defining a service. In Kubernetes, a service is a REST object, similar to a pod. Like a pod, a service can be created by POSTing its definition to the apiserver. For example, suppose you have a set of pods that expose port 9376 and carry the label "app=MyApp".
{ "id": "MyApp", "selector": { "app": "MyApp" }, "Containerport": 9376, "protocol": "TCP", "port": 8765}
The definition above creates a new service called "MyApp", which maps port 8765 on the service to TCP port 9376 on any pod carrying the "app=MyApp" label. A client can then access the service by connecting to $MYAPP_SERVICE_HOST on port $MYAPP_SERVICE_PORT.
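As a concrete illustration, here is a minimal sketch in Go of how a client pod might use these environment variables to reach the service. It is only an example: the variable names follow the naming convention described further below, and the plain TCP connection is an assumption about the backend protocol.

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// The master publishes each service's address to pods through
	// environment variables (naming convention described below).
	host := os.Getenv("MYAPP_SERVICE_HOST")
	port := os.Getenv("MYAPP_SERVICE_PORT")

	// Connecting to this address reaches the service; the service proxy
	// forwards the traffic to one of the backend pods.
	conn, err := net.Dial("tcp", net.JoinHostPort(host, port))
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect failed:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}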
How does the service work?
A service proxy runs on each node in the Kubernetes cluster. The proxy watches the Kubernetes master for service objects and endpoints (the pods that satisfy a service's label selector) being added and removed, and maintains a mapping from each service to its list of endpoints. It opens a port on the local node for each service and forwards all traffic on that port to a backend. The choice of backend is nominally policy-based, but currently the only supported policy is round-robin.
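The following is a minimal sketch, in Go, of the bookkeeping just described: a map from each service to its endpoint list plus a round-robin cursor. It is purely illustrative; the real proxy learns endpoint lists by watching the master, whereas here they are fed in by hand, and all names and addresses are made up for the example.

package main

import (
	"fmt"
	"sync"
)

// serviceProxy holds, per service, the current endpoint list and a
// round-robin cursor.
type serviceProxy struct {
	mu        sync.Mutex
	endpoints map[string][]string // service name -> "ip:port" endpoints
	next      map[string]int      // service name -> round-robin position
}

func newServiceProxy() *serviceProxy {
	return &serviceProxy{
		endpoints: map[string][]string{},
		next:      map[string]int{},
	}
}

// setEndpoints replaces a service's endpoint list, as happens when pods
// matching the label selector come and go.
func (p *serviceProxy) setEndpoints(svc string, eps []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.endpoints[svc] = eps
}

// pick returns the next backend for a service in round-robin order, the
// only balancing policy currently supported.
func (p *serviceProxy) pick(svc string) (string, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	eps := p.endpoints[svc]
	if len(eps) == 0 {
		return "", false
	}
	ep := eps[p.next[svc]%len(eps)]
	p.next[svc]++
	return ep, true
}

func main() {
	p := newServiceProxy()
	p.setEndpoints("myapp", []string{"10.244.1.5:9376", "10.244.2.7:9376"})
	for i := 0; i < 3; i++ {
		if ep, ok := p.pick("myapp"); ok {
			fmt.Println("forwarding to", ep)
		}
	}
}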
When a pod is scheduled, the master adds a set of environment variables for each service that is active at the time. We support both Docker-links-compatible variables (see makeLinkVariables) and the simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the service name is upper-cased and dashes are converted to underscores. Figure 1 illustrates how services work. For example, a service "redis-master" that listens on TCP port 6379 and has been assigned the IP address 10.0.0.11 produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
This means that any service a pod wants to access must be created before the pod itself, otherwise the environment variables will not be populated. Once DNS support is added, this restriction will no longer apply.
A service can resolve to zero or more endpoints through its label selector. Over the lifetime of a service, the set of pods that make it up can grow, shrink, or turn over completely. Clients will only see problems if a backend they are actively using is removed from the service (and even then, connections that are already open will stay up for some protocols).
Figure 1: Service work diagram
The details
The information above should suffice for most people who just want to use services. However, there is a lot going on behind the scenes that is worth digging into.
Avoid conflicts
A key philosophy of Kubernetes is that users should not be exposed to situations that can cause their operations to fail through no fault of their own. In this context, consider network ports: users should not have to choose a port number that might collide with other users. That would be a failure of isolation.
In order to let users choose a port number for their service, we must ensure that no two services can collide. We do this by assigning each service its own IP address.
IP and Portal
Unlike pod IP addresses, which route to a fixed pod, a service IP is not actually answered by any single host. Instead, we use iptables (the packet processing logic in Linux) to define "virtual" IP addresses that are transparently redirected as needed. We call the tuple of service IP and service port the portal. When a user connects to the portal, their traffic is automatically forwarded to an appropriate endpoint. The service environment variables are in fact set to the portal IP and port. In addition, we plan to add DNS support for reaching services.
For example, consider the image processing application described above. When the backend service is created, the Kubernetes master assigns it a portal IP address, for example 10.0.0.1. Assuming the service port is 1234, the portal is then 10.0.0.1:1234. The master stores this information, which is also observed by every service proxy instance in the cluster. When a proxy sees a new portal, it opens a new random port, sets up an iptables redirect from the portal to that port, and begins accepting connections on it.
When a client connects to MYAPP_SERVICE_HOST on the portal port (whether it uses the port as a literal value or as MYAPP_SERVICE_PORT), the iptables rule kicks in and redirects the packets to the service proxy's own port. The service proxy then chooses a backend and starts proxying traffic from the client to that backend. Figure 2 illustrates this in detail.
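As a rough sketch of this last step, assuming a backend has already been chosen (for example by round-robin logic like the sketch above), the proxy's job reduces to accepting connections on its local port and copying bytes in both directions. The addresses and port numbers below are invented for the example.

package main

import (
	"io"
	"log"
	"net"
)

// proxyPortal listens on the local port that iptables redirects portal
// traffic to, and forwards each accepted connection to a backend chosen
// by pickBackend (standing in for the proxy's round-robin selection).
func proxyPortal(localAddr string, pickBackend func() string) error {
	l, err := net.Listen("tcp", localAddr)
	if err != nil {
		return err
	}
	for {
		client, err := l.Accept()
		if err != nil {
			return err
		}
		go func(client net.Conn) {
			defer client.Close()
			backend, err := net.Dial("tcp", pickBackend())
			if err != nil {
				log.Println("backend dial failed:", err)
				return
			}
			defer backend.Close()
			// Shuttle bytes both ways until either side closes.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(client)
	}
}

func main() {
	// Hypothetical local port; iptables would redirect the portal
	// (e.g. 10.0.0.1:1234) to it.
	log.Fatal(proxyPortal(":40123", func() string { return "10.244.1.5:9376" }))
}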
The net result is that users can choose any service port they want without risk of collision. Clients can simply connect to an IP and port, without ever knowing which pods they are actually reaching.
Figure 2: Service IPs and portals
External services
For some parts of an application, such as the frontend, users may want to expose a service on an external, publicly accessible IP address.
If you want your service exposed on a public IP address, you can provide a list of "publicIPs" that the service should respond to. These IP addresses, combined with the service port, are mapped to the set of pods selected by the service. You are then responsible for ensuring that traffic to those public addresses reaches one or more Kubernetes worker nodes. As with internal service IPs, an iptables rule on each host maps packets destined for a public IP to the internal service proxy.
For cloud providers that offer external load balancers, there is a simpler way to achieve the same effect. On such providers (for example GCE), instead of setting publicIPs you can set the createExternalLoadBalancer flag on the service. This provisions a cloud-provider-specific load balancer (assuming your cloud provider supports it) and populates the public IP field with the appropriate value.
Disadvantages
We expect that using iptables for portals will work at small scale, but will not scale to large clusters with thousands of services. See the original design of portals for more details.
Source: Google Kubernetes design documentation, Services.