[Kubernetes] The Kubernetes Network Model




The Kubernetes network model is made up of four parts, from the inside out:


    1. The network where the containers inside a pod reside
    2. The network where the pod resides
    3. The network for communication between pods and services
    4. The network for communication between the outside world and services





It is recommended that you understand Docker's network model before reading this article; refer to the author's previous two articles, [Kubernetes] Docker's network model and [Kubernetes] Docker's overlay network model.


1. The network where the containers inside a pod reside, and the network where the pod resides





Kubernetes uses an "IP-per-pod" network model: an IP address is assigned to each pod, and the containers inside the pod share the pod's network space, which means they share the pod's NIC and IP. How does Kubernetes achieve this? Attentive readers may have already thought of the Docker container network described in [Kubernetes] Docker's network model. When Kubernetes creates a pod, it first creates a pod container on the node's Docker, running on the bridge network; this pod container gets a virtual NIC eth0 and is assigned an IP address. The other containers in the pod, called app containers, are created with --net=container:<pod container> so that they share the pod container's network space.
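At the Docker level this looks roughly like the sketch below (the container names and the app image are hypothetical, and the pause image name and tag vary with the Kubernetes version):

# Create the pod ("infrastructure") container on the bridge network; it owns eth0 and the IP
docker run -d --name pod-infra gcr.io/google_containers/pause:2.0

# Create the app container and join the pod container's network namespace
docker run -d --name app --net=container:pod-infra my-app-image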



For example, the author has a Kubernetes node host, PT-169124, with IP address 192.168.169.124. The author's environment uses flannel to drive Docker's bridge network, and the flannel network device can be seen with the ifconfig command:


flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.17.0  netmask 255.255.0.0  destination 172.17.17.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen (UNSPEC)
        RX packets 7988  bytes 410094 (400.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12681  bytes 17640033 (16.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


This flannel0 device is created by flannel, which works together with kubelet on the node; the subnet that flannel allocates for this host is used to configure the bridge network when the Docker daemon starts, so containers created by Docker use IP addresses from this flannel network segment by default. Look at the Docker containers running on this host (to keep the output compact, only the container ID and name columns are shown here):
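As a rough sketch of the typical flannel-to-Docker handoff (the file path and daemon options follow flannel's usual conventions; the exact setup in the author's environment may differ):

# flannel writes the subnet it leased for this host into an env file
cat /run/flannel/subnet.env
# FLANNEL_SUBNET=172.17.17.1/24
# FLANNEL_MTU=1472

# The Docker daemon is then started with options derived from that file, e.g.
docker daemon --bip=172.17.17.1/24 --mtu=1472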


# docker ps
CONTAINER ID        NAMES
7672d97e01d1        k8s_kubernetes-dashboard.979fa630_kubernetes-dashboard-4164430742-lqhcg_kube-system_a08a0d66-57bf-11e6-84b8-5cf3fcba84a8_086f7305
431338595af6        k8s_POD.42430ae1_kubernetes-dashboard-4164430742-lqhcg_kube-system_a08a0d66-57bf-11e6-84b8-5cf3fcba84a8_e96d8681


These two containers are the pod container and the app container of the kubernetes-dashboard that the author created in the article [Kubernetes] Kubernetes cluster and Docker private registry build (CentOS 7). The pod container is running in the flannel-driven bridge network:


# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "6031ecf132905a10b6500a6911d914aff2e15ca8894225aa59ca34bf965b902e",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.17.1/24",
                    "Gateway": "172.17.17.1"
                }
            ]
        },
        "Containers": {
            "431338595af6236a3feb661129e17a4aed4d8331c173903bc2aa7a788d494c6d": {
                "Name": "k8s_POD.42430ae1_kubernetes-dashboard-4164430742-lqhcg_kube-system_a08a0d66-57bf-11e6-84b8-5cf3fcba84a8_e96d8681",
                "EndpointID": "6f22b9c24be10bf069973e0ba651efafdc68e13177e9cbbe3f41b6b4e963eff1",
                "MacAddress": "02:42:ac:11:11:03",
                "IPv4Address": "172.17.17.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1472"
        }
    }
]


Using docker inspect to look at the details of the kubernetes-dashboard app container, you can see that its NetworkMode value is "container:431338595af6236a3feb661129e17a4aed4d8331c173903bc2aa7a788d494c6d", indicating that the app container shares the network space of the pod container, that is, they have the same NIC and IP address.
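A quick way to check this, sketched against the app container ID listed above:

# docker inspect --format '{{ .HostConfig.NetworkMode }}' 7672d97e01d1
container:431338595af6236a3feb661129e17a4aed4d8331c173903bc2aa7a788d494c6d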



Kubernetes This "ip-per-pod" network model allows the pod containers to be accessed through the loopback (127.0.0.1, localhost) network with each other, and also means that the container in the pod does not conflict with port allocation. The pod container doesn't have to worry about port collisions with containers in other pods.



More interestingly, Kubernetes "outsources" the Docker bridge network on each node host to flannel, and the flannel0 networks that flannel creates on these hosts use different sub-ranges of the same subnet. For example, in the author's environment, the flannel0 networks created by flannel on the two node hosts are:



Host PT-169121, IP address 192.168.169.121:


flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.13.0  netmask 255.255.0.0  destination 172.17.13.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen (UNSPEC)
        RX packets 12684  bytes 17640285 (16.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7991  bytes 410346 (400.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Host PT-169124, IP address 192.168.169.124:


flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.17.0  netmask 255.255.0.0  destination 172.17.17.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen (UNSPEC)
        RX packets 7988  bytes 410094 (400.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12681  bytes 17640033 (16.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Through etcd, flannel joins the Docker bridge networks on all node hosts together (see the sketch after the list below), achieving the following:


    • All containers can communicate with each other, regardless of whether they are running on the same node host
    • All containers and node hosts can communicate with each other, regardless of which node host a container is running on.
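A sketch of what flannel keeps in etcd, assuming flannel's default key prefix /coreos.com/network (the subnet leases shown correspond to the two hosts above):

# etcdctl get /coreos.com/network/config
{"Network": "172.17.0.0/16"}

# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.13.0-24
/coreos.com/network/subnets/172.17.17.0-24
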
2. The network for communication between pods and services


A Kubernetes pod is not a long-lived entity; it is destroyed and created for various reasons. For example, during scaling and rolling updates, old pods are destroyed and replaced by new ones, and during this process a pod's IP address may change. For this reason Kubernetes introduces the service. A service is an abstract entity; when a service is created, Kubernetes assigns it a virtual IP. When we need to access the functionality provided by the containers in a pod, we do not use the pod's IP address and port directly, but instead access the service's virtual IP and port, and the service forwards the request to the pods behind it.



When Kubernetes creates a service, it finds the pods based on the service's label selector and creates an endpoints object with the same name as the service. When the pods' addresses change, the endpoints object changes with them. When the service receives a request, it looks up the forwarding destination through the endpoints object.
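A minimal sketch of such a service definition, trimmed to the fields relevant here (it mirrors the real kubernetes-dashboard service shown further below):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  selector:
    app: kubernetes-dashboard        # pods carrying this label become the service's endpoints
  ports:
  - port: 80                         # the service's own port
    targetPort: 9090                 # the port on the selected pods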



For example, the kubernetes-dashboard built by the author is created as such a service, with two pods behind it providing the actual functionality.






Check the service and endpoints information with the kubectl get and kubectl describe commands:


# kubectl get services/kubernetes-dashboard --namespace=kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-08-01T08:12:02Z
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "18293"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: a0953fa0-57bf-11e6-84b8-5cf3fcba84a8
spec:
  clusterIP: 10.254.213.209
  portalIP: 10.254.213.209
  ports:
  - nodePort: 31482
    port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
# kubectl describe endpoints/kubernetes-dashboard --namespace=kube-system
Name:       kubernetes-dashboard
Namespace:  kube-system
Labels:     app=kubernetes-dashboard
Subsets:
  Addresses:            172.17.13.2,172.17.17.3
  NotReadyAddresses:    <none>
  Ports:
    Name        Port    Protocol
    ----        ----    --------
    <unset>     9090    TCP

No events.


As you can see, the service's targetPort and the pods' IP addresses are recorded in the endpoints object that shares the service's name. Let's take a closer look at how a request reaches a pod when the service is accessed through its virtual IP and port.



As mentioned earlier, the service is just a virtual entity; the work behind the service's virtual IP is actually done by the kube-proxy process running on the nodes. Kube-proxy has two request-forwarding modes: userspace mode and iptables mode. Before Kubernetes v1.1, userspace mode was the default; since v1.2, iptables mode is the default.



Userspace mode: when a service is created, kube-proxy on every node opens a random port (called the proxy port) on its local node. It then sets up iptables rules (Linux packet-processing logic) that forward traffic destined for the <service virtual IP, port> to the proxy port; kube-proxy then selects a pod from the endpoints and forwards the proxy-port traffic to that pod. When there are multiple pods behind the endpoints, a pod is chosen with one of two algorithms: (1) round-robin, trying the next pod if one does not respond (when service.spec.sessionAffinity is "None"); (2) selecting a pod based on the request's source IP, so that requests from the same client keep landing on the same pod (when service.spec.sessionAffinity is "ClientIP").
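The NAT rules created in userspace mode look roughly like the sketch below, shown in iptables-save style for brevity (illustrative only, not copied from the author's environment; the proxy port 36000 is hypothetical, while the virtual IP and port belong to the dashboard service above):

-A KUBE-PORTALS-CONTAINER -d 10.254.213.209/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 36000
-A KUBE-PORTALS-HOST -d 10.254.213.209/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.169.124:36000

Kube-proxy listens on the proxy port in user space and forwards each connection to a pod chosen from the endpoints.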






The disadvantages of userspace mode are that it only scales to smaller clusters, and that it masks the request's source IP from the pod, which breaks firewall rules based on the source IP.



iptables mode: when a service is created, kube-proxy on every node sets up two levels of iptables rules: one level is created for the service and redirects traffic destined for the <service virtual IP, port> to the backend, and the other level is created for the endpoints and is used to select a pod. When service.spec.sessionAffinity is "ClientIP", iptables mode selects pods with the same algorithm as userspace mode. When service.spec.sessionAffinity is "None", a pod is chosen at random, and if the chosen pod does not respond, no other pod is tried.






For example, the author's Kubernetes environment is v1.2; the iptables rules that were created can be seen with the iptables -vL --line-numbers -t nat command:



The iptables rules created by kube-proxy for the kubernetes-dashboard service:
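Illustratively, and with a hypothetical per-service chain hash (shown in iptables-save style rather than the -vL listing), the service-level rules have roughly this shape: traffic to the virtual IP and port, or to the nodePort, jumps into a per-service chain.

-A KUBE-SERVICES -d 10.254.213.209/32 -p tcp -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-NODEPORTS -p tcp -m tcp --dport 31482 -j KUBE-SVC-XGLOHA7QRQ3V22RZ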






The iptables rules created by kube-proxy for the kubernetes-dashboard endpoints:
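Again illustratively (the chain hashes are hypothetical, while the pod addresses and targetPort are the ones recorded in the endpoints above): the per-service chain statistically picks one endpoint chain, which DNATs to the pod's IP and targetPort.

-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m statistic --mode random --probability 0.50000 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 172.17.13.2:9090
-A KUBE-SEP-BBBBBBBBBBBBBBBB -p tcp -m tcp -j DNAT --to-destination 172.17.17.3:9090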






The iptables mode is faster and more stable than userspace mode, and it does not have the source-IP problem.



We have described how Kubernetes uses services to "proxy" pods, so how do other pods discover a service once it is created? For example, how does a frontend pod find the backend service? Kubernetes provides two methods: environment variables and DNS. This is covered in the official Kubernetes documentation.
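As a quick sketch for the dashboard service above, following the documented naming conventions (a pod only sees the environment variables if it was created after the service, and the DNS name assumes the cluster DNS add-on with the default cluster.local domain):

# Environment variables injected into pods in the same namespace:
KUBERNETES_DASHBOARD_SERVICE_HOST=10.254.213.209
KUBERNETES_DASHBOARD_SERVICE_PORT=80

# DNS name resolvable from pods across the cluster:
kubernetes-dashboard.kube-system.svc.cluster.local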


3. The network for communication between the outside world and services


If you want to expose a service to the outside world (why not expose the pod directly? I believe the reader now knows the answer), there are three ways:


Type of service


If you do not specify a value for the service's spec.type, the created service defaults to the ClusterIP type. This type of service only gets a virtual IP and port, and can only be accessed from within the Kubernetes cluster.



If the value of Spec.type for the specified service is "Nodeport", the type of service created defaults to the Nodeport type. This type of service, in addition to the virtual IP and port, Kubernetes also assigns ports on all node nodes. The value of the assigned port can be specified by Spec.ports[*].nodeport, or allocated by Knubernetes in a configured interval (default is 30000-32767). This service can be accessed from the Kubernetes cluster via a virtual IP: port or from outside the cluster via the node node's ip:nodeport, for example, the Kubernetes-dashboard service created by the author is the Nodeport type , Kubernetes assigns its nodeport to 31482, which can be accessed by any node's IP (192.168.169.121 or 192.168.169.124).






If the value of Spec.type for the specified service is "LoadBalancer", the type of service created defaults to the LoadBalancer type. This type of service, in addition to the virtual IP and port, Kubernetes also assigns a port to all node nodes and then load-balances it. This service can be accessed from the Kubernetes cluster via a virtual IP: port, through the ip:nodeport of the node node from outside the cluster, and also through load-balanced IP access.


Binding an external IP


By specifying the service's spec.externalIPs[*], the service can be exposed on external IPs that reach the nodes. The outside world can then access the service through such an IP and the service's port.
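A minimal sketch (the external IP below is hypothetical and must actually be routed to a node of the cluster):

spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
  externalIPs:
  - 192.168.169.200                  # hypothetical IP routed to a node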


Using Ingress


See the official Kubernetes documentation for now; the author will add details here after verifying it in this environment.




