The three port settings in a Kubernetes Service
The concepts behind these ports are easily confused. Consider a Service created like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: app1
  name: app1
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30062
  selector:
    name: app1
port
The port on which the service is exposed on the service's cluster IP (a virtual IP). port is the service port accessed by other clients inside the cluster via the cluster IP.
That is, port is the port the Service exposes on its cluster IP; <cluster ip>:port is the entry point through which clients inside the cluster access the service.
nodePort
On top of having a cluster-internal IP, this exposes the service on a port on each node of the cluster (the same port on every node). You can then reach the service at any <node ip>:nodePort address. So nodePort is also a service port, one that can be reached through any node's IP by clients with external IPs.
nodePort is one of the ways (the other being LoadBalancer) Kubernetes exposes a service to clients outside the cluster, so <node ip>:nodePort is the entry point through which clients outside the cluster access the service.
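As a concrete illustration of the two entry points for the example Service above (the cluster IP 10.0.0.12 and node IP 192.168.1.10 below are assumed placeholder values, not taken from a real cluster):

```
# Inside the cluster: the service port on the cluster (virtual) IP
curl http://10.0.0.12:8080/

# Outside the cluster: the nodePort on any node's real IP
curl http://192.168.1.10:30062/
```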
targetPort
The port on the pod that the service should proxy traffic to.
targetPort is the easiest to understand: it is the port on the backend pod's container. Traffic arriving at port or nodePort ultimately passes through kube-proxy and reaches targetPort on a backend pod.
port and nodePort summary
In short, port and nodePort are both service ports: port is exposed to clients inside the cluster, while nodePort is exposed to clients outside the cluster. Traffic arriving on either one passes through the kube-proxy reverse proxy and is forwarded to targetPort of a backend pod, reaching the container running there.
When a client connects to the VIP, the iptables rule kicks in and redirects the packets to the service proxy's own (random) port. The service proxy chooses a backend and starts proxying traffic from the client to the backend. This means the service owners can choose any port they want without risk of collision. The same basic flow executes when traffic comes in through a nodePort or through a load balancer, though in those cases the client IP does get altered.
kube-proxy and iptables
Once the service has port and nodePort, it can serve clients both inside and outside the cluster. How is this actually implemented? The secret is the iptables rules that kube-proxy creates on each node.
By configuring DNAT rules, kube-proxy maps access to the service address (whether the access originates from a local container or from the local host) to a local kube-proxy port (a random port). kube-proxy then listens on that local port and proxies traffic arriving there to the address of a real backend pod.
kube-proxy generates four chains in the nat table, covering two cases: access originating from a container, and access originating from the local host.
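As a sketch of what such a rule looks like in the (legacy) userspace proxy mode being described here — the cluster IP 10.0.0.12 and the random proxy port 36789 are assumed values, not output from a real cluster:

```
# Traffic from local containers to <cluster ip>:port is redirected
# to the random port that the kube-proxy userspace proxy listens on.
-A KUBE-PORTALS-CONTAINER -d 10.0.0.12/32 -p tcp -m tcp --dport 8080 \
    -j REDIRECT --to-ports 36789
```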
After the service is created, kube-proxy automatically creates the following two chains on every node in the cluster:
KUBE-PORTALS-CONTAINER
KUBE-PORTALS-HOST
If nodePort is used, an additional two are generated:
KUBE-NODEPORT-CONTAINER
KUBE-NODEPORT-HOST
KUBE-PORTALS-CONTAINER
Redirects requests arriving at the network interface for the service's cluster entry point <cluster ip>:port to the local kube-proxy port (a random port). These are service access requests coming from local containers.
Note: I think packets in this case cannot come from the external network, because the cluster IP is a virtual IP and the external network has no route that would deliver packets for it to this node. So the request can only come from a local container: traffic from a local container enters the local network interface through the container's virtual NIC.
KUBE-NODEPORT-CONTAINER
Maps requests arriving at the network interface for the service's external entry point <node ip>:nodePort to the local kube-proxy port (a random port). These are mainly service access requests from outside the Kubernetes cluster, but they can also come from containers on this node, from containers on other nodes, or from processes on other nodes.
KUBE-PORTALS-HOST
Redirects requests from processes local to this node that target the service's cluster entry point <cluster ip>:port to the local kube-proxy port (a random port).
KUBE-NODEPORT-HOST
Redirects requests from processes local to this node that target the service's external entry point <node ip>:nodePort to the local kube-proxy port (a random port).
kube-proxy reverse proxy
Whether a request comes in through the cluster-internal service entry point <cluster ip>:port or through the cluster-external entry point <node ip>:nodePort, it is redirected to a local kube-proxy port (a random port). kube-proxy then proxies the traffic arriving on that port to the address of a real backend pod.