The configuration method described in this article is based on Kubernetes 1.5.2.
When the Kubernetes version is 1.2 or later, the external network (that is, the network outside the K8s cluster) accesses a cluster IP as follows:
On the master, edit /etc/kubernetes/proxy and change KUBE_PROXY_ARGS="" to KUBE_PROXY_ARGS="--proxy-mode=userspace".
Restart the kube-proxy service.
Add a route on the core routing device or on the source host that directs traffic for the cluster IP segment to the master.
When the Kubernetes version is earlier than 1.2, just add the route directly; a sketch of these steps follows.
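A minimal sketch of the steps above, assuming a systemd-managed kube-proxy and the commonly used 10.254.0.0/16 service CIDR; the file path, CIDR and master address are examples to adapt to your environment:

```sh
# On the master: switch kube-proxy to userspace mode
sed -i 's|KUBE_PROXY_ARGS=""|KUBE_PROXY_ARGS="--proxy-mode=userspace"|' /etc/kubernetes/proxy
systemctl restart kube-proxy

# On the core router or the source host: send the cluster IP segment to the master
# Replace 10.254.0.0/16 with your service CIDR and <master-ip> with the master's address
ip route add 10.254.0.0/16 via <master-ip>
```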
Two modes of kube-proxy forwarding
kube-proxy is a simple network proxy and load balancer responsible for implementing the Service abstraction; every Service is reflected on all nodes that run kube-proxy. Concretely, it handles internal access from Pods to Services and external access to Services through node ports.
When forwarding, kube-proxy currently has two main modes: userspace and iptables.
In userspace mode (pictured below), the load-balancing proxy runs in user space and is implemented by kube-proxy itself. Before Kubernetes 1.2 this was the default, and all forwarding went through kube-proxy. It is the original implementation: more stable, but naturally not very efficient.
The other mode is iptables (pictured below), which implements load balancing purely with iptables. Since Kubernetes 1.2 it has been kube-proxy's default mode. All forwarding is done by the iptables kernel module; kube-proxy is only responsible for generating the corresponding iptables rules.
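On a node where kube-proxy runs in iptables mode, you can inspect the rules it generates; a quick check, assuming the standard KUBE-SERVICES chain that kube-proxy creates:

```sh
# List the NAT rules kube-proxy maintains for Services (iptables mode)
iptables -t nat -L KUBE-SERVICES -n | head
```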
In userspace mode (the default before Kubernetes 1.2), the external network can access the cluster IP directly.
In iptables mode (the default since Kubernetes 1.2), the external network cannot access the cluster IP directly.
Four ways to forward k8s back-end services
ClusterIP
This type provides a virtual IP inside the cluster that is not on the same network segment as the Pods and is used for communication between Pods within the cluster. ClusterIP is also the default Service type in Kubernetes.
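For illustration, a minimal ClusterIP Service manifest; the name, label and ports are assumptions, not taken from the article:

```yaml
# Default Service type: reachable only inside the cluster at <cluster-ip>:80
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # Pods carrying this label back the Service
  ports:
  - port: 80           # port exposed on the cluster IP
    targetPort: 8080   # container port on the backing Pods
```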
To achieve the behaviour shown in the diagram, the following components need to work together:
apiserver: when you create a Service, the apiserver receives the request and stores the Service data in etcd.
kube-proxy: a process that runs on every k8s node and is responsible for implementing the Service function; it watches for Service and Pod changes and writes the updated information into the local iptables rules.
iptables: uses NAT to forward traffic destined for the virtual IP to an endpoint.
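After creating a Service you can observe each component's contribution; the commands below assume the my-service example sketched above and a node running in iptables mode:

```sh
kubectl get svc my-service         # the Service object stored through the apiserver (backed by etcd)
kubectl get endpoints my-service   # the Pod addresses the Service currently resolves to
iptables -t nat -L KUBE-SERVICES -n | grep my-service   # NAT rules written by kube-proxy
```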
NodePort
In addition to the cluster IP, NodePort mode maps a port of the Service onto a specified port on every node, and that mapped port is the same on each node.
A port is opened on every node, and the Service can be reached at nodeIP:nodePort, while the Service still has a cluster-type IP and port. Internal clients access it through the ClusterIP; external clients access it through the NodePort.
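A sketch of a NodePort Service; the name and port numbers are illustrative assumptions:

```yaml
# Reachable from outside at <node-ip>:30080 and from inside at <cluster-ip>:80
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # port on the cluster IP (internal access)
    targetPort: 8080  # container port on the Pods
    nodePort: 30080   # same port opened on every node (default range 30000-32767)
```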
LoadBalancer
LoadBalancer builds on NodePort: k8s asks the underlying cloud platform to create a load balancer, with each node as a backend, to distribute traffic for the Service. This mode requires support from the underlying cloud platform (for example, GCE).
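A sketch of a LoadBalancer Service on a supporting cloud platform; the name and ports are assumptions:

```yaml
# The cloud provider provisions an external load balancer in front of the nodes
apiVersion: v1
kind: Service
metadata:
  name: my-service-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```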
Ingress
Ingress is an HTTP-level routing and forwarding mechanism composed of an Ingress controller and an HTTP proxy server. The Ingress controller watches the Kubernetes API in real time and keeps the HTTP proxy server's forwarding rules up to date. For the HTTP proxy server there are open-source options such as the GCE load balancer, HAProxy and Nginx.
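A minimal Ingress rule in the API version current for the 1.5.x era; the host, names and the presence of a running Ingress controller are assumptions:

```yaml
# Routes HTTP requests for example.com to the back-end Service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```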
For more information please see http://blog.csdn.net/liyingke112/article/details/77066814
Three kinds of ports for a Service
port
port is the port the Service exposes on the cluster IP; clusterIP:port is the entry point for clients inside the cluster to access the Service.
nodePort
nodePort is the way k8s provides clients outside the cluster with access to the Service; nodeIP:nodePort is the entry point for those external clients.
targetPort
targetPort is the port on the Pod. Traffic arriving at port and at nodePort is ultimately proxied by kube-proxy into the back-end Pod's targetPort and from there into the container.
Summary of port and nodePort
In short, both port and nodePort are ports of the Service: the former is exposed to clients inside the cluster, the latter to clients outside the cluster. Traffic arriving at either port is reverse-proxied by kube-proxy to the targetPort of a back-end Pod, and thus reaches the container in the Pod.
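Tying the three ports together with the NodePort sketch above; the cluster IP, node IP and port numbers are placeholders:

```sh
kubectl get svc my-service-nodeport    # PORT(S) column shows 80:30080/TCP
curl http://<cluster-ip>:80            # inside the cluster: clusterIP:port
curl http://<node-ip>:30080            # outside the cluster: nodeIP:nodePort
# kube-proxy forwards both paths to the Pods' targetPort (8080 in the sketch)
```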