It has been a while since I last wrote anything. Here I am recording some insights from my own learning, hoping to grow together with others studying the same things.
Straight to the topic
The Kubernetes Service concept in the network is implemented by the proxy process (kube-proxy) on each node, which programs iptables rules for load balancing. Simply put, every node carries the same set of iptables rules, which rotate traffic across the backend pods. This is a bit like LVS in NAT mode.
In addition, when a pod inside Kubernetes accesses a Service, the same iptables rules apply, and access from the node itself works the same way.
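To make this concrete, here is a hand-written sketch of the kind of nat rules kube-proxy programs. The chain names (KUBE-SVC-EXAMPLE, KUBE-SEP-A, KUBE-SEP-B) and the pod addresses are illustrative assumptions; the real chains are generated by kube-proxy with hashed names. The VIP and port match the 10.1.200.200:5000 example used later in this post.

    # Create the per-service and per-endpoint chains
    iptables -t nat -N KUBE-SVC-EXAMPLE
    iptables -t nat -N KUBE-SEP-A
    iptables -t nat -N KUBE-SEP-B
    # Traffic for the Service VIP is handed to the per-service chain
    iptables -t nat -A KUBE-SERVICES -d 10.1.200.200/32 -p tcp --dport 5000 -j KUBE-SVC-EXAMPLE
    # Roughly even split across the two backends via the statistic match
    iptables -t nat -A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-A
    iptables -t nat -A KUBE-SVC-EXAMPLE -j KUBE-SEP-B
    # Each endpoint chain DNATs to one backend pod (pod addresses assumed)
    iptables -t nat -A KUBE-SEP-A -p tcp -j DNAT --to-destination 10.1.51.2:5000
    iptables -t nat -A KUBE-SEP-B -p tcp -j DNAT --to-destination 10.1.52.2:5000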
The key point: for access from outside the cluster, the official project offers two ways:
A. NodePort mode: a port on each host is mapped, via iptables, to the Service (every machine gets the same mapping); a short sketch follows this list.
B. The official LoadBalancer, but at the moment this LB only supports Google's cloud and AWS (ordinary users cannot use it).
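For option A, a minimal NodePort sketch; the replication controller name my-app and the service name my-svc are assumptions for illustration:

    # Expose an existing replication controller as a NodePort service
    kubectl expose rc my-app --port=5000 --type=NodePort --name=my-svc
    # Look up the allocated node port, then reach it on any node's address
    kubectl describe svc my-svc | grep NodePort
    curl http://10.1.11.1:<allocated-node-port>/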
That leads to the problem: how do we access a Service or a pod inside the cluster from the outside?
Method 1: read the backend pod addresses out of etcd and call them directly; at the moment most people do it this way (a lookup sketch follows this list).
Method 2: access the Service directly from the outside (what I do here: direct-connect routing mode).
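For method 1, a hedged sketch of how those backend addresses can be found; the default /registry key prefix, the etcd v2 API and the names default/my-svc are assumptions:

    # Read pod objects straight out of etcd
    etcdctl ls /registry/pods/default
    etcdctl get /registry/pods/default/<pod-name>
    # Or ask the API server for the ready backend addresses of a service
    kubectl get endpoints my-svc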
Five machines (a node-side configuration sketch follows this layout):

Master: 10.1.11.250, gateway 10.1.11.254
    Two routes, learned dynamically via the OSPF daemon quagga:
    10.1.51.0/24 via 10.1.11.1
    10.1.52.0/24 via 10.1.11.2
Node1: 10.1.11.1, gateway 10.1.11.254, Docker network 10.1.51.1, plus an extra virtual NIC 10.1.200.253
    10.1.52.0/24 via 10.1.11.2 (learned via quagga)
Node2: 10.1.11.2, gateway 10.1.11.254, Docker network 10.1.52.1, plus an extra virtual NIC 10.1.200.253
    10.1.51.0/24 via 10.1.11.1 (learned via quagga)
Gateway router: 10.1.11.254, external network 10.1.10.1, running quagga
    10.1.51.0/24 via 10.1.11.1
    10.1.52.0/24 via 10.1.11.2
Client: 10.1.10.200
    Static route 10.1.200.0 via 10.1.10.1
    Static route 10.1.11.0 via 10.1.10.1
Kubernetes virtual (Service) network: 10.1.200.0/24
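On Node1 and Node2, the extra virtual NIC and the OSPF advertisement can be set up roughly as follows. This is a minimal sketch with several assumptions: the virtual NIC is a dummy interface named dummy0, OSPF area 0 is used, and quagga's zebra and ospfd daemons are already running; only the addresses come from the layout above.

    # Create the extra virtual NIC holding an address in the Service network
    ip link add dummy0 type dummy
    ip link set dummy0 up
    ip addr add 10.1.200.253/24 dev dummy0
    # Advertise the node, Docker and Service subnets into OSPF via quagga
    # (on Node2 advertise 10.1.52.0/24 instead of 10.1.51.0/24)
    vtysh -c 'configure terminal' \
          -c 'router ospf' \
          -c 'network 10.1.11.0/24 area 0' \
          -c 'network 10.1.51.0/24 area 0' \
          -c 'network 10.1.200.0/24 area 0'

Because both nodes advertise 10.1.200.0/24, the upstream router learns two equal-cost paths to the Service network.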
With the above settings, the router reaches 10.1.200.0/24 with two equal-cost next hops: nexthop 10.1.11.1 and nexthop 10.1.11.2.
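If the gateway router is itself a Linux box, the same equal-cost result can also be written by hand as a single multipath route (a hedged alternative to letting OSPF install it):

    # One static route with two equal-cost next hops toward the Service network
    ip route add 10.1.200.0/24 nexthop via 10.1.11.1 weight 1 nexthop via 10.1.11.2 weight 1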
The router distributes traffic across the two next hops in a simple rotation, and keeps a given session on the same path through its session/route cache, so session consistency is preserved.
So when I publish the Service at 10.1.200.200 port 5000, traffic for it is routed to 10.1.11.1 or 10.1.11.2; because kube-proxy has already set up the DNAT for port 5000 on every node, the access is dispatched straight to a backend pod (the Service itself load-balances across its pods).
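From the client (10.1.10.200) the Service is then reachable like any ordinary address; the URL path here is just an assumption:

    # Confirm the path runs through the gateway router and one of the nodes
    traceroute -n 10.1.200.200
    # Hit the Service VIP and port from outside the cluster
    curl http://10.1.200.200:5000/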
Because the upstream router holds equal-cost routes, different connections land on different nodes, which spreads the load across them.
My writing skills are limited, so I will keep it brief. The Kubernetes Service implementation feels much like LVS full-NAT, and here I paired it with the same kind of direct-connect routing scheme.
This article comes from the "Slow Walk" blog; please be sure to keep this source: http://byebye758.blog.51cto.com/1041315/1753636
Kubernetes direct-connect routing with OSPF equal-cost routes