The first two posts examined the network architecture of the compute node and the Neutron node respectively. This post analyzes how packets actually travel through the network, using several typical traffic flows.
0. Environment
The environment is the same as the one used in "Learning OpenStack (7): Neutron in-depth learning of OVS + GRE (the Neutron node)".
A brief summary:
On the compute node, neutron-ovs-agent is responsible for:
- br-int: the integration bridge; each virtual machine connects to this OVS bridge via a Linux bridge
- br-tun: the tunnel bridge; converts between the local VLAN ID and the tunnel ID in network packets
- GRE tunnel: the virtual GRE channel between nodes
On the Neutron node:
- br-tun/br-int: the same as on the compute node; neutron-ovs-agent is responsible for them
- br-ex: connected to the physical NIC for communication with the external network
- Network namespaces: the qdhcp namespace, which provides the DHCP service for a tenant network and is managed by neutron-dhcp-agent, and the qrouter namespace, which provides routing between networks and is managed by neutron-l3-agent
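These components can be inspected directly on a deployed node. A minimal sketch, assuming a stock OVS + GRE setup (bridge and namespace names vary per deployment, and the router UUID below is a placeholder):

# On the compute or Neutron node: list the OVS bridges (br-int, br-tun, br-ex) and their ports
ovs-vsctl show
# On the Neutron node: list the network namespaces, one qdhcp-<network-uuid> per tenant
# network with DHCP enabled and one qrouter-<router-uuid> per router
ip netns list
# Run a command inside a namespace, e.g. show the interfaces of a router
ip netns exec qrouter-<router-uuid> ip addr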
2. Several typical traffic flows
2.1 Flow 1: Communication between virtual machines in the same subnet on the same host
Because br-int is a virtual layer-2 switch, communication between virtual machines in the same subnet on the same host passes only through the br-int bridge and does not need to go through the br-tun bridge, as shown by the red line in the figure:
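This can be verified on the host: both VM ports carry the same local VLAN tag on br-int, so br-int switches the traffic locally. A minimal sketch (the port name qvoXXXXXXXX is a placeholder for the OVS side of a VM's veth pair):

# List the ports attached to br-int
ovs-vsctl list-ports br-int
# Show the local VLAN tag of one VM port; two VMs in the same subnet share the same tag
ovs-vsctl get Port qvoXXXXXXXX tag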
2.2 Flow 2: Communication between virtual machines in the same subnet on different hosts
The flow:
1. A packet leaves virtual machine 1 on the left and passes through the Linux bridge to reach br-int, where it is tagged with the local VLAN ID.
2. The packet reaches br-tun, the VLAN ID is converted to the tunnel ID, and the packet is sent through the GRE tunnel to the other compute node (the conversion rules can be inspected as sketched below).
3. On the other compute node, the packet goes through the reverse process and reaches the virtual machine on the right.
Note: this flow will be verified by experiment later.
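The VLAN ID to tunnel ID conversion is implemented in the OpenFlow tables of br-tun, which can be dumped for inspection. A minimal sketch (the concrete flow entries differ per deployment):

# On a compute node: dump the OpenFlow rules installed by neutron-ovs-agent on br-tun.
# Outbound rules use actions such as set_tunnel:<tunnel ID> to map a local VLAN to a GRE tunnel;
# inbound rules use mod_vlan_vid:<VLAN ID> to map the tunnel back to the local VLAN.
ovs-ofctl dump-flows br-tun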
2.3 Flow 3: A virtual machine accesses the external network
1. The packet leaves the virtual machine, passes through the Linux bridge, reaches br-int, and is tagged with the local VLAN ID.
2. It reaches br-tun, where the VLAN ID is converted to the tunnel ID.
3. It enters the GRE tunnel through the physical NIC.
4. It arrives from the GRE tunnel at the physical NIC of the Neutron node.
5. It reaches the br-tun connected to that NIC, where the tunnel ID is converted back to a VLAN ID.
6. It reaches br-int and then the router; the router's NAT table translates the fixed IP address into the floating IP address (the NAT rules can be inspected as sketched after this list), and the packet is routed on to br-ex.
7. It goes out to the external network through the physical NIC connected to br-ex.
Access from an external IP to the virtual machine follows the reverse process.
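The fixed-to-floating IP translation in step 6 is implemented with iptables NAT rules inside the qrouter namespace. A minimal sketch for inspecting them (the router UUID is a placeholder):

# On the Neutron node: list the NAT rules of the router namespace.
# Expect a DNAT rule mapping the floating IP to the fixed IP for inbound traffic
# and an SNAT rule mapping the fixed IP to the floating IP for outbound traffic.
ip netns exec qrouter-<router-uuid> iptables -t nat -S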
2.4 Flow 4: A virtual machine sends a DHCP request
The flow:
1. Request: virtual machine -> Linux bridge -> br-int -> br-tun -> eth2 (compute node) -> GRE tunnel -> eth2 (Neutron node) -> br-tun -> br-int -> qdhcp namespace.
2. The qdhcp namespace (dnsmasq) returns the virtual machine's fixed IP address, and the reply travels the same path in reverse.
For example, during the startup of a virtual machine (IP 10.0.22.202), the DHCP server (10.0.22.201) receives its request and replies:
root@<host>:/home/s1# ip netns exec qdhcp-d24963da-5221-481e-adf5-fe033d6e0b4e tcpdump -v
tcpdump: listening on tap15865c29-9b, link-type EN10MB (Ethernet), capture size 65535 bytes //dnsmasq listens on this tap device
07:16:56.686349 IP (tos 0x0, ttl 64, id 41569, offset 0, flags [DF], proto UDP (17), length 287)
    10.0.22.202.bootpc > 10.0.22.201.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:19:65:62 (oui Unknown), length 259, xid 0xab1b9011, secs 118, Flags [none] (0x0000)
      Client-IP 10.0.22.202 //virtual machine eth0 IP address
      Client-Ethernet-Address fa:16:3e:19:65:62 (oui Unknown)
      Vendor-rfc1048 Extensions
        Magic Cookie 0x63825363
        DHCP-Message Option 53, length 1: Release
        Client-ID Option 61, length 7: ether fa:16:3e:19:65:62 //virtual machine eth0 MAC address
        Server-ID Option 54, length 4: 10.0.22.201 //DHCP server IP address
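The tap device that dnsmasq listens on, and the dnsmasq instance itself, can be located as follows. A minimal sketch (the network UUID is a placeholder):

# On the Neutron node: show the tap interface inside the qdhcp namespace
ip netns exec qdhcp-<network-uuid> ip addr
# Find the dnsmasq process serving this network (its command line contains the network UUID)
ps aux | grep dnsmasq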
2.5 Flow 5: Communication between virtual machines in different tenants
A Neutron tenant network carries communication between virtual machines within a single tenant. For virtual machines in different tenants to communicate, a Neutron router must be added between the two subnets, as sketched below.
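With the legacy neutron CLI of this era, adding such a router looks roughly as follows. A minimal sketch, where demo-router and the subnet IDs are placeholders (connecting subnets owned by different tenants to one router generally requires admin privileges or shared networks):

# Create a router and attach both subnets to it
neutron router-create demo-router
neutron router-interface-add demo-router <subnet-a-id>
neutron router-interface-add demo-router <subnet-b-id>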
3. Some quick conclusions on GRE/OVS/Neutron
1. GRE: it isolates broadcast storms, does not require trunk-port configuration on the switches, and overcomes the limit on the number of VLAN IDs (4096); as a layer-3 tunneling technology it allows deployment across server rooms. However, it is a point-to-point technology: every pair of endpoints needs its own tunnel, which is wasteful of layer-4 port resources. In addition, carrying the tunnel ID in the IP header inevitably reduces the MTU available to the VM, so the same amount of data requires more IP packets, which affects transmission efficiency (a common workaround is sketched at the end of this section).
2. OVS: traffic limiting, traffic monitoring, and packet analysis can be done per VM; OpenFlow can be introduced, separating the control logic from the physical switch; with an SDN controller, VXLAN communication across server rooms can be achieved, although this may bring potential problems of its own.
3. Advantages of Neutron: (1) it provides a REST API; (2) Neutron hands some traditional network management functions over to tenants, who can use them to create their own virtual networks and subnets, create routers, and so on; with these virtual network functions, the underlying physical network can provide additional network services. For example, a tenant can create its own virtual network that resembles a data-center network. Neutron provides a fairly complete virtual network model and API for multi-tenant environments. As with deploying a physical network, creating a virtual network with Neutron requires some basic planning and design.
4. Possible problems with Neutron: (1) single point of failure: as the central network control node, the Neutron node easily becomes a single point of failure; HA should be required in a production environment. (2) Reduced performance: network traffic passes through too many layers, which increases latency. (3) Insufficient scalability: as compute nodes increase rapidly, the Neutron node also needs to scale out.
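For the MTU issue mentioned in point 1, a common workaround in deployments of this era was to have the DHCP agent advertise a smaller MTU to the VMs via dnsmasq. A minimal sketch, assuming GRE overhead leaves 1454 usable bytes on a 1500-byte physical network:

# /etc/neutron/dhcp_agent.ini: point the DHCP agent at a custom dnsmasq config
# dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
# DHCP option 26 = interface MTU; push 1454 to the VMs
dhcp-option-force=26,1454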