Author: Liping Mao
Copyright notice: This article may be reprinted freely. When reprinting, please credit the original source and author with a hyperlink and include this copyright statement.
In OpenStack, the legacy L3 router can cause traffic-concentration problems: both east-west and north-south traffic must pass through the virtual routers on the network nodes. To solve this, the community is introducing the distributed virtual router (DVR) feature. This article focuses on how DVR handles east-west traffic; north-south traffic is outside its scope.
First, let's look at the problem with east-west traffic. A user creates vrouter1 (on the network node) and two virtual networks, net1 and net2, then starts one VM on each network; assume the two VMs run on compute node1 and compute node2. When VM1 wants to communicate with VM2, the traffic must be hairpinned through the network node, concentrating east-west traffic there, as shown in the figure below:
To solve this problem, DVR distributes east-west routing across the compute nodes, achieving true multi-host operation. To analyze the packet flow, make the following assumptions:

1. VM1 and VM2 belong to net1 and net2 respectively, and run on compute node1 and compute node2.
2. net1 and net2 are both attached to the virtual router.
3. compute node1 and compute node2 are connected in VLAN mode.
4. VM1 and VM2 communicate using their fixed IPs; floating IPs are not involved.
5. With DVR, an IR (internal router) instance is created on each compute node; assume its interfaces on net1 and net2 are qr-net1 and qr-net2.
Topology:
To enable DVR, install neutron-l3-agent on each compute node and set it to DVR mode; you also need to switch neutron-openvswitch-agent to DVR mode:
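A minimal configuration sketch of the options involved (file paths may differ by distribution; the network-node variant is noted in the comments):

```ini
# /etc/neutron/neutron.conf on the neutron server:
# make newly created routers distributed by default
[DEFAULT]
router_distributed = True

# /etc/neutron/l3_agent.ini on each compute node
[DEFAULT]
agent_mode = dvr          # network nodes use dvr_snat instead

# ML2 OVS agent configuration on each compute node
[agent]
enable_distributed_routing = True
```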
Let's take a packet sent from VM1 to VM2 as an example to analyze the east-west data path. VM1's default gateway is qr-net1, so the frame leaves VM1 in the following form:
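The original figure is missing; as a rough sketch, the frame VM1 emits looks like this (the addresses are placeholders):

```
src MAC: VM1 MAC           dst MAC: qr-net1 MAC (net1 gateway)
src IP : VM1 IP (net1)     dst IP : VM2 IP (net2)
```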
When the frame reaches br-int, it is forwarded to qr-net1 and thus enters internal router ir1 on compute node1. ir1 looks up its routing table and finds that the destination address belongs to net2. ir1's ARP table is pre-populated with static ARP entries for all VMs, so an entry for VM2 already exists and no ARP request needs to be sent. These ARP tables are maintained by neutron-l3-agent and are updated when a VM is added or deleted. The packet is then forwarded out of the qr-net2 interface in the following form:
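Again sketching the missing figure, after routing in ir1 the frame looks like this (addresses are placeholders):

```
src MAC: qr-net2 MAC       dst MAC: VM2 MAC (from ir1's static ARP table)
src IP : VM1 IP (net1)     dst IP : VM2 IP (net2)
```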
When this frame flows back into br-int, it is forwarded to br-eth0. On br-eth0, the internal VLAN tag is rewritten to the external VLAN, and an OpenFlow rule rewrites the source MAC address to a unique MAC bound to the compute node; this unique MAC is allocated by the DVR mechanism. br-eth0 also carries rules that block ARP requests for qr-net1 and qr-net2 from leaving the host, so that local VMs always use the local internal router.
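A hedged sketch of what such br-eth0 rules might look like (the exact tables, priorities, and the per-host DVR MAC value vary by deployment; angle-bracket values are placeholders):

```
# rewrite the source MAC of routed traffic to the per-host DVR MAC
dl_vlan=<external_vlan>, dl_src=<qr-net2_mac>, actions=mod_dl_src:<dvr_host_mac>,NORMAL

# keep ARP requests for the gateway IPs local to this host
arp, arp_tpa=<qr-net1_ip>, actions=drop
arp, arp_tpa=<qr-net2_ip>, actions=drop
```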
Note: why is a unique MAC bound to each compute node required? Every compute node hosts its own ir1, and the qr-net1 and qr-net2 interfaces have identical IP and MAC addresses on every node. If the source MAC were not rewritten, the OVS instances and the external physical switches would see frames with the same source MAC arriving on different ports, causing MAC address table flapping on the switches. Even with unique per-host MACs, the same MAC can still appear under different VLAN IDs, but the impact is much smaller.
When the frame leaves compute node1, the physical switch forwards it to compute node2. There, br-eth0 converts the external VLAN back to the internal VLAN and forwards the frame to br-int. On br-int, the "group tables" feature introduced in Open vSwitch 2.1 is used to rewrite the source MAC address to qr-net2's MAC and forward the frame to the ports of net2, so VM2 receives the packet. The OpenFlow rule should look roughly like: dl_vlan=net2_local_vlan_id, nw_dst=net2_ip_range, actions: strip_vlan, mod_dl_src=qr-net2_mac, output -> the ports in net2
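The rule above can be sketched with ovs-ofctl group-table syntax (OVS 2.1+ with OpenFlow 1.3; the group id, port numbers, and angle-bracket values are placeholders, not taken from a real deployment):

```
# create an 'all' group that floods the ports belonging to net2
ovs-ofctl -O OpenFlow13 add-group br-int \
  "group_id=1,type=all,bucket=output:10,bucket=output:11"

# rewrite the source MAC to qr-net2's MAC, then hand the packet to the group
ovs-ofctl -O OpenFlow13 add-flow br-int \
  "dl_vlan=<net2_local_vlan>,ip,nw_dst=<net2_cidr>,actions=strip_vlan,mod_dl_src:<qr-net2_mac>,group:1"
```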
References:
https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent
https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z