DVR in OpenStack, Part 1: East-West Traffic Handling


Author: Liping Mao. Posted: 2014-07-04
Copyright notice: may be reproduced freely. When reproducing, please include the original source, the author information, and this copyright notice in the form of a hyperlink.

The L3 router in OpenStack can cause traffic concentration problems: both east-west and north-south traffic must pass through the virtual router on the network node. To solve this, the community has been developing the Distributed Virtual Router (DVR) feature.

This article focuses on how east-west traffic is handled in DVR.

North-south traffic handling is outside the scope of this article.

First, look at the east-west traffic problem. Suppose a user creates a virtual router VRouter1 (on the network node) and two virtual networks, Net1 and Net2, each containing one virtual machine; the two VMs run on Compute Node1 and Compute Node2 respectively. When VM1 wants to communicate with VM2, the traffic must pass through the network node, which causes the east-west traffic concentration problem, as shown below:


DVR was introduced to solve this problem: east-west traffic is distributed across the compute nodes, achieving true multi-host routing.

To analyze the packet flow, assume the following:

1. VM1 and VM2 belong to Net1 and Net2, and run on Compute Node1 and Compute Node2 respectively.
2. Net1 and Net2 are both connected to the virtual router.
3. Compute Node1 and Compute Node2 are connected by VLAN.
4. VM1 and VM2 communicate using fixed IPs; floating IPs are not involved.
5. With DVR enabled, an IR (Internal Router) is created on each compute node; its interfaces connecting to Net1 and Net2 are qr-net1 and qr-net2.

The topology looks like this:

Enabling DVR requires installing neutron-l3-agent on each compute node and turning on its DVR mode; neutron-openvswitch-agent must also be switched to DVR mode:
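The configuration might look like the following (option names as in the Juno-era DVR patches; treat this as a sketch rather than an authoritative reference):

```ini
# neutron.conf on the controller node: make new routers distributed by default
[DEFAULT]
router_distributed = True

# l3_agent.ini on each compute node: run the L3 agent in DVR mode
[DEFAULT]
agent_mode = dvr

# Open vSwitch agent configuration on each compute node
[agent]
enable_distributed_routing = True
```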


We analyze the data flow using a packet sent from VM1 to VM2 as an example. When the packet leaves VM1, its default gateway is qr-net1, so a packet in the following format is emitted:
When the packet reaches br-int, it is forwarded to qr-net1 and thus enters Internal Router IR1 on Compute Node1.

IR1 looks up its routing table and finds that the destination address belongs to Net2. The ARP table of IR1 contains a static ARP entry for every VM, so there is already an entry for the destination VM2 and no ARP request needs to be sent. This ARP table is maintained by neutron-l3-agent.

The ARP table is updated whenever a virtual machine is added or removed.
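Conceptually, the agent keeps a per-router mapping of fixed IP to MAC and installs each entry as a permanent neighbor entry inside the router namespace. A minimal sketch of that bookkeeping (all names here are illustrative, not actual Neutron classes):

```python
# Sketch of the static ARP table that neutron-l3-agent maintains per router.
# Illustrative only; the real agent programs the kernel neighbor table with
# something like: ip neigh replace <ip> lladdr <mac> dev qr-... nud permanent

class StaticArpTable:
    def __init__(self):
        self._entries = {}  # fixed IP -> MAC address

    def vm_added(self, ip, mac):
        # Called when a port on one of the router's networks is created.
        self._entries[ip] = mac

    def vm_removed(self, ip):
        # Called when the port is deleted; the entry is withdrawn.
        self._entries.pop(ip, None)

    def lookup(self, ip):
        # A hit means the IR can forward without sending an ARP request.
        return self._entries.get(ip)

table = StaticArpTable()
table.vm_added("10.0.2.5", "fa:16:3e:aa:bb:cc")  # hypothetical VM2 on Net2
print(table.lookup("10.0.2.5"))  # prints fa:16:3e:aa:bb:cc
```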

The packet is then forwarded out of the qr-net2 interface, in the following format:



When the packet flows into br-int, it is forwarded on to br-eth0. br-eth0 changes the packet's VLAN tag to the external VLAN, and at the same time an OpenFlow rule rewrites the source MAC to a unique MAC address bound to the compute node.
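On br-eth0 the rewrite might be expressed by a flow like the following (all values are made up, and the actual rules installed by the DVR-aware OVS agent differ in table layout and priorities):

```shell
# Rewrite the internal VLAN to the external (provider) VLAN and replace the
# source MAC of routed traffic with the per-node DVR MAC. Example values only:
# VLAN 1 -> VLAN 100, qr-net1 MAC fa:16:3e:11:22:33 -> node MAC fa:16:3f:00:00:01
ovs-ofctl add-flow br-eth0 \
  "priority=4,in_port=1,dl_vlan=1,dl_src=fa:16:3e:11:22:33,actions=mod_vlan_vid:100,mod_dl_src:fa:16:3f:00:00:01,NORMAL"
```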

This unique per-node MAC address is generated by Neutron for DVR.
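Neutron allocates these MACs from a dedicated prefix (the `dvr_base_mac` option, default `fa:16:3f:00:00:00`, kept distinct from `base_mac` so routed DVR traffic is recognizable on the wire). A rough sketch of the allocation idea, not the actual Neutron code:

```python
import random

def allocate_dvr_host_mac(base_mac="fa:16:3f:00:00:00", allocated=None):
    """Pick a per-host MAC under the DVR prefix, avoiding collisions.

    Follows the dvr_base_mac convention that '00' octets are free to
    randomize. Illustrative sketch only, not Neutron's implementation.
    """
    if allocated is None:
        allocated = set()
    octets = base_mac.split(":")
    while True:
        candidate = ":".join(
            o if o != "00" else "%02x" % random.randint(0, 255)
            for o in octets
        )
        if candidate not in allocated:
            allocated.add(candidate)
            return candidate

mac = allocate_dvr_host_mac()
print(mac)  # e.g. fa:16:3f:5d:a2:07 (random suffix)
```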

At the same time, rules on br-eth0 drop ARP requests for qr-net1 and qr-net2 arriving from the physical network, which ensures that local VMs always use the local Internal Router.
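Such a filter might look like the following flow (illustrative values; the real agent organizes its rules differently):

```shell
# Drop ARP requests for the gateway IP coming in from the physical network,
# so replies always come from the local IR, never another node's copy.
# in_port=2 and 10.0.1.1 (qr-net1's IP) are example values.
ovs-ofctl add-flow br-eth0 \
  "priority=2,in_port=2,arp,arp_tpa=10.0.1.1,actions=drop"
```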
Note: why bind a unique MAC to each compute node? The main reason is that every compute node hosts a copy of IR1, so the qr-net1 and qr-net2 interfaces have the same IP and MAC addresses on every node.

If the source MAC were not changed, the OVS instances on the compute nodes and the external physical switch would see packets with the same source MAC arriving on different ports, causing their MAC address tables to thrash.

Although the same unique MAC then appears with different VLAN IDs, allocating one MAC per compute node keeps the number of required MAC addresses much smaller than allocating one per router interface.

When the packet leaves Compute Node1, the physical switch forwards it to Compute Node2. There, br-eth0 converts the external VLAN back to the internal VLAN and forwards the packet to br-int. On br-int, group tables, a new feature of Open vSwitch 2.1, are used to change the source MAC back to the qr-net2 MAC and forward the packet to all ports on Net2, so VM2 receives it.

The OpenFlow rule should look roughly like this:

dl_vlan=<Net2 local VLAN id>, nw_dst=<Net2 IP range>, actions=strip_vlan, mod_dl_src=<qr-net2 MAC>, output:<all ports in Net2>
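With concrete (made-up) values, installing such a rule might look like this; the ports, VLAN ID, subnet, and MAC are all examples:

```shell
# On Compute Node2's br-int: restore the qr-net2 source MAC for routed
# traffic destined to Net2 (10.0.2.0/24) and deliver it to Net2's ports.
ovs-ofctl add-flow br-int \
  "priority=4,dl_vlan=2,ip,nw_dst=10.0.2.0/24,actions=strip_vlan,mod_dl_src:fa:16:3e:44:55:66,output:3,output:4"
```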

References:
https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
https://wiki.openstack.org/wiki/Neutron/DVR_L2_Agent
https://review.openstack.org/#/q/topic:bp/neutron-ovs-dvr,n,z
Note: this article assumes a VLAN connection between nodes, but the patches actually committed so far support only VXLAN; VLAN support will come later.
