I. Overview of Neutron
As we all know, the entire OpenStack network is implemented through the Neutron component, which has become the most complex part of OpenStack. This article focuses on Neutron's implementation model and application scenarios; without further ado, let's get into the topic.
1. Architecture of Neutron
The architecture of Neutron is as follows:
Neutron server is composed of core plugins and service plugins. The native core plugin is the ML2 plugin, which is split into type drivers and mechanism drivers and provides the basic network types and implementation mechanisms; advanced functions such as XXX and so on are implemented through service plugins. Meanwhile Neutron, as an open component, leaves room for vendors to attach their own plugins. This article uses the ML2 core plugin for the explanation, focusing on OVS with VLAN and VXLAN network types.
2. OpenStack deployment model
Taking 3 nodes as an example, OpenStack consists of a control node, a network node, and a compute node. When the Neutron server on the control node receives a request through the RESTful API or CLI, it passes the information via RPC to the agents on the network node and the compute node, and the agents then direct specific programs to implement the functions.
For example, when the Neutron server receives a command via the CLI to turn on the DHCP function, it sends the instruction to the DHCP agent, and the DHCP agent implements DHCP through the specific program dnsmasq. Likewise, the L3 agent is implemented by the Linux kernel with its forwarding function turned on.
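As a hedged illustration (not taken from the article itself), the work of these agents can be observed directly on a network node: the L3 agent depends on kernel IP forwarding being enabled, and the DHCP agent spawns one dnsmasq process per tenant network.
# Kernel IP forwarding must be on (value 1) for the L3 agent's routers to work
sysctl net.ipv4.ip_forward
# One dnsmasq process per tenant network, started by the DHCP agent
ps -ef | grep [d]nsmasq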
3. Linux network virtualization basics
From the above we also learn that Neutron itself does not implement the specific functions, so this section explains the virtual network devices that Neutron commonly involves. Unlike physical devices in the real world, a virtual network device is essentially a construct inside Linux: a data structure, a kernel module, or a device driver can all be called a device. For example, after a hard disk is split into multiple partitions, each partition is a device under Linux. With this concept in place, the following devices are introduced:
(1) TAP device
A TAP device is a layer-2 virtual network device in the Linux kernel. It corresponds to the Ethernet protocol at layer 2, so it is often called a virtual Ethernet device, and it implements the function of a virtual NIC.
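As a small sketch (the device name tap-demo is just an illustrative choice), a TAP device can be created and inspected with iproute2:
# Create a layer-2 TAP device, which behaves like a virtual NIC
ip tuntap add dev tap-demo mode tap
# Bring it up and confirm it shows as an Ethernet interface
ip link set tap-demo up
ip link show tap-demo
# Clean up
ip tuntap del dev tap-demo mode tap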
(2) Namespace
Namespace is abbreviated NS. Traditional Linux resources are global; a namespace creates many isolated spaces within one host, invisible to each other, turning global resources into resources private to a specific NS. The resources a namespace can isolate are:
| Resource | Meaning |
| --- | --- |
| uts_ns | UTS is short for Unix Timesharing System; contains the kernel name, version, underlying architecture, and similar information |
| ipc_ns | All information related to inter-process communication (IPC) |
| mnt_ns | Currently mounted file systems |
| pid_ns | Information about process PIDs |
| user_ns | Information on resource quotas |
| net_ns | Network information |
Looking specifically at the network, each NS has its own independent network protocol stack.
(3) Veth pair
A virtual Ethernet interface that always appears in pairs. It can be understood as a virtual network cable: data sent in at one end comes out at the other end. It is used to connect different namespaces or virtual network elements.
(4) Bridge
A virtual switch, i.e. the OVS or Linux bridge covered in an earlier article; please refer to the author's other blog posts for details, which will not be repeated here.
Let's use an example to illustrate the above concepts, as shown in the figure:
A host has 4 namespaces, each connected to the virtual bridge through one veth pair.
Create the veth pairs
[root@localhost ~]# ip link add tap1 type veth peer name tap1_peer
[root@localhost ~]# ip link add tap2 type veth peer name tap2_peer
[root@localhost ~]# ip link add tap3 type veth peer name tap3_peer
[root@localhost ~]# ip link add tap4 type veth peer name tap4_peer
Create NS
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns add ns2
[root@localhost ~]# ip netns add ns3
[root@localhost ~]# ip netns add ns4
Move the tap into the corresponding NS
[root@localhost ~]# ip link set tap1 netns ns1
[root@localhost ~]# ip link set tap2 netns ns2
[root@localhost ~]# ip link set tap3 netns ns3
[root@localhost ~]# ip link set tap4 netns ns4
Create Bridge
[root@localhost ~]# yum install bridge-utils.x86_64 -y
[root@localhost ~]# brctl addbr br1
Add tap Peer to the corresponding bridge
[root@localhost ~]# brctl addif br1 tap1_peer
[root@localhost ~]# brctl addif br1 tap2_peer
[root@localhost ~]# brctl addif br1 tap3_peer
[root@localhost ~]# brctl addif br1 tap4_peer
Configure the IP address of each tap
[root@localhost ~]# ip netns exec ns1 ip addr add 192.168.10.1/24 dev tap1
[root@localhost ~]# ip netns exec ns2 ip addr add 192.168.10.2/24 dev tap2
[root@localhost ~]# ip netns exec ns3 ip addr add 192.168.10.3/24 dev tap3
[root@localhost ~]# ip netns exec ns4 ip addr add 192.168.10.4/24 dev tap4
Bring up the bridge and all tap devices
[root@localhost ~]# ip link set br1 up
[root@localhost ~]# ip link set tap1_peer up
[root@localhost ~]# ip link set tap2_peer up
[root@localhost ~]# ip link set tap3_peer up
[root@localhost ~]# ip link set tap4_peer up
[root@localhost ~]# ip netns exec ns1 ip link set tap1 up
[root@localhost ~]# ip netns exec ns2 ip link set tap2 up
[root@localhost ~]# ip netns exec ns3 ip link set tap3 up
[root@localhost ~]# ip netns exec ns4 ip link set tap4 up
Validation results
[root@localhost ~]# ip netns exec ns4 ping 192.168.10.1
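Besides the ping test, a couple of optional checks (a sketch, assuming the setup above) confirm the topology:
# br1 should list all four peer interfaces as its ports
brctl show br1
# Each namespace sees only its own interfaces and routes
ip netns exec ns1 ip addr
ip netns exec ns1 ip route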
II. The network implementation model of Neutron
1. Overall network model
Again taking the 3-node deployment as an example, the network model at this point is as follows:
From the figure we can see that under native OpenStack, VMs on all compute nodes must go through the network node whenever they need to reach the external network, and each tenant has its own DHCP and router, isolated through namespaces. One point deserves special emphasis: the external network in Neutron is simply a network that Neutron cannot manage; it is not necessarily the public Internet.
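This per-tenant isolation can be observed on the network node: Neutron names the namespaces qdhcp-&lt;network-id&gt; and qrouter-&lt;router-id&gt;. A hedged sketch follows (the IDs are placeholders to be replaced with real UUIDs):
# List the DHCP and router namespaces created by Neutron
ip netns list
# Inspect one of them, e.g. the DHCP namespace of a tenant network
ip netns exec qdhcp-<network-id> ip addr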
2. Compute node Implementation Model
2.1 VLAN Implementation Model
This section focuses on the implementation model of the compute node and on Neutron's concept of converting packets between internal and external VIDs. For example, assume vm1-1 on node Host1 and vm2-1 on node Host2 both belong to VLAN 10 and that Neutron uses the VLAN network type. The communication flow between them is:
(1) vm1-1 sends a plain Ethernet frame (the VM can also send and receive frames carrying a VID, which will be described in a later article) to qbr-xx. qbr-xx is a Linux bridge device that connects to vm1-1 through a tap device; in reality there is only one tap device, which can be understood as half of the tap sitting on the bridge and half on the VM, and two taps are drawn only for ease of understanding. The role of qbr-xx is to apply the security group features, since native OVS did not support them (stateful OpenFlow is now supported), and qbr-xx bridges correspond 1:1 with the VMs.
(2) When the frame enters br-int, it is tagged with VID 10. br-int manages the local network layer, and there is only one br-int on each compute and network node.
(3) The frame leaves br-int still carrying VID 10; at br-ethx the VID is converted to 100, and the frame leaves Host1 through br-ethx. br-ethx manages the tenant network layer, i.e. the networks that tenants create.
(4) The frame passes through the physical switch to reach Host2 and enters br-ethx, where at the br-ethx exit the VID is converted from 100 back to 10.
(5) The frame flows out of br-ethx and enters br-int with VID 10; at the br-int exit it is untagged and enters qbr-xx as a plain Ethernet frame, finally reaching vm2-1. (A sketch of commands to observe this path follows below.)
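The following is a hedged sketch of commands one might use to observe this path on a compute node; the qbr/tap names are derived from the port UUID, and br-ethx stands for whatever the physical bridge is actually called in the deployment:
# Linux bridges (qbr-xx) and the tap interfaces attached to them
brctl show
# OVS bridges br-int and br-ethx and the ports connecting them
ovs-vsctl show
# Flow rules on br-ethx; the mod_vlan_vid actions perform the internal/tenant VID conversion
ovs-ofctl dump-flows br-ethx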
2.2 VXLAN Implementation Model
If the Neutron network type is VXLAN, the process is broadly similar to the VLAN case, except that instead of a VID-to-VID conversion, the internal VLAN is encapsulated into VXLAN:
Both br-tun and br-ethx are implemented with OVS. Unlike br-ethx, which performs ordinary layer-2 switching, br-tun acts as the VTEP and performs the VXLAN functions, and the two IPs are the VXLAN tunnel endpoint IPs.
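As an illustrative sketch (the IP addresses are placeholders, and the real agent adds more options), the VXLAN tunnel port on br-tun can be inspected, or a simplified one created by hand:
# br-tun carries vxlan-type ports whose options name the local and remote VTEP IPs
ovs-vsctl show
# A simplified version of what the OVS agent sets up between two hosts' tunnel endpoint IPs
ovs-vsctl add-port br-tun vxlan-demo -- set interface vxlan-demo type=vxlan \
    options:local_ip=192.168.100.1 options:remote_ip=192.168.100.2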
3. Network Node Implementation Model
From the network point of view, the network node is divided into 4 layers. The first 2 layers are almost the same as on the compute nodes and will not be repeated. In the network service layer, each network gets one DHCP service (implemented by the dnsmasq program), while the router is implemented by the forwarding function of the Linux kernel and provides the SNAT and DNAT functions; each DHCP instance and router runs in its own namespace. The br-ex of the external network layer is generally also OVS; the floating IP (FIP) addresses are bound to it, providing DNAT for the internal VMs.
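A hedged sketch of how to see the SNAT and DNAT rules held by a router namespace (the router ID is a placeholder):
# Interfaces of the tenant router, including the one plugged into br-ex
ip netns exec qrouter-<router-id> ip addr
# SNAT rules for outbound tenant traffic and DNAT rules for floating IPs
ip netns exec qrouter-<router-id> iptables -t nat -S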
III. Questions arising
The introduction above may raise the following questions:
1. Unlike VXLAN, which re-encapsulates the packet, why do VLAN networks need two OVS bridges (br-int and br-ethx) and have to undergo a VID conversion?
2. Whether it is VID (internal) to VID (tenant) or VID to VNI, how is the correspondence between them established?
Further explanations of these two issues are described below.
1. The meaning of the VID conversion
As we learned earlier, each network and compute node has only one br-int, and the internal network is maintained by Neutron itself, whereas OpenStack allows a tenant to use multiple network types at the same time, for example VLAN and VXLAN together. If the VLAN network type had no br-ethx, a tenant-created VNI 100 might, after the conversion algorithm, also end up with VID 10, and the VIDs would then collide on br-int. Therefore every network type needs a conversion so that Neutron can keep control of the whole picture. Another point to note: regardless of which network type the tenant network layer uses, the local network layer can only ever be one network type: VLAN! (A sketch of how to observe the conversion follows the table below.)
| Network Type | br-int | br-tun |
| --- | --- | --- |
| VLAN | 10 | ---- |
| VXLAN | 10 | 100 |
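A hedged way to observe this conversion on a VLAN deployment (bridge names follow the text; real names depend on the bridge_mappings configuration) is to dump the flows and look for the mod_vlan_vid actions:
# Outbound: flows on br-ethx rewrite the internal VID to the tenant VID
ovs-ofctl dump-flows br-ethx | grep mod_vlan_vid
# Inbound: flows on br-int rewrite the tenant VID back to the internal VID
ovs-ofctl dump-flows br-int | grep mod_vlan_vid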
2. Establishment of conversion relationships
As mentioned earlier, each OVS bridge is created by the OVS agent, and the OVS agent stores the mapping between the internal VID and the external VID (or VNI) in the other_config field of the OVS Port table to complete the conversion.
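This can be checked on a compute node with a hedged sketch (tapxxxxxxxx-xx is a placeholder for a real VM port on br-int; the exact keys stored in other_config may vary by release):
# The OVS agent typically records net_uuid, network_type, physical_network and
# segmentation_id in other_config, and the internal VID in the tag column
ovs-vsctl list Port tapxxxxxxxx-xx
ovs-vsctl get Port tapxxxxxxxx-xx other_config tag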