"I would like to introduce virtualization technology in a step-by-step manner, but the recent work in analyzing open vswitch technology, I want to take advantage of the memory of the openvswitch of the understanding of the detailed summary down"
This article summarizes the documentation provided on the Open vSwitch website, openvswitch.org, as well as other relevant materials.
Open vSwitch Overview:
> Licensed under the Apache 2.0 license.
> A software-only, multilayer virtual switch.
> Supports the OpenFlow protocol.
> Supports multiple hypervisors (Xen, KVM, and other mainstream hypervisors).
> Supports the following features (a short configuration sketch follows this list):
* Standard 802.1Q VLAN model with trunks and access ports
* NIC bonding with or without LACP on upstream switch
* NetFlow, sFlow(R), and mirroring for increased visibility
* QoS (Quality of Service) configuration, plus policing
* GRE, GRE over IPsec, VXLAN, and LISP tunneling
* 802.1ag connectivity fault management
* OpenFlow 1.0 plus numerous extensions
* Transactional configuration database with C and Python bindings
* High-performance forwarding using a Linux kernel module
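To make the feature list more concrete, here is a minimal sketch of how a bridge, an 802.1Q access port, and an LACP bond are typically created with the standard ovs-vsctl tool (wrapped in Python only for readability; the names br0, eth0, tap0, and bond0 are placeholders for illustration):

```python
import subprocess

def vsctl(*args):
    """Run one ovs-vsctl command, raising if it fails (requires root)."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Create a bridge and attach the physical uplink (names are illustrative).
vsctl("add-br", "br0")
vsctl("add-port", "br0", "eth0")

# Attach a VM-facing tap device as an 802.1Q access port on VLAN 10.
vsctl("add-port", "br0", "tap0", "tag=10")

# Bond two uplinks with LACP toward the upstream switch.
vsctl("add-bond", "br0", "bond0", "eth1", "eth2", "lacp=active")
```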
After reading this list you may still not have a clear picture, so let us start from the network structure of a virtualized computing environment; this background is helpful for understanding Open vSwitch.
Before compute virtualization became widespread, the NIC was the host's exit to the network: it connected to a physical switch, and the physical switch forwarded packets according to its forwarding rules. Once a host runs virtualization technology, multiple virtual machines run on it concurrently, yet the host still has only one NIC (or at least fewer NICs than virtual machines), which is why NIC virtualization was introduced. As far back as the late 1990s, Linux introduced bridge technology to connect virtual network interfaces.
In Linux, the most widely used approach to NIC virtualization is TAP + bridge. The structure is shown below:
This scheme is relatively simple to implement, but its problems are also obvious: the bridge itself lacks traffic control and network management capabilities. One obvious issue is that traffic between virtual machines on the same host is exchanged directly through memory and never leaves the host, so this part of the traffic is invisible to the network administrator. The problem becomes more pronounced as the number of virtual machines deployed in the data center grows.
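For later comparison with Open vSwitch, this is roughly how the classic TAP + Linux bridge setup described above is built with the iproute2 tools (sketched in Python; the device names tap0, br0, and eth0 are placeholders, and the commands need root privileges):

```python
import subprocess

def ip(*args):
    """Run one iproute2 command (requires root)."""
    subprocess.run(["ip", *args], check=True)

# Create a tap device that a VM's virtual NIC can be wired to.
ip("tuntap", "add", "dev", "tap0", "mode", "tap")

# Create a Linux bridge and enslave both the tap device and the physical NIC.
ip("link", "add", "name", "br0", "type", "bridge")
ip("link", "set", "tap0", "master", "br0")
ip("link", "set", "eth0", "master", "br0")

# Bring the new devices up.
for dev in ("tap0", "br0"):
    ip("link", "set", dev, "up")
```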
The new generation of virtual switches solves the visibility problem for internal traffic and strengthens traffic control, network management, and QoS features. Representative virtual switch technologies include VMware vSwitch, Cisco Nexus 1000V, and Open vSwitch. Switches of this type also support centralized management, which allows the virtual switches distributed across many hosts to be managed in a unified way. In addition, some NICs support offloads that accelerate Open vSwitch, such as TCP segmentation offload and checksum offload, and some NICs even integrate an L2 switch into the NIC itself, which greatly increases Open vSwitch's packet forwarding rate.
The structure of Open vSwitch is shown in the following figure:
The figure above shows the deployment structure of Open vSwitch on Xen; the KVM deployment differs slightly but is basically the same. The biggest difference on Xen is that the ovs-mod module is deployed in Dom0, whereas on KVM it sits in the kernel of the host OS. In the figure, the hypervisor assigns each virtual machine a virtual NIC (vif), and the vif connects to the fast forwarding module ovs-mod in Dom0. In addition, a set of management daemons is deployed in Dom0's user space, the core of which is ovs-vswitchd. ovs-vswitchd can accept remote control commands, such as commands from an OpenFlow controller.
After the introduction above, you should have a basic understanding of Open vSwitch, which can be summarized as follows:
1. A software virtual switch deployed inside the hypervisor to serve virtual networks
2. Solves the traffic visibility problem of the traditional virtual bridge and strengthens network management and traffic control capabilities
3. Supports a distributed deployment architecture
4. Supports remote management
5. Supports multiple hypervisors
6. Can take advantage of acceleration on compatible hardware NICs
Below is a more detailed look at Open vSwitch's important components and software structure; the overall structure is shown in the figure below:
"such as quoted from Netizens blog: http://chenpiaoping.blog.51cto.com/5631143/1143097"
The diagram above is a good illustration of the OVS software architecture. First, let us look at the components deployed in user space:
1) User space runs a series of daemons, the most important of which are ovs-vswitchd and ovsdb-server.
2) ovs-vswitchd is the most complex part of Open vSwitch and is also its core component, known as the slow path. It is responsible for communicating with remote managers, for example with an OpenFlow controller via the OpenFlow protocol and with sFlowTrend via the sFlow protocol; the remote controller pushes down flow control rules, flow table entries, and so on. The component is also responsible for communicating with the OVS fast path deployed in the kernel, pushing concrete rules and actions down to the OVS datapath; this communication uses the Netlink protocol. (A short sketch of pointing a bridge at a controller follows this list.)
3) ovsdb-server stores the configuration data; ovs-vswitchd communicates with ovsdb-server over a socket to read and write that configuration. (An example of this OVSDB exchange also follows the list.)
4) In addition to the core modules above, there are a number of management tools that provide auxiliary services around these core functions; they are not described further here.
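As mentioned in item 2), ovs-vswitchd accepts control from an OpenFlow controller. Here is a minimal, hedged sketch of pointing a bridge at a controller and installing one flow entry locally, using the standard ovs-vsctl and ovs-ofctl tools (the bridge name br0, the controller address, and the port numbers are assumptions for the example):

```python
import subprocess

def run(*cmd):
    """Run one OVS command-line tool invocation (requires root)."""
    subprocess.run(cmd, check=True)

# Tell ovs-vswitchd which OpenFlow controller manages br0
# (the address 192.0.2.10:6633 is purely illustrative).
run("ovs-vsctl", "set-controller", "br0", "tcp:192.0.2.10:6633")

# A flow entry can also be installed directly with ovs-ofctl, bypassing
# the controller: forward everything arriving on port 1 out of port 2.
run("ovs-ofctl", "add-flow", "br0", "in_port=1,actions=output:2")
```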
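Item 3) said that clients talk to ovsdb-server over a socket; the protocol used is the OVSDB management protocol (JSON-RPC, RFC 7047). The sketch below connects to the server's Unix socket and calls the standard list_dbs method. The socket path shown is the usual default but may differ on your system, and root privileges are typically required:

```python
import json
import socket

# Typical default location of the ovsdb-server control socket (an assumption;
# adjust the path for your installation).
OVSDB_SOCK = "/var/run/openvswitch/db.sock"

def ovsdb_call(method, params):
    """Send one JSON-RPC request to ovsdb-server and return the parsed reply."""
    request = {"method": method, "params": params, "id": 0}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(OVSDB_SOCK)
        sock.sendall(json.dumps(request).encode())
        data = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("ovsdb-server closed the connection")
            data += chunk
            try:
                # Return as soon as a complete JSON object has arrived.
                reply, _ = json.JSONDecoder().raw_decode(data.decode())
                return reply
            except ValueError:
                continue

if __name__ == "__main__":
    # "list_dbs" is a standard OVSDB method; it normally returns ["Open_vSwitch"].
    print(ovsdb_call("list_dbs", []))
```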
The kernel side of OVS is comparatively simple: only the datapath is deployed there, and it handles the actual packet forwarding. When a packet does not match any flow cached in the datapath, it is passed up to ovs-vswitchd over Netlink; the slow path decides what to do and installs a flow entry so that subsequent packets of that flow are forwarded entirely in the kernel.
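The flow entries cached in the kernel datapath can be inspected from user space, which makes this fast-path/slow-path split visible in practice. A small sketch, assuming the standard ovs-dpctl tool is available:

```python
import subprocess

# Dump the flows currently cached in the kernel datapath; each entry was
# installed by ovs-vswitchd after the first packet of that flow missed in
# the kernel and was handed up to the slow path.
flows = subprocess.run(["ovs-dpctl", "dump-flows"],
                       capture_output=True, text=True, check=True)
print(flows.stdout)
```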
It is worth noting that, to improve virtual NIC performance, the industry offers many NICs that can be passed through directly to virtual machines, SR-IOV NICs being the typical example. Because the virtual machine accesses the hardware directly, OVS is not involved in the data path for such NICs; for them, OVS acts more like a network management tool.
At present there are several solutions for accelerating OVS, such as Intel's DPDK-based OVS, netmap/VALE, 6WINDGate, and so on; a later article will expand on OVS acceleration techniques.
That's all for today.