The rise of SDN
Traditional network management is slow: devices are configured manually, often through a command-line interface, which makes operating costs high, network upgrades lengthy, and responses to changing requirements inflexible and error-prone.
In July 2012, SDN pioneer Nicira was acquired by VMware for $1.26 billion, and Google announced that it had successfully deployed SDN across the network connecting its 10 global data centers. These events drew strong industry attention to SDN.

SDN Architecture
Software-Defined Networking (SDN) is a new network architecture and a way of realizing network virtualization. Its core technology, OpenFlow, separates the control plane of network devices from the data plane, enabling flexible control of network traffic so that the network, as a pipe, becomes more intelligent.
SDN has three key characteristics:

- Separation of control and forwarding. The forwarding plane consists of controlled forwarding devices; forwarding behavior and business logic are determined by control applications running on a separate control plane.
- An open interface between the control plane and the forwarding plane. The forwarding plane exposes an open programmable interface to the control plane, so control applications need only focus on their own logic, not on low-level implementation details.
- Logically centralized control. A logically centralized control plane can control many forwarding devices, i.e. the entire physical network, so it can obtain a global view of network state and use that view to optimize the network.

SDN also brings several advantages. Hardware focuses only on forwarding and storage capability and is decoupled from business features, so it can be built on relatively inexpensive commodity architectures. The type and function of a network device are determined by software configuration, and network operation and control are handled by servers acting as the network operating system (NetOS). The network responds to business needs faster: parameters such as routing, security, policy, and QoS can be customized and pushed into the network in real time, shortening the time to launch a specific service.

OpenFlow
OpenFlow is the standard communication interface between the control plane and the forwarding plane in the SDN architecture. It allows direct access to, and manipulation of, the forwarding planes of network devices, both physical and virtual. An OpenFlow-based SDN architecture can respond to today's high-bandwidth, dynamic applications, adapt to changing business needs, and significantly reduce the complexity of operations and management.
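As a concrete (hypothetical) illustration of this programmability, flow rules can be pushed into an OpenFlow switch from the command line. The sketch below assumes an Open vSwitch bridge named br0 with OpenFlow ports 1 and 2; such a bridge is only created later in this article:

```shell
# Sketch: program the OpenFlow flow table on bridge br0 (assumed to exist).
# Forward traffic between OpenFlow ports 1 and 2.
ovs-ofctl add-flow br0 "priority=100,in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "priority=100,in_port=2,actions=output:1"

# Drop anything that matches no higher-priority rule.
ovs-ofctl add-flow br0 "priority=0,actions=drop"

# Inspect the installed flow table.
ovs-ofctl dump-flows br0
```

This is exactly the decoupling described above: forwarding behavior is decided by rules installed from outside the device, not by the device's own logic.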
OpenFlow specification official address: https://www.opennetworking.org/sdn-resources/technical-library
OpenFlow brings three benefits:

- Programmability. Accelerates innovation and the development of new features and services.
- Centralized intelligence. Simplifies initial configuration, optimizes performance, and enables fine-grained policy management.
- Abstraction. Decouples hardware from software, the control plane from the forwarding plane, and physical from logical configuration.

OpenDaylight
OpenDaylight is a community-led open-source framework designed to drive innovation and transparency in software-defined networking (SDN). Managing an SDN network requires the right tools, and that is exactly OpenDaylight's specialty. At the core of the project is a modular, pluggable, and extremely flexible controller that can be deployed on any Java-enabled platform. The controller also includes a set of modules for network tasks that need to be completed quickly.
- Performance and scalability. Supports cluster mode and OpenStack HA, and allows workloads to be deployed conveniently on DPDK-accelerated virtual switches.
- Ease of use. Integration capabilities and a more user-friendly UI.
- Abstract network models. Supports four approaches: NEMO, ALTO, GBP, and NIC.
- A wide range of use cases.

Open vSwitch
Open vSwitch is a production-quality, multilayer virtual switch licensed under Apache 2.0. It is designed to support large-scale network automation while still supporting standard management interfaces and protocols, including NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag. Its features are similar to VMware's DVS and the Cisco Nexus 1000V. It supports a variety of virtualization platforms (Xen, KVM, Proxmox VE, and VirtualBox) and switching chips, as well as a variety of virtualization management platforms (OpenStack, openQRM, OpenNebula, and oVirt).
Complete list of features:
OVS Deployment
This section shows the installation of OVS 2.5.0 on the CentOS 7.2 x86_64 platform.
Prepare the system environment
yum update -y
yum groupinstall -y "Development Tools"
yum install -y wget openssl-devel kernel-devel kernel-debug-devel tcpdump net-tools gcc make python-devel graphviz autoconf automake rpm-build redhat-rpm-config libtool
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
systemctl disable firewalld.service
systemctl disable irqbalance.service
reboot
Create the ovs user and build the RPM package
adduser ovs
su - ovs
mkdir -p ~/rpmbuild/SOURCES
cd ~/rpmbuild/SOURCES/
wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz
tar xvf openvswitch-2.5.0.tar.gz
rpmbuild -ba ~/rpmbuild/SOURCES/openvswitch-2.5.0/rhel/openvswitch.spec
exit
Installing RPM Packages
yum localinstall /home/ovs/rpmbuild/RPMS/x86_64/openvswitch-2.5.0-1.x86_64.rpm
Validating and starting Services
ovs-vsctl -V
systemctl enable openvswitch
systemctl start openvswitch
systemctl status openvswitch
This completes the OvS installation. The experimental topology below is based on the official diagram, with the following differences:

- eno16777728 corresponds to eth0 and carries management traffic;
- eno33554968 corresponds to eth1 and carries data traffic.

Isolating Networks Using VLANs
First, we verify support for 802.1Q VLANs.
In the topology above, eno16777728 is the management network and eno33554968 is the data network.
Create an OvS bridge
ovs-vsctl add-br br0
Add the NIC eno33554968 to bridge br0
ovs-vsctl add-port br0 eno33554968
Create two VLAN internal ports
ovs-vsctl add-port br0 tap100 tag=100 -- set interface tap100 type=internal
ovs-vsctl add-port br0 tap200 tag=200 -- set interface tap200 type=internal
Because this experiment runs in VMware Workstation, there is no physical switch on which to configure a trunk.
* When VM1 on host11 uses tap100 (VLAN 100) and VM2 on host11 uses tap200 (VLAN 200), VM2 cannot ping VM1 even though the two are on the same subnet.
* When VM2 is switched to tap100 (VLAN 100), it can ping VM1 normally.
* When VM1 is switched to br0 and VM21 on host12 also uses br0, i.e. no VLAN is configured, VM21 can ping VM1 normally.
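If no hypervisor is at hand, the same isolation behavior can be reproduced with Linux network namespaces attached to the internal ports. This is a sketch assuming br0, tap100, and tap200 exist as created above; the namespace names ns1/ns2 and the 10.0.1.0/24 addresses are illustrative:

```shell
# Move each internal port into its own namespace to emulate VM1 and VM2.
ip netns add ns1
ip netns add ns2
ip link set tap100 netns ns1
ip link set tap200 netns ns2

# Assign addresses on the same subnet.
ip netns exec ns1 ip addr add 10.0.1.11/24 dev tap100
ip netns exec ns1 ip link set tap100 up
ip netns exec ns2 ip addr add 10.0.1.12/24 dev tap200
ip netns exec ns2 ip link set tap200 up

# Same subnet, different VLAN tags: this ping should fail.
ip netns exec ns1 ping -c 3 10.0.1.12
```

Moving both ports to tag=100 (as in the second observation above) should then make the ping succeed.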
Add trunk settings to the physical interface
Set the trunk to allow VLANs 100 and 200:
ovs-vsctl set port eno33554968 trunks=100,200
Remove the internal port VLAN configuration
ovs-vsctl remove port tap100 tag 100
Remove the trunk settings
ovs-vsctl remove port eno33554968 trunks 100,200
Packet capture analysis. In the current configuration, VM1 on host11 uses VLAN 100 and VM2 uses VLAN 200.
tcpdump -i macvtap0 -w /tmp/vlan2016091600901.pcap
You can see the standard 802.1Q VLAN tag:
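The tag can also be checked offline by replaying the capture with tcpdump's -e flag, which prints link-level headers including the 802.1Q header (the pcap path matches the capture command above):

```shell
# Show only VLAN-tagged frames from the capture, with link-level headers.
tcpdump -e -nn -r /tmp/vlan2016091600901.pcap vlan
```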
This verifies that Open vSwitch supports 802.1Q VLAN functionality.

Monitoring Traffic with sFlow
sFlow is an industry standard for monitoring high-speed switched networks. It provides complete visibility into network usage for performance optimization, metering and billing, and security defense. Official:
This experiment uses sFlowTrend for monitoring. sFlowTrend is a free, graphical network and server monitoring tool that uses the sFlow standard to provide full visibility into user and application bandwidth usage.
Configure sFlow
COLLECTOR_IP=10.0.0.2
COLLECTOR_PORT=6343
AGENT_IP=eno16777728
HEADER_BYTES=128
SAMPLING_N=64
POLLING_SECS=10
ovs-vsctl -- --id=@sflow create sflow agent=${AGENT_IP} target=\"${COLLECTOR_IP}:${COLLECTOR_PORT}\" header=${HEADER_BYTES} sampling=${SAMPLING_N} polling=${POLLING_SECS} -- set bridge br0 sflow=@sflow
View the sFlow configuration
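One way to inspect the record just created (a sketch; the exact output fields depend on the OVS version) is to query the OVSDB directly:

```shell
# List the sFlow records in the OVSDB.
ovs-vsctl list sflow

# Confirm that bridge br0 references the sFlow record.
ovs-vsctl get bridge br0 sflow
```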
In the experiment, sFlowTrend is installed on Windows with the firewall turned off. The OvS interfaces are as follows:
The interfaces monitored by sFlowTrend are as follows:
Bandwidth utilization is as follows:
We capture and analyze packets on NIC eno16777728:
tcpdump -ni eno16777728 -w /tmp/sflow-201609160950.pcap
After the experiment is complete, delete the sFlow settings:
ovs-vsctl remove bridge br0 sflow UUID
This verifies that Open vSwitch supports sFlow monitoring.

Rate Limiting with QoS Policies
Rate limiting is a feature we often use in VM or container environments. It can be configured by modifying the ingress policing rules in the Interface table. Two values are involved:

- ingress_policing_rate: the maximum rate (in kbps) the interface is allowed to send.
- ingress_policing_burst: the maximum burst size (in kb) by which the interface may exceed the maximum rate.
Here we create two ports, tap0 and tap1.
Set the VM1 rate limit to 1 Mbps
ovs-vsctl add-port br0 tap0 -- set interface tap0 type=internal
ovs-vsctl set interface tap0 ingress_policing_rate=1000
ovs-vsctl set interface tap0 ingress_policing_burst=100
Set the VM2 rate limit to 10 Mbps
ovs-vsctl add-port br0 tap1 -- set interface tap1 type=internal
ovs-vsctl set interface tap1 ingress_policing_rate=10000
ovs-vsctl set interface tap1 ingress_policing_burst=1000
View the rate limit configuration
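The policing values set above can be read back per interface, for example:

```shell
# Show the configured ingress policing parameters for each port.
ovs-vsctl list interface tap0 | grep ingress_policing
ovs-vsctl list interface tap1 | grep ingress_policing
```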
Test tool: Netperf, an open-source network performance testing tool. It can be installed on AIX and Linux and used across platforms. Netperf can test TCP and UDP network performance and can simulate client/server long- or short-connection scenarios, so its tests come close to real network conditions.
VM1's IP address is 10.0.1.11 and VM2's is 10.0.1.12. Start netserver on VM1 and VM2:
netserver -D -p 8888
Start netperf on VM21
Test VM1
netperf -H 10.0.1.11 -p 8888 -l 60
Test VM2
netperf -H 10.0.1.12 -p 8888 -l 60
The test results below are consistent with the configured bandwidth.
The throughput with no rate limit is as follows:
The throughput after rate limiting is as follows:
Packet Capture analysis
tcpdump -i macvtap0 -w /tmp/201609161130.pcap
You can see that once the limit is exceeded, a large number of packet drops appear. Such drops can seriously affect applications, so plan rate limits carefully before applying them.
This article introduced the basics of software-defined networking, related open-source projects, and basic OvS feature configuration and verification. The next article describes GRE and VXLAN tunneling protocol configuration for OvS.