Currently, Neutron has a QoS proposal (https://wiki.openstack.org/wiki/Neutron/QoS#Documents), but only the Cisco and NVP plug-ins implement QoS features; the other plugins have not implemented them yet. Therefore, if you want to do network QoS in Neutron, there will be additional cost.
First, implement network QoS based on OVS
The design and interfaces in the proposal above are enough to implement the QoS features on your own:
1. Create a qos-rules table that stores the QoS rules, with qos_id as the primary key.
2. Create a qos-port-binding table that records the binding relationship between port_id and qos_id.
3. When a virtual machine is created, Nova calls the API exposed by Quantum to write the binding relationship to the database.
4. The OVS agent obtains the QoS rules from the OVS plugin through a remote call (with port_id as the parameter).
5. The OVS agent applies the rules on the interface.
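The data flow in the steps above can be sketched in a few lines. This is a minimal in-memory illustration; the table, function, and ID names are assumptions for the sketch, not the actual Neutron schema or RPC interface:

```python
# Minimal in-memory sketch of steps 1-4 above.
# NOTE: names are illustrative, not the real Neutron schema/RPC.

# Step 1: the qos-rules "table", keyed by qos_id (primary key).
qos_rules = {
    'qos-1': {'rate': 10240, 'burst': 1024},   # rate in kbps, burst in kb
}

# Step 2: the qos-port-binding "table": port_id -> qos_id.
qos_port_binding = {}

def bind_port(port_id, qos_id):
    """Step 3: written when Nova calls the API exposed by Quantum."""
    qos_port_binding[port_id] = qos_id

def get_port_qos(port_id):
    """Step 4: what the agent's remote call (parameter port_id) returns."""
    qos_id = qos_port_binding.get(port_id)
    return qos_rules.get(qos_id)

bind_port('port-1', 'qos-1')
print(get_port_qos('port-1'))  # {'rate': 10240, 'burst': 1024}
```

Step 5 is then just the agent translating the returned dict into ovs-vsctl settings on the port's interface.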
For example, the QoS of OVS can be set with the following code:
def set_interface_qos(self, interface, rate, burst):
    ingress_policing_rate = "ingress_policing_rate=%s" % rate
    ingress_policing_burst = "ingress_policing_burst=%s" % burst
    args = ["set", "interface", interface,
            ingress_policing_rate, ingress_policing_burst]
    self.run_vsctl(args)

def clear_interface_qos(self, interface):
    ingress_policing_rate = "ingress_policing_rate=0"
    ingress_policing_burst = "ingress_policing_burst=0"
    args = ["set", "interface", interface,
            ingress_policing_rate, ingress_policing_burst]
    self.run_vsctl(args)
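The run_vsctl helper used above essentially shells out to ovs-vsctl. A plausible sketch follows (the real agent runs the command through a root wrapper, so treat this as illustrative; the interface name tap0 is an assumption):

```python
import subprocess

def build_vsctl_cmd(args):
    """Build the ovs-vsctl command line the agent would run."""
    return ['ovs-vsctl', '--timeout=2'] + args

def run_vsctl(args):
    """Execute ovs-vsctl; requires Open vSwitch and root privileges."""
    subprocess.check_call(build_vsctl_cmd(args))

# The rate-limit call above produces a command line like this:
cmd = build_vsctl_cmd(['set', 'interface', 'tap0',
                       'ingress_policing_rate=10240',
                       'ingress_policing_burst=1024'])
print(' '.join(cmd))
```

Note that ingress_policing_rate is in kbps and ingress_policing_burst in kb, and that setting both to 0 removes the limit, which is exactly what clear_interface_qos does.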
See the following articles for specific implementations:
http://blog.csdn.net/spch2008/article/details/9279445
http://blog.csdn.net/spch2008/article/details/9281947
http://blog.csdn.net/spch2008/article/details/9281779
http://blog.csdn.net/spch2008/article/details/9283561
http://blog.csdn.net/spch2008/article/details/9283627
http://blog.csdn.net/spch2008/article/details/9283927
http://blog.csdn.net/spch2008/article/details/9287311
Second, use Instance Resource Quota
Nova's Instance Resource Quota feature can set CPU, disk I/O, and bandwidth consumption limits for instances. Using cgroups, libvirt can set the per-instance CPU time consumption percentage, as well as the instance's disk read/write IOPS and bytes-per-second limits. Libvirt also supports limiting an instance's inbound/outbound bandwidth. (https://wiki.openstack.org/wiki/InstanceResourceQuota)
Bandwidth parameters: vif_inbound_average, vif_inbound_peak, vif_inbound_burst, vif_outbound_average, vif_outbound_peak, vif_outbound_burst
Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these children out results in no QoS being applied in that traffic direction. So if you want to shape only the network's incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average, which specifies the average bit rate on the interface being shaped. There are also optional attributes: peak, which specifies the maximum rate at which the bridge can send data, and burst, the amount of bytes that can be burst at peak speed. Accepted attribute values are integer numbers; the units for the average and peak attributes are kilobytes per second, and for burst just kilobytes. The rate is shared equally within domains connected to the network.
Configure a bandwidth limit for instance network traffic:
nova-manage flavor set_key --name m1.small --key quota:vif_inbound_average --value 10240
nova-manage flavor set_key --name m1.small --key quota:vif_outbound_average --value 10240

or using python-novaclient with admin credentials:

nova flavor-key m1.small set quota:vif_inbound_average=10240
nova flavor-key m1.small set quota:vif_outbound_average=10240
The network QoS here is implemented directly with the parameters provided by libvirt (http://www.libvirt.org/formatnetwork.html):
...
<forward mode='nat' dev='eth0'/>
<bandwidth>
  <inbound average='1000' peak='2000' burst='5120'/>
  <outbound average='1000' peak='2000' burst='5120'/>
</bandwidth>
...
(the numeric values here are illustrative)
The <bandwidth> element allows setting quality of service for a particular network (since 0.9.4). Setting bandwidth for a network is supported only for networks with a <forward> mode of route, nat, or no mode at all (i.e. an "isolated" network). Setting bandwidth is not supported for forward modes of bridge, passthrough, private, or hostdev. Attempts to do this will lead to a failure to define the network or to create a transient network.

The <bandwidth> element can only be a subelement of a domain's <interface>, a subelement of a <network>, or a subelement of a <portgroup> in a <network>.

As a subelement of a domain's <interface>, the bandwidth only applies to that one interface of the domain. As a subelement of a <network>, the bandwidth is a total aggregate bandwidth to/from all guest interfaces attached to that network, not to each guest interface individually. If a domain's <interface> has <bandwidth> element values higher than the aggregate for the entire network, then the aggregate bandwidth for the <network> takes precedence. This is because the two choke points are independent of each other: the domain's <interface> bandwidth control is applied on the interface's tap device, while the <network> bandwidth control is applied on the bridge device created for that network.

As a subelement of a <portgroup> in a <network>, if a domain's <interface> has a portgroup attribute in its <source> element and if the <interface> itself has no <bandwidth> element, then the <bandwidth> element of the portgroup will be applied individually to each guest interface defined to be a member of that portgroup. Any <bandwidth> element in the domain's <interface> definition will override the setting in the portgroup (since 1.0.1).

Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these children out results in no QoS being applied for that traffic direction. So, if you want to shape only incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average (or floor, as described below). Accepted values for each attribute are integer numbers.

- average: Specifies the desired average bit rate for the interface being shaped (in kilobytes/second).
- peak: Optional attribute which specifies the maximum rate at which the bridge can send data (in kilobytes/second). Note a limitation of the implementation: this attribute in the outbound element is ignored (as Linux ingress filters don't know it yet).
- burst: Optional attribute which specifies the amount of kilobytes that can be transmitted in a single burst at peak speed.
- floor: Optional attribute available only for the inbound element. This attribute guarantees minimal throughput for the shaped interface. It requires, however, that all traffic goes through one point where QoS decisions can take place, which is why this attribute works only for virtual networks for now (that is, <interface type='network'/> with a forward type of route, nat, or no forward at all). Moreover, the virtual network the interface is connected to is required to have at least inbound QoS set (average at least). When using the floor attribute, users don't need to specify average. However, the peak and burst attributes still require average. Currently, the Linux kernel doesn't allow ingress qdiscs to have any classes, therefore floor can be applied only on inbound and not on outbound.

Attributes average, peak, and burst have been available since 0.9.4, while the floor attribute has been available since 1.0.1.
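As a quick illustration of these attributes, a <bandwidth> element can be generated programmatically. This is a hedged sketch using Python's standard ElementTree; the numeric values are arbitrary:

```python
import xml.etree.ElementTree as ET

def bandwidth_xml(inbound=None, outbound=None):
    """Build a libvirt <bandwidth> element from attribute dicts.

    inbound/outbound are dicts of the attributes documented above
    (average/peak/burst, plus floor for inbound); average and peak
    are in kilobytes/second, burst in kilobytes.
    """
    bw = ET.Element('bandwidth')
    if inbound:
        ET.SubElement(bw, 'inbound', {k: str(v) for k, v in inbound.items()})
    if outbound:
        ET.SubElement(bw, 'outbound', {k: str(v) for k, v in outbound.items()})
    return ET.tostring(bw, encoding='unicode')

print(bandwidth_xml(inbound={'average': 1000, 'peak': 5000, 'burst': 5120},
                    outbound={'average': 128}))
```

Omitting one of the two dicts leaves that direction unshaped, matching the "leaving either child out" behavior described above.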
Since network QoS in libvirt is actually based on tc, it is easy to inspect the final tc configuration with the command tc -s -d qdisc.
This method requires that the virtual machine be libvirt-based, and that the operating systems on the virtual machine and the network-related servers support Linux Advanced Routing & Traffic Control.
Third, network QoS based on TC
This method is actually a combination of the two methods above: still open up a QoS-provisioning interface in Neutron, but implement settings such as OVS's ingress_policing_rate with tc instead.
For the use of tc, see http://lartc.org/lartc.html, which is very detailed.
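Under this approach, the agent would translate a port's QoS rule into tc commands rather than OVS ingress_policing settings. A rough sketch of the command construction follows; the interface name and rates are illustrative, and a real deployment needs root privileges and careful qdisc parameter tuning:

```python
def tc_ingress_commands(dev, rate_kbps, burst_kb):
    """Build tc commands that police incoming traffic on `dev`,
    roughly equivalent to OVS's ingress_policing_* settings."""
    return [
        # attach the special ingress qdisc to the device
        ['tc', 'qdisc', 'add', 'dev', dev, 'ingress'],
        # police all IP traffic to rate_kbps, dropping the excess
        ['tc', 'filter', 'add', 'dev', dev, 'parent', 'ffff:',
         'protocol', 'ip', 'u32', 'match', 'u32', '0', '0',
         'police', 'rate', '%dkbit' % rate_kbps,
         'burst', '%dk' % burst_kb, 'drop'],
    ]

for cmd in tc_ingress_commands('tap0', 10240, 1024):
    print(' '.join(cmd))
```

The agent would run these with the same kind of subprocess wrapper it uses for ovs-vsctl; shaping outbound traffic would instead use a classful qdisc such as HTB on the egress side.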
This method requires that the operating systems on the virtual machine and the network-related servers support Linux Advanced Routing & Traffic Control.