OpenStack Kilo, the 11th release of the open source OpenStack project, was officially released in April 2015. Now is a good time to look at what has changed in this version of Neutron and what key new features have been introduced.
1. Scaling the Neutron Development Community
To better scale the Neutron development community, we did two main things in the Kilo development cycle: decomposing the core plugins and splitting out the advanced services. These changes do not directly affect OpenStack users, but we expect these efforts to reduce the amount of code in the main tree and increase the speed at which new features are developed, ultimately accelerating innovation. Let's take a look at each of these efforts.
1.1 Neutron Core Plugin Decomposition
From a design perspective, Neutron uses a pluggable architecture that allows customizable back ends for the Neutron APIs. The Neutron core plugin is a central component of the entire Neutron project, acting as a glue layer between the logical API layer and the actual implementation layer. As the Neutron project evolved, more and more plugins were introduced, coming from various open source projects and communities (such as Open vSwitch and OpenDaylight) and from many vendors (such as Cisco, Nuage, Midokura, and so on). At the beginning of the Kilo development cycle, Neutron had more than ten plugins and drivers, including core plugins, ML2 mechanism drivers, L3 service plugins, and L4-L7 plugins such as FWaaS, LBaaS, and VPNaaS, with most of this plugin code living directly in the Neutron project repository. As a result, the amount of Neutron code that needed to be reviewed, including all of this plugin and driver code, grew to the point where project development could no longer scale. It is unrealistic to expect code reviewers who are unfamiliar with these drivers and plugins, and who lack the hardware or software environments needed to validate them, to review this code. At the same time, vendors inevitably become frustrated when the code they submit cannot be merged in a timely manner.
The first step to improve this situation was to decouple the Neutron core plugin and ML2 driver code from the Neutron repository. Specifically, only a thin shim/proxy layer is kept in the Neutron tree for each of the plugins and drivers mentioned above, while all of their back-end implementation logic moves into separate repositories, with StackForge being the natural home for the new ones. The benefit is obvious: Neutron code reviewers can focus on the Neutron core code, while vendors and plugin maintainers can iterate on their own release cycles. The community has encouraged plugin authors to start this decomposition work immediately, but it does not force every plugin to finish before the Kilo release, mainly to leave each vendor enough time.
For more information about this process, read the documentation that tracks the decomposition progress of all the plugins.
1.2 Advanced Services Split
The work above covers only the Neutron core plugins and ML2 drivers; a parallel effort does the same thing for the L4-L7 advanced services (FWaaS, LBaaS, and VPNaaS). Like the core plugins, these advanced services kept their code in the main Neutron repository, with the similar consequence of a loss of focus for Neutron core code reviewers. Starting with Kilo, the code for these services is split into separate trees, so Neutron now has four different repositories: one for the base L2/L3 networking code, and one each for FWaaS, LBaaS, and VPNaaS. Because the number of advanced service plugins is still relatively small, vendor and plugin code currently remains inside each service's repository.
It is worth noting that this does not affect OpenStack users. Even though these services are now separated, their APIs and CLIs do not change, and they use the same Neutron client as before. That said, since each advanced service could potentially leave Neutron and become a standalone component, the split being done now does lay the groundwork for deeper changes: these services may provide their own REST endpoints, configuration files, and CLI/API clients in the future. This would also allow their development teams to focus on one or a few advanced services and potentially make bigger changes.
2. ML2/Open vSwitch Port Security
Security groups are one of the most commonly used Neutron features. They allow tenants to specify the type and direction (ingress/egress) of network traffic that is allowed to pass through a Neutron port, effectively creating a firewall in front of each virtual machine.
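For example, with the Kilo-era neutron CLI, allowing inbound SSH to instances in a group might look like the following minimal sketch (the group name web-sg is hypothetical):

```
# Create a security group and allow inbound SSH (TCP/22) from anywhere;
# "web-sg" is a hypothetical name used only for illustration.
neutron security-group-create web-sg --description "example group"
neutron security-group-rule-create web-sg \
    --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --remote-ip-prefix 0.0.0.0/0
```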
For security reasons, the Neutron security group implementation always automatically creates rules that block IP spoofing attacks, preventing a virtual machine from receiving or sending packets whose MAC or IP address does not match the VM's Neutron port. While most users consider security groups and the anti-IP-spoofing rules useful and necessary, some need an extra switch that lets them skip creating such rules on specific ports. The main use case for this requirement is NFV, where running network services inside virtual machines is a common example. Consider a routing application deployed in an OpenStack virtual machine: it needs to receive packets that are not addressed to it, and it needs to send packets that did not originate from any of its own ports. With security groups applied, this virtual machine can do neither.
Let's take a look at the topology diagram for this example:
Host 1 is a virtual machine with the IPv4 address 192.168.0.1, and it needs to reach Host 2, whose IP address is 172.16.0.1. The two hosts are connected through two virtual machines (R1 and R2) running a routing application, configured as the default gateways for Host 1 and Host 2 respectively. The IP addresses of the router ports are shown in the diagram. Let's look at how Host 1 sends packets to Host 2:
- Host 1 generates an IPv4 packet with source IP 192.168.0.1 and destination IP 172.16.0.1. Because the two hosts are on different subnets, R1 answers Host 1's ARP request with its own MAC address, so the destination MAC address of the frame generated by Host 1 is 3b-2d-b9-9b-34-40.
- R1 receives the packet. Note that the packet's destination IP is 172.16.0.1, not R1 itself. With a Neutron security group applied to R1's port, the anti-IP-spoofing rules apply by default and the packet is dropped, so R1 cannot route it any further.
Before Kilo, your only choice was to enable or disable security groups for the entire cloud. Starting with Kilo, you can use the new port-security-enabled attribute to enable or disable security groups on a per-port basis. This new attribute is currently supported by the Open vSwitch agent with the IptablesFirewallDriver.
Back to the topology above: you can now disable port security on the ports of R1 and R2 while keeping security groups on the ports of the host VMs. With this configuration, routing works normally.
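A minimal sketch of what that might look like, assuming $R1_PORT_ID holds the UUID of R1's Neutron port and that your neutron client passes the port-security-enabled attribute through:

```
# Security groups must be cleared from the port before port security
# can be disabled, since the two features are tied together.
neutron port-update $R1_PORT_ID --no-security-groups
neutron port-update $R1_PORT_ID --port-security-enabled=False
```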
More information, as well as configuration examples, can be found in the blog post by Red Hat's Terry Wilson.
3. IPv6 Enhancements
With the new features introduced in the Juno release, including the ability to assign addresses to tenant networks using SLAAC and DHCPv6, and support for Router Advertisement (RA) messages generated by external routers on physical networks, IPv6 has become a key focus area for Neutron. IPv6 support matures further in the Kilo release, which brings several additional enhancements, including:
- Multiple IPv6 prefixes assigned to a network
IPv6 allows multiple IP prefixes to be assigned to a network interface. This is a common configuration: typically every NIC is assigned a link-local address (LLA) to handle traffic on the local link, and one or more NICs are assigned global unicast addresses (GUA) to handle end-to-end traffic. Starting with Kilo, users can assign multiple IPv6 subnets to a network. When the subnets use SLAAC or stateless DHCPv6, a Neutron port is assigned an IPv6 address from each subnet, as in the sketch below.
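A minimal sketch with the Kilo-era neutron CLI (the network name and documentation prefixes are hypothetical):

```
# Create a network with two SLAAC IPv6 subnets.
neutron net-create ipv6-net
neutron subnet-create ipv6-net 2001:db8:1::/64 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac
neutron subnet-create ipv6-net 2001:db8:2::/64 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac
# A port on this network should now receive an address from each subnet.
neutron port-create ipv6-net
```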
- Better IPv6 routing support
In Kilo, OpenStack IPv6 has no network address translation (NAT) and no floating IPs. The assumption is that virtual machines are assigned globally routable addresses, so they can communicate using pure L3 routing. The neutron-l3-agent component is responsible for routing inside Neutron by creating and maintaining virtual routers. Two features are required to support IPv6 in virtual routers:
- Routing between subnets: This refers to routing packets between different IPv6 subnets of the same tenant. Because this traffic is routed within the OpenStack cloud and never leaves it for any external system, it is often referred to as "east-west" traffic. This feature was already supported in Juno, and there is no big change in Kilo.
- External routing: This refers to routing between an IPv6 tenant subnet and an IPv6 external subnet. Because this traffic leaves the Neutron network for the outside world, it is often referred to as "north-south" traffic. Since there is no IPv6 NAT support, the virtual router simply routes between the internal and external subnets. This feature was supported in Juno, but Kilo mainly streamlines the workflow for operators creating an external network: they no longer need to create a Neutron subnet on it. The Neutron virtual router can learn the default gateway automatically via SLAAC (if RA is enabled on the upstream router), or the operator can specify it manually with the new ipv6_gateway option in the L3 agent configuration file, as sketched below.
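A minimal sketch of both approaches (the network name and gateway address are hypothetical):

```
# Create an IPv6 external network; starting with Kilo, no Neutron
# subnet is required on it for IPv6 routing.
neutron net-create ext-net --router:external

# Alternatively, point the L3 agent at the upstream router's
# link-local address in /etc/neutron/l3_agent.ini:
#   [DEFAULT]
#   ipv6_gateway = fe80::1    # hypothetical upstream-router LLA
```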
Neutron lets users specify additional DHCP options on subnets, primarily to pass extra information such as DNS servers or the MTU to Neutron ports. Initially, these options could only be applied to a port's DHCPv4 or DHCPv6 address, which caused problems when a port was assigned both an IPv4 and an IPv6 address.
Starting with Kilo, additional DHCP options can be set separately for DHCPv4 and DHCPv6: the Neutron create-port and update-port APIs gain a new ip_version parameter that specifies which IP version (4 or 6) a given DHCP option applies to.
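With the Kilo-era neutron CLI this might look like the following sketch ($PORT_ID is a placeholder):

```
# Set a DNS server via DHCP for the port's IPv4 stack only; the
# ip_version field restricts the option to one address family.
neutron port-update $PORT_ID \
    --extra-dhcp-opt opt_name=dns-server,opt_value=8.8.8.8,ip_version=4
```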
4. LBaaS v2 API
LBaaS is one of the Neutron advanced services. It allows tenants to create load balancers on demand, with back ends implemented by open source or closed source service plugins based on different load-balancing technologies. The open source solution in Red Hat Enterprise Linux OpenStack Platform is based on HAProxy.
The LBaaS v1.0 API provides basic load-balancing capabilities through a simple and straightforward workflow (a CLI sketch follows the list):
- Create a pool
- Create one or more members for a pool
- Create a health monitor
- Create a virtual IP (VIP) associated with the pool
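Mapped to the Kilo-era neutron CLI, the v1 workflow might look like this sketch (names and addresses are hypothetical; $SUBNET_ID and $MONITOR_ID are placeholders):

```
# 1. Create a pool of back-end servers.
neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id $SUBNET_ID
# 2. Add one or more members to the pool.
neutron lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
# 3. Create a health monitor and associate it with the pool.
neutron lb-healthmonitor-create --type HTTP --delay 5 \
    --max-retries 3 --timeout 3
neutron lb-healthmonitor-associate $MONITOR_ID web-pool
# 4. Create a VIP associated with the pool.
neutron lb-vip-create --name web-vip --protocol HTTP \
    --protocol-port 80 --subnet-id $SUBNET_ID web-pool
```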
This implementation was helpful for the initial adoption and deployment of LBaaS, but it is not sufficient for an enterprise-grade, feature-rich load balancer. LBaaS v2.0 provides a more robust load-balancing solution, including support for SSL/TLS endpoints. Achieving this required a redesign of the LBaaS architecture; for details, see the HAProxy reference plugin.
5. Distributed Virtual Router (DVR) VLAN Support
DVR, first introduced in the Juno release, allows Neutron virtual routers to be deployed across compute nodes, so that each compute node provides routing services for the VMs running on it. This improves the performance and scalability of virtual routers and is seen as an important milestone toward a more efficient L3 network.
As a reminder, in the default OpenStack Neutron architecture, a dedicated cluster of network nodes handles most of the cloud's network services, including DHCP, L3 routing, and NAT. This means that traffic leaving the compute nodes must pass through the network nodes to be routed. With DVR, a compute node itself can handle routing between subnets (east-west traffic) for its local virtual machines, as well as NAT for floating IPs. DVR still relies on the dedicated network nodes for the default SNAT service that lets virtual machines reach external networks.
Before Kilo, DVR only supported tunnel networks (GRE and VXLAN) for tenant network isolation, which kept users of VLAN tenant networks from adopting it. Kilo adds VLAN support, so DVR now works with both tunnel and VLAN tenant networks (see the sketch below).
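As a reminder of how DVR is enabled (unchanged by the VLAN work), here is a hedged sketch of the key settings using the crudini tool; the file paths are typical for a Kilo install but may vary by distribution:

```
# neutron.conf on the controller: make new routers distributed.
crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed True

# l3_agent.ini: "dvr" mode on compute nodes, "dvr_snat" on the
# network nodes that keep providing centralized SNAT.
crudini --set /etc/neutron/l3_agent.ini DEFAULT agent_mode dvr
```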
For more information about DVR, I strongly recommend reading the three excellent blog posts by Red Hat's Assaf Muller: Overview and East/West Routing, SNAT, and Floating IPs.
6. Viewing the State of Highly Available Routers
An important feature introduced in the Juno release is the L3 HA solution, which allows neutron-l3-agent to be deployed in active/standby HA mode across multiple network nodes. This solution is based on keepalived, which internally uses the VRRP protocol to form groups of highly available virtual routers. By design, only one active router in each group is responsible for forwarding, while one or more standby routers wait to take over as the new active router if it fails. The active and standby routers are scheduled randomly across the network nodes, so the load is spread among those nodes.
One limitation of the Juno implementation is that Neutron could not report the state of an HA router, which made troubleshooting and maintenance difficult. In Kilo, operators can run the neutron l3-agent-list-hosting-router <router_id> command to see which network node hosts the active router, as in the sketch below.
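A minimal sketch, assuming a router named demo-router:

```
# Create an HA router (a Juno feature) and check where it is hosted.
neutron router-create demo-router --ha True

# New in Kilo: the listing should include an "ha_state" column that
# shows which l3-agent instance is active and which are standby.
neutron l3-agent-list-hosting-router demo-router
```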
7. Ability to Choose a Specific Floating IP
A floating IP is an IPv4 address dynamically assigned to a virtual machine so that it can be reached from outside the cloud, such as from the public Internet. Originally, when a floating IP was assigned to a virtual machine, the address was picked at random from the pool, so there was no guarantee that a VM would get the same address across repeated allocations. Starting with Kilo, users can request a specific floating IP address for a virtual machine through the new floating_ip_address API parameter, as in the sketch below.
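A minimal sketch (ext-net, the address, and the placeholders are hypothetical):

```
# Request a specific address from ext-net's floating IP pool
# instead of a random one, then attach it to a port.
neutron floatingip-create ext-net --floating-ip-address 203.0.113.10
neutron floatingip-associate $FLOATINGIP_ID $PORT_ID
```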
8. MTU Advertisement
This new feature allows operators to configure the expected MTU of a network and advertise it to the guest operating system. It helps avoid MTU mismatches across networks, which can have unpredictable consequences such as connectivity problems, packet loss, and degraded network performance. A configuration sketch follows.
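A hedged sketch of the relevant Kilo options (option names per the Kilo configuration reference; paths and values may vary by deployment):

```
# Advertise each network's MTU to guests (via DHCP and RA).
crudini --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True

# Tell ML2 the underlying path MTU so per-network MTUs can be
# derived (e.g. 1500 minus any tunnel encapsulation overhead).
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500
```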
9. Improved Performance and Stability
The OpenStack networking community is continuously committed to making Neutron more stable and its code more mature. Among the many performance and stability improvements in Kilo, I would like to highlight two: the ML2/OVS plugin now talks to Open vSwitch directly through OVSDB instead of invoking the ovs-vsctl command, and the l3-agent code has been extensively refactored.
Although neither of these improvements introduces user-visible functionality, they represent the community's ongoing effort to optimize the Neutron code, especially the core L2 and L3 components that are critical to all workloads.
10. Looking Ahead to Liberty
Liberty, the next OpenStack release, is scheduled for October 15, 2015. We are busy preparing for the Vancouver design summit, where new features and improvement proposals will be discussed. You can check the Neutron specifications for Liberty page to track which proposals are accepted into Neutron and which are implemented in Liberty.
Original: What's Coming in OpenStack Networking for the Kilo Release, posted on May 1, 2015, by Nir Yechiel