Reprinted from: http://blog.csdn.net/yeasy/article/details/41788795
History
OpenStack, the hottest open-source cloud computing project, went through ten major releases between its first version, Austin, in October 2010 and the Juno release in October 2014. The project has settled into a stable cadence, shipping a major release every April and October.
Networking was introduced in the second release, Bexar, initially as a Nova component called Nova-network, which offered only a single underlying network shared by all users (the so-called Flat Network). In the Folsom release of September 2012, the networking functionality was split out as a new project, Quantum. In the Havana release of October 2013 the project was renamed Neutron. The latest release, Juno (October 2014), introduced the distributed virtual routing (DVR) mechanism and deprecated Nova-network.
The key networking updates in each release are as follows:
Bexar version:
- Introduction of Nova-network
Cactus version:
Diablo version:
- HA for the FlatDHCP network
Essex version:
- The network data model begins to be split out of Nova, in preparation for a standalone project
Folsom version:
- Officially split out of Nova as the new standalone project Quantum
- Multi-tenant isolation support
- Plug-in architecture supporting multiple network backends, including Open vSwitch, Cisco, Linux Bridge, Nicira NVP, Ryu, and NEC
- Support for overlapping IP address ranges across tenants
- Support for provider networks
- Basic L3 forwarding, SNAT, and floating IP support
Grizzly version:
- Support for multiple network nodes, improving reliability
- Security groups
- LBaaS support
Havana version:
- The project renamed from Quantum to Neutron
- Support for multiple physical network types (Linux Bridge, Hyper-V, OVS)
- Introduction of FWaaS and VPNaaS
- Introduction of ML2, supporting multiple L2 networking implementations
Icehouse version:
- Several new plug-ins
- New LBaaS drivers
Juno version:
- Initial support for the distributed virtual routing (DVR) mechanism
- Full IPv6 support
- HA support for the L3 agent
- Several new vendor feature plug-ins
As this progression shows, it took four releases after networking became a standalone project (Folsom, Grizzly, Havana, and Icehouse) for a reasonably stable centralized networking model to take shape. The distributed routing model only arrived in the latest release, Juno.
Current status
As one of the foundations of any cloud platform, the network service is where technical strength shows most clearly. Meeting production requirements for feature completeness, performance, and stability all at once is not easy, and this difficulty is precisely what many startups around OpenStack have built their businesses on.
OpenStack has stated clearly that its networking is designed entirely according to the philosophy of software-defined networking (SDN). In fact, even counting from Folsom, when networking formally became a standalone project, that claim is inaccurate, and this ambiguity in design philosophy is one of the main reasons the networking project draws so many complaints today.
SDN has several defining characteristics, the most fundamental of which is handling the relationship between the control plane and the data plane in a loosely coupled way.
OpenStack's design does separate the control plane from the data plane: all state is stored in a database, and the agents listen for messages from neutron-server and perform local operations based on them. Judged by this simple model alone, Neutron does follow the SDN pattern.
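The model above can be sketched as a toy in pure Python. This is not real Neutron code: the class names, the in-memory dict standing in for the SQL database, and the `queue.Queue` standing in for the AMQP message bus are all illustrative assumptions, kept only to show the control-plane/data-plane split.

```python
import queue

class ToyNeutronServer:
    """Control plane: records desired state and publishes messages."""
    def __init__(self):
        self.db = {}              # stand-in for the SQL database
        self.bus = queue.Queue()  # stand-in for the AMQP message bus

    def create_port(self, port_id, ip):
        self.db[port_id] = {"ip": ip}
        self.bus.put({"event": "port_create", "port_id": port_id, "ip": ip})

class ToyL2Agent:
    """Data-plane side: listens for messages, applies local operations."""
    def __init__(self, bus):
        self.bus = bus
        self.local_state = {}  # stand-in for bridge/flow-table state

    def run_once(self):
        msg = self.bus.get_nowait()
        if msg["event"] == "port_create":
            # A real agent would program Open vSwitch or a Linux
            # bridge here; the toy just records the mapping locally.
            self.local_state[msg["port_id"]] = msg["ip"]
        return msg

server = ToyNeutronServer()
agent = ToyL2Agent(server.bus)
server.create_port("port-1", "10.0.0.5")
agent.run_once()
print(agent.local_state)  # {'port-1': '10.0.0.5'}
```

The point of the sketch is only the coupling: the server never touches the agent's local state directly, and the agent acts solely on messages, which is the loosely coupled split the previous paragraph describes.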
But separating the two planes is only the first step of a long journey. How the data plane is designed, and how the control plane is designed and implemented, is what matters most. So how does OpenStack fare on these two points?
The Open vSwitch project, born in 2009, provides a virtual switch implementation solid enough for production: it can seamlessly replace the Linux bridge while supporting a range of additional features, so it looked like an excellent fit. Accordingly, the early releases supported both Linux Bridge and Open vSwitch as backends. Unfortunately, from the very beginning Open vSwitch was used merely as a Linux bridge replacement, and new features were designed within the limits of what Linux Bridge could support. As a result, although Open vSwitch can in principle act as an arbitrary forwarding component, in today's Neutron it is used most of the time as nothing more than a layer-2 switch.
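To make "used as just a layer-2 switch" concrete, the behavior in question is essentially MAC learning plus flood-or-forward. The sketch below is a toy model of that logic only; the class and method names are invented for illustration and are not Open vSwitch or Neutron APIs.

```python
class LearningSwitch:
    """Toy L2 learning switch: the whole forwarding behavior that
    Neutron typically asks of Open vSwitch, even though OVS can
    express arbitrary OpenFlow forwarding logic."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn where the source MAC lives, then forward: a known
        # destination goes out its learned port, an unknown one is
        # flooded to all other ports.
        self.mac_table[src_mac] = in_port
        return self.mac_table.get(dst_mac, "flood")

sw = LearningSwitch()
sw.handle_frame(1, "aa:aa", "bb:bb")        # 'bb:bb' unknown -> flood
out = sw.handle_frame(2, "bb:bb", "aa:aa")  # 'aa:aa' learned on port 1
print(out)  # 1
```

Everything beyond this (routing, NAT, security groups) is layered on top with separate Linux mechanisms rather than expressed in the switch's own flow tables, which is the design choice the text criticizes.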
What about the more important control plane? Unfortunately, this is where OpenStack disappoints. There are no outright technical flaws, but the control plane lacks unified planning; judged against a modern control-plane design, it is little more than a pile of features bolted together. Each feature was solved in isolation with whatever existing technology was at hand, without regard for how other features were implemented, which is why different functional modules clash so often. A distributed routing mechanism should come naturally in SDN, yet the existing implementation resorts to fixed address mappings, ARP proxying, multi-level forwarding tables, tunnels, L2 population, and so on. None of these techniques is wrong in itself, but the complexity and tight coupling of the implementation will make future extension harder. Features such as highly available routing or multi-type service chaining are difficult to achieve on the existing design without adding still more complexity.
For people with a networking research background, it is hard to understand how such a design came about. Seen from the perspective of the Linux system itself, though, it has its own logic. Before the SDN era it was common to build routers and firewalls out of Linux, and some combination of iptables and the Linux bridge could always be configured to meet the needs of a LAN. In the cloud era, however, a single physical machine hosts dozens of virtual machines, or by now even hundreds or thousands of containers, along with multi-tenancy, metering, and demanding security and reliability requirements. Many of these scenarios were simply unimaginable when Linux was used as a plain server or gateway. Even if the basic needs can be patched over with an assortment of techniques, the result is the complexity we see today. It also explains oddities such as the operation of plugging a network interface into a switch being the responsibility of the Nova compute service in OpenStack.
A small aside: had OpenStack not started from NASA's project code base, and had it not been developed in Python ("Life is short, I use Python"), the situation today might well be even less satisfactory.
Of course, besides the pattern above there is another way to use OpenStack: treat Neutron purely as a framework and let the backend plug-ins implement the various network services themselves. This minimizes dependence on the existing code, and it is unquestionably the approach favored by vendors who already have mature networking solutions of their own. But the community can hardly accept such a model: if Neutron is reduced to a shell that merely forwards calls, it loses its meaning as the open-source networking core of a mature cloud platform.
Future
Despite the design problems in the networking project, I still hold OpenStack in high regard. This is not blind affection for open source; rather, since the Linux project it has been rare to see so many industry giants cooperate so closely with the open-source community over so many years on a project that solves real needs.
In fact, when thinking about why the Linux project succeeded, Linus himself matters greatly. It is not just his technical mastery of operating system kernels; an indispensable factor is that Linus is rather "stubborn": once he has decided something, he does not change it lightly. And for a long period the Linux kernel had only one maintainer, Linus. This may offer some inspiration to today's OpenStack community. Whether OpenStack succeeds will not be decided by the size of its sponsorship funds or the headcount of its participants. A small team that truly understands the needs and technology of cloud computing often achieves more than a crowd of participants who, consciously or not, each stand in their own camp.
Extrapolating from the current situation, for a long while the networking project will continue along its present path: on one hand implementing new network functions in the distributed model and resolving their conflicts with existing features, and on the other hand letting vendors support their own networking solutions through plug-ins. A collision between these two paths is only a matter of time. One can only hope that OpenStack networking will eventually choose a more efficient and extensible framework, one that can truly realize the "arbitrary replaceability" that software-defined networking promises.
Appendix: OpenStack release history (from the official wiki)
Series   | Status                                     | Release | Date
Kilo     | Under development                          | -       | due Apr 2015
Juno     | Current stable release, security-supported | 2014.2  | Oct 16, 2014
Icehouse | Security-supported                         | 2014.1  | Apr 17, 2014
Havana   | EOL                                        | 2013.2  | Oct 17, 2013
Grizzly  | EOL                                        | 2013.1  | Apr 4, 2013
Folsom   | EOL                                        | 2012.2  | Sep 27, 2012
Essex    | EOL                                        | 2012.1  | Apr 5, 2012
Diablo   | EOL                                        | 2011.3  | Sep 22, 2011
Cactus   | Deprecated                                 | 2011.2  | Apr 15, 2011
Bexar    | Deprecated                                 | 2011.1  | Feb 3, 2011
Austin   | Deprecated                                 | 2010.1  | Oct 21, 2010
The history, status and future of the OpenStack Network project (Neutron)