We have already discussed the basic concepts of Neutron; today we begin to analyze its architecture.
Neutron Architecture
Like other OpenStack services, Neutron employs a distributed architecture in which multiple components (sub-services) work together to provide network services.
Neutron is composed of the following components:
Neutron Server
Exposes the OpenStack networking API, receives API requests, and invokes the Plugin to process them.
Plugin
Processes requests from Neutron Server, maintains the state of the OpenStack logical network, and invokes the Agent to carry out the requests.
Agent
Handles requests from the Plugin and is responsible for actually implementing the various networking functions on the network provider.
Network provider
A virtual or physical network device that provides network services, such as Linux Bridge, Open vSwitch, or a physical switch that supports Neutron.
Queue
Neutron Server, Plugin, and Agent communicate with and invoke one another through a message queue; a toy sketch of this interaction follows the component list.
Database
Stores OpenStack network state information, including networks, subnets, ports, routers, and more.
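To make this division of labor concrete, here is a toy Python sketch of how the components above interact. This is not Neutron code; all class and method names are illustrative, and the queue simply stands in for RabbitMQ.

```python
import queue

class MessageQueue:
    """Stands in for RabbitMQ: carries messages between components."""
    def __init__(self):
        self._q = queue.Queue()

    def publish(self, msg):
        self._q.put(msg)

    def consume(self):
        return self._q.get()

class Plugin:
    """Maintains logical network state and delegates real work to Agents."""
    def __init__(self, mq, db):
        self.mq, self.db = mq, db

    def create_network(self, net):
        self.db[net["name"]] = net                 # persist logical state
        self.mq.publish(("create_network", net))   # ask Agents to realize it

class Agent:
    """Runs on each node; turns Plugin requests into real device config."""
    def __init__(self, mq):
        self.mq = mq

    def handle_one(self):
        event, net = self.mq.consume()
        print(f"agent: {event} -> configure VLAN {net['vlan_id']} locally")

# Neutron Server would receive the API request and call into the Plugin:
mq, db = MessageQueue(), {}
Plugin(mq, db).create_network({"name": "vlan100", "vlan_id": 100})
Agent(mq).handle_one()
```

The key design point this models is decoupling: the Plugin never talks to the Agents directly, so components can be deployed on different nodes and scaled independently.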
The Neutron architecture is very flexible and has many layers, which are designed to:
Support a wide variety of network technologies, both existing and yet to appear.
Support distributed deployment and provide sufficient scalability.
As usual, you cannot have it both ways: these advantages come at the cost of making Neutron more complex and harder to understand. We will discuss the various components of Neutron in detail later, but before that, it is very helpful to walk through an example of how these components cooperate and what role each of them plays.
Take creating a VLAN 100 network as an example. Assuming the network provider is Linux Bridge, the process is as follows:
- Neutron Server receives the request to create the network and notifies the registered Linux Bridge Plugin via the message queue (RabbitMQ).
- The Plugin saves the information about the network to be created (such as its name and VLAN ID) to the database and notifies the Agents running on the nodes via the message queue.
- The Agent receives the message and creates a VLAN device (such as eth2.100) on the node's physical NIC (such as eth2), then creates a bridge (such as brqXXX) to bridge the VLAN device.
For how Linux Bridge implements VLANs, refer to the relevant network virtualization chapters in the prerequisite-knowledge part of this tutorial. A minimal sketch of the commands behind the last step appears below.
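Under the hood, the Agent's work in that last step boils down to a few `ip` commands. The following Python sketch uses the illustrative device names from the example (eth2, eth2.100, brqxxx); it is not the agent's actual code, and it requires root privileges to run.

```python
import subprocess

def run(cmd):
    """Run a command, echoing it first so each step is visible."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the VLAN sub-interface eth2.100 on the physical NIC eth2.
run(["ip", "link", "add", "link", "eth2", "name", "eth2.100",
     "type", "vlan", "id", "100"])

# 2. Create the bridge brqxxx and attach the VLAN device to it.
run(["ip", "link", "add", "brqxxx", "type", "bridge"])
run(["ip", "link", "set", "eth2.100", "master", "brqxxx"])

# 3. Bring both devices up.
run(["ip", "link", "set", "eth2.100", "up"])
run(["ip", "link", "set", "brqxxx", "up"])
```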
Here are a few notes:
The Plugin answers the question of what the network should be configured to look like, while the Agent takes care of how to actually configure it.
Plugin, Agent, and network provider come as a matched set. For example, if the network provider is Linux Bridge, you must use the Linux Bridge Plugin and Agent; if the network provider is switched to OVS or a physical switch, the Plugin and Agent must be replaced accordingly.
One of the Plugin's main responsibilities is to maintain the state of the Neutron network in the database, which created a problem: the plugin for every network provider had to implement a very similar set of database access code. To solve this problem, Neutron introduced the ML2 (Modular Layer 2) plugin in the Havana release, which abstracts and encapsulates the plugin functionality. With the ML2 plugin, a network provider no longer needs to develop its own plugin; it only needs to develop the corresponding ML2 driver, which greatly reduces both the workload and the difficulty. ML2 will be discussed in detail later.
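To show what "developing a driver instead of a plugin" looks like, here is a schematic ML2 mechanism driver skeleton. Treat it as a sketch: the import path below is from the neutron-lib library and has varied across Neutron releases, and the driver and its backend are hypothetical.

```python
from neutron_lib.plugins.ml2 import api

class ExampleMechanismDriver(api.MechanismDriver):
    """A hypothetical driver for some backend; ML2 owns the shared DB logic."""

    def initialize(self):
        # One-time setup, e.g. reading driver-specific configuration.
        pass

    def create_network_precommit(self, context):
        # Runs inside ML2's database transaction: validate the request here
        # and raise an exception to abort the transaction if it is invalid.
        pass

    def create_network_postcommit(self, context):
        # Runs after the network is committed to the database:
        # push the configuration out to the backend here.
        net = context.current  # dict with id, name, provider attributes, ...
        print("realizing network %s on the backend" % net["id"])
```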
Plugins fall into two categories: core plugins and service plugins. A core plugin maintains Neutron's network, subnet, and port resources; the agents corresponding to core plugins include Linux Bridge, OVS, and so on. Service plugins provide routing, firewall, load balancing, and other services, each with its corresponding agents. Both will be discussed in detail separately later.
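To make the split concrete, here is a hedged openstacksdk sketch: networks and subnets are core plugin resources, while a router is handled by the L3 service plugin. The cloud name "mycloud" is an assumption (it must exist in your clouds.yaml), and the resource names are illustrative.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes clouds.yaml entry

# Core plugin resources: a network and a subnet on it.
net = conn.network.create_network(name="net_vlan100")
conn.network.create_subnet(network_id=net.id, ip_version=4,
                           cidr="172.16.100.0/24", name="subnet_vlan100")

# Service plugin resource: a router, provided by the L3 service plugin.
conn.network.create_router(name="router1")
```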
The above is Neutron's logical architecture. In the next section we will discuss Neutron's physical deployment options.