The previous generation of Layer 2 extension technologies, typically built on IS-IS routing at the bottom of the physical network, extended the data center's Layer 2 network using standards such as TRILL and SPB and Cisco's proprietary OTV and FabricPath. Some leading network virtualization technologies then went further: protocols such as VXLAN and NVGRE break through the scale limits of VLANs and MAC address tables, extending the data center's Layer 2 network to a much larger scale. With VMware NSX, it goes a step further: we can provide the network with the same virtualization capabilities that have already been implemented for compute and storage. Just as server virtualization can programmatically create, delete, snapshot, and restore software-based virtual machines, NSX network virtualization implements the same functions for software-based virtual networks. This is a radically new architecture that not only lets data center operators greatly improve the agility, maintainability, and scalability of the system, but also greatly simplifies the operational model of the underlying physical network. NSX can be deployed on any IP network, including all traditional network designs and next-generation fabrics from any vendor, without refactoring the underlying network. It is easy to see that the core idea of VMware NSX is to carry the server virtualization technology that VMware has refined over many years into the network architecture (as shown).
With server virtualization, the software abstraction layer (the server virtualization hypervisor) reproduces the familiar properties of a physical x86 server—CPU, RAM, disk, and NIC—in software. These can be assembled programmatically in any combination to build a unique virtual machine in just a few seconds.
With network virtualization, a function analogous to the hypervisor reproduces the full set of Layer 2 through Layer 7 network services in software—switching, routing, access control, firewalling, QoS, and load balancing. So, just as with server virtualization, we can combine these services programmatically in any combination, creating a unique, isolated virtual network in just a few seconds.
Beyond this, NSX-based network virtualization offers further benefits. Just as virtual machines are independent of the underlying x86 platform and allow physical servers to be treated as a pool of compute capacity, virtual networks are independent of the underlying network hardware and allow the physical network to be treated as a pool of transport capacity that can be consumed and repurposed on demand. Unlike traditional architectures, NSX can programmatically provision, change, store, delete, and restore virtual networks without reconfiguring the underlying physical hardware or topology. This revolutionary approach to networking lets businesses match the capabilities and benefits of the server virtualization solutions they already know.
Because the NSX solution fundamentally changes the underlying network architecture, a data center built on the NSX network platform ultimately achieves the following effect: regardless of the size of the system, the number of physical servers and virtual machines, the complexity of the underlying network, or how many sites a multi-site data center spans, to IT administrators and users the thousands of virtual machines running on top of that complex, multi-site network appear to be running on a single physical server.
Since NSX does not care about the underlying physical network and uses the logical networks it creates itself, must it be deployed in a VMware virtualized environment? The answer is no. NSX can be deployed in many virtualized environments, such as VMware vSphere, KVM, or Xen, or within an OpenStack environment. Among mainstream virtualization platforms, only Microsoft Hyper-V does not currently support NSX.
NSX network virtualization comes as two different products: NSX for vSphere environments (NSX-V) and NSX for multi-hypervisor environments (NSX-MH). Their latest versions are 6.2.0 and 4.2.4 respectively, something worth understanding before deployment. Of the two, NSX-MH is closer to the original Nicira NVP platform, running mainly on KVM and Xen and implementing network virtualization on top of OVS.
However, the basic logical architecture is the same whether you use NSX-V or NSX-MH; the differences are limited to certain data-plane components (for example, in NSX-V the virtual switch is the vSphere Distributed Switch, while in NSX-MH it is OVS). The basic NSX network virtualization architecture sits on top of the underlying physical network and divides the logical network into a data plane, a control plane, and a management plane. In the data plane there are distributed services (including logical switches, logical routers, and logical firewalls) and the NSX gateway services. The primary component of the control plane is the NSX Controller, and the primary component of the management plane is NSX Manager. Please see the figure below.
With these components, NSX can provide the following services:
- Switching: Extends a Layer 2 switched network anywhere in the data center, regardless of the underlying physical network.
- Routing: Routing between IP subnets can be done within the logical network, without traffic having to exit to a physical router or Layer 3 switch. Routing is performed in the hypervisor hosting the virtual machine, consumes very little CPU, and provides optimal paths from the routing tables within the virtual network architecture.
- Firewall: Security enforcement can be performed at the hypervisor layer, at the level of the virtual NIC. Firewall rules can therefore be enforced in a scalable way without creating bottlenecks on physical firewall appliances. Because the firewall is distributed across the hypervisors, it generates very little CPU overhead and can run at line rate.
- Logical load balancing: Supports Layer 4 through Layer 7 load balancing and can perform SSL termination.
- VPN services: Provides Layer 2 and Layer 3 VPN services.
The architecture of NSX-V is very simple, because its logical hierarchy is very clear—management plane, control plane, data plane—and each plane has only a few components. Some people nevertheless find it complex, because anyone who studies the data plane in depth discovers that although its functionality looks simple, its forwarding principles and forwarding behavior are quite intricate. To interject: the foregoing was a general introduction; we now turn to NSX-V itself. The figure shows the NSX-V architecture.
The main component of the control plane is the NSX Controller, and the primary component of the management plane is NSX Manager. The main difference between NSX-V and NSX-MH lies in the data-plane components.
The NSX-V data plane consists of distributed services (including logical switches, logical routers, and logical firewalls) and the NSX Edge gateway services. The distributed services are used primarily for east-west traffic between endpoints (virtual machines), while the NSX Edge gateway services handle north-south traffic between the logical and physical networks, plus features the distributed services cannot provide, such as load balancing. The underlying virtualized environment of NSX-V is the ESXi hypervisor—a pure vSphere environment with no other virtualization platform involved—whereas the underlying environment of NSX-MH can be any hypervisor except Hyper-V. NSX-V distributed services are built on vSphere Distributed Switches, while in NSX-MH they are built on OVS. The NSX Edge gateway is likewise built from virtual machines on ESXi hosts; in NSX-MH this component is replaced by a somewhat less feature-rich NSX Layer 2/Layer 3 gateway.
From the logical architecture of NSX-V it is easy to see that the network abstractions and distributed services these logical components provide are fully decoupled from the physical network. Communication between virtual machines can be completed entirely within the logical network through VXLAN encapsulation and has no dependence on the underlying physical network architecture; only when a virtual machine communicates with the outside world through the NSX Edge do the routing and switching policies of the physical network come into play.
The NSX-V management plane component, NSX Manager, handles operations and management and serves as the integration point for a cloud management platform (CMP). In an NSX-V environment, NSX Manager is installed as a virtual machine on an ESXi host. Its role is primarily to configure logical switches and connect virtual machines to them, after which distributed logical routers, distributed firewalls, and other services can be configured on top of the logical switches. The NSX Edge gateway services are also configured through the NSX Manager interface. NSX Manager provides a graphical configuration interface (UI) for administration and acts as the API entry point for NSX; most configuration can be done in the UI without developers having to write code.
In an NSX-V environment, NSX Manager must be connected to and integrated with vCenter, and the two must be paired 1:1; in other words, if we deploy two vCenter Servers in the vSphere virtualization platform, we need to deploy two NSX Managers. By registering with vCenter and providing a plug-in (and its icon) in the vSphere Web Client, NSX Manager enables deployment and configuration of the NSX-V network virtualization platform.
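Because NSX Manager is also the API entry point, most of what the UI does can be driven programmatically as well. Below is a minimal Python sketch of querying NSX Manager over its REST API; the host name and credentials are placeholders, and the transport-zone endpoint path is an assumption based on the NSX-V 6.x API, so it should be checked against the API guide for your version.

```python
# Minimal sketch: query NSX Manager's REST API for transport zones.
# Assumptions: NSX-V 6.x API, endpoint /api/2.0/vdn/scopes, basic auth;
# the host name and credentials below are placeholders.
import requests

NSX_MANAGER = "https://nsxmgr.example.local"   # hypothetical NSX Manager address
AUTH = ("admin", "password")                   # placeholder credentials

def list_transport_zones():
    # NSX Manager typically uses a self-signed certificate, hence verify=False
    # in this sketch; use a proper CA bundle in practice.
    resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/scopes",
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.text  # NSX-V answers with XML describing each transport zone

if __name__ == "__main__":
    print(list_transport_zones())
```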
Several of the main responsibilities of NSX Manager are as follows:
1. Configure the NSX controller cluster.
2. Install vSphere Installation Bundles (VIBs) into the ESXi host's hypervisor to enable VXLAN, distributed routing, and distributed firewall capabilities, and to exchange control information with the NSX Controller cluster.
3. Configure the NSX Edge services gateway and its associated network services such as load balancing, VPN, and NAT. Create templates, snapshots, and so on for the various network services, enabling fast, automated deployment of logical network capabilities. Generate self-signed certificates for the NSX Controllers so that ESXi hosts can join the NSX domain, strengthening the security of the NSX control plane.
4. Provide network traffic collection and monitoring functions.

The NSX Controller is the control-plane component. Its job is to manage the switching and routing modules running on top of the hypervisors. It is also deployed as virtual machines on ESXi hosts. For scalability and high availability, NSX Controllers are generally deployed in cluster mode (VMware recommends at least three NSX Controllers per cluster). Each NSX Controller should have at least 4 vCPUs and 4 GB of memory and must be deployed within the same vCenter. During deployment, the password set on the first node of the cluster is synchronized to the other nodes.
The NSX Controller serves as the control plane and is responsible for centralized policy control of forwarding-plane traffic. It provides the following:
- Distributes Vxlan and logical routing information to ESXi hosts.
- Supports clustering of NSX Controllers, distributing the workload across the cluster nodes.
- Removes the need for multicast in the physical network: customers do not need to allocate multicast IP addresses or deploy physical network devices that support PIM routing or IGMP snooping.
- Suppresses ARP broadcast traffic in the VXLAN network environment, reducing ARP flooding across the Layer 2 network.
In the NSX-V architecture, the data plane is built on the VDS. A VDS must be enabled on each ESXi hypervisor, and each ESXi host has a user space and a kernel space (as shown).
NSX-V functionality—distributed switching and routing, the distributed firewall, and VXLAN encapsulation and decapsulation—is implemented by installing VMware Installation Bundles (VIBs) into the kernel space of the ESXi host's hypervisor.
The user space contains the components that provide the communication paths to the control plane and the management plane:
- The vsfwd process acts as a RabbitMQ client and establishes a message-queue connection to the RabbitMQ server hosted on NSX Manager. Over this connection, NSX Manager sends various information to the ESXi host: the policy rules used by the kernel module to implement the distributed firewall, the Controller node IP addresses, and the private key and host certificate used to authenticate the channel between host and Controller, as well as instructions to securely create and delete distributed router instances.
- The User World Agent process (netcpa) establishes an SSL communication channel over TCP between the ESXi hypervisor and the Controller cluster. Using this channel between the control plane and the ESXi hypervisor, the NSX Controller populates its MAC address table, ARP table, and VTEP table, maintaining the information needed for fast communication across established connections in the logical network (a toy sketch of these tables follows).
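To make the role of these tables concrete, here is a toy sketch—not NSX code—of the kind of state the Controller distributes to each host: a VTEP table mapping VNIs to the tunnel endpoints that participate in them, a MAC table mapping inner MAC addresses to VTEPs, and an ARP table that lets the host answer ARP requests locally (ARP suppression) instead of flooding them. All VNIs and addresses below are made up.

```python
# Toy model of the per-VNI tables an NSX Controller pushes to hosts.
# All addresses and VNIs are made up for illustration.

vtep_table = {               # VNI -> set of VTEP IPs participating in that segment
    5001: {"10.0.0.11", "10.0.0.12"},
}

mac_table = {                # (VNI, inner MAC) -> VTEP IP hosting that MAC
    (5001, "00:50:56:aa:bb:01"): "10.0.0.11",
    (5001, "00:50:56:aa:bb:02"): "10.0.0.12",
}

arp_table = {                # (VNI, inner IP) -> inner MAC, used for ARP suppression
    (5001, "192.168.10.1"): "00:50:56:aa:bb:01",
    (5001, "192.168.10.2"): "00:50:56:aa:bb:02",
}

def resolve_arp_locally(vni, target_ip):
    """Answer an ARP request from the local table instead of flooding it."""
    return arp_table.get((vni, target_ip))

def next_hop_vtep(vni, dst_mac):
    """Find the remote VTEP that a unicast frame must be tunnelled to."""
    return mac_table.get((vni, dst_mac))

print(resolve_arp_locally(5001, "192.168.10.2"))   # -> 00:50:56:aa:bb:02
print(next_hop_vtep(5001, "00:50:56:aa:bb:02"))    # -> 10.0.0.12
```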
That covers the overall structure; next we discuss the data plane.
First, let's look at NSX switching, "virtual-to-physical" connectivity, and bridging between VXLAN and VLAN.
In the NSX network virtualization platform, logical switches fully reproduce isolated Layer 2 networks, greatly improving flexibility and agility for users. Both virtual and physical endpoints can easily connect to their own logical subnets in the data center and establish connectivity that is independent of the physical topology. All of these advantages stem from the decoupling of the physical network (underlay) from the logical network (overlay) that NSX network virtualization achieves.
The figure shows the separation of the physical network and the logical network architecture. The logical network is made completely independent of the underlying physical architecture by using a VXLAN overlay, which allows a Layer 2 network to scale horizontally across different server racks and even across data centers.
Compared with VLANs, VXLAN expands the number of available segments exponentially beyond the limit of the 802.1Q protocol, to as many as 16,777,216 segments.
VXLAN is a tunnel-based forwarding mode that encapsulates Ethernet frames inside UDP at the transport layer. It defines an entity called the VTEP (VXLAN Tunnel Endpoint), which encapsulates and decapsulates traffic at either end of a VXLAN tunnel; in the VMware NSX platform, a VMkernel interface on each host acts as the VTEP. VXLAN adds a new identifier, the VNI (VXLAN Network Identifier), which replaces the VLAN ID as the way to identify a VXLAN segment; in the VMware NSX platform, VNI numbering starts at 5000.
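The 24-bit VNI is what yields the 2^24 = 16,777,216 segments mentioned above. The sketch below packs and unpacks the 8-byte VXLAN header defined in RFC 7348 (a flags byte of 0x08 indicating a valid VNI, followed by the 24-bit VNI); it illustrates the header format only, not the NSX implementation.

```python
# Sketch of the 8-byte VXLAN header (RFC 7348): flags + 24-bit VNI + reserved fields.
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN destination UDP port
MAX_SEGMENTS = 2 ** 24         # 16,777,216 possible VNIs

def pack_vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < MAX_SEGMENTS
    flags = 0x08               # "I" bit set: the VNI field is valid
    # Header layout: flags(1) + reserved(3) + VNI(3) + reserved(1)
    return struct.pack("!B3x", flags) + vni.to_bytes(3, "big") + b"\x00"

def unpack_vni(header: bytes) -> int:
    return int.from_bytes(header[4:7], "big")

hdr = pack_vxlan_header(5001)   # NSX-V starts numbering VNIs at 5000
print(len(hdr), unpack_vni(hdr), MAX_SEGMENTS)   # -> 8 5001 16777216
```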
In addition, the VMware NSX platform uses a VTEP proxy mechanism, which is responsible for forwarding VXLAN traffic from the local subnet to another subnet. A transport zone defines the configurable span of a VNI: vSphere clusters within the same transport zone use the same set of VNIs, a transport zone can contain ESXi hosts from different vSphere clusters, and a vSphere cluster can belong to more than one transport zone. The transport zone determines which hosts or clusters a created logical switch is made available to.
Here we briefly describe how a Layer 2 connection is established, over the VXLAN overlay, between virtual machines on different ESXi hosts:
1. Virtual machine 1 sends a frame destined for virtual machine 2 on the same logical subnet.
2. The VTEP defined on virtual machine 1's ESXi host encapsulates the traffic before sending it onto the transport network.
3. The transport network only needs to know the IP addresses of the source and destination ESXi hosts in order to carry the VXLAN tunnel between the two addresses.
4. The destination ESXi host receives the VXLAN frame, decapsulates it, and identifies the Layer 2 segment it belongs to (using the VNI).
5. Finally, the frame is delivered to virtual machine 2.
The transfer process is as follows:
The above describes the basic case in which two VMs on different ESXi hosts communicate directly: the VXLAN traffic is encapsulated by the hypervisor of the source ESXi host, decapsulated at the destination, and the two VMs then communicate. This is relatively simple and easy to understand. Sometimes, however, in three cases, VXLAN traffic originated by a virtual machine needs to be sent to every virtual machine attached to the same NSX logical switch:
1. Broadcast
2. Unknown unicast
3. Multicast
We usually refer to this multi-destination traffic as BUM traffic (Broadcast, Unknown unicast, Multicast). In any of these three scenarios, traffic originating from the source virtual machine must be replicated to multiple remote hosts on the same logical network. NSX supports three different replication modes for this multi-destination VXLAN traffic on its logical switches (a conceptual sketch follows the list below):
1. Multicast (multicast)
2. Unicast (Unicast)
3. Hybrid
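A simplified way to see the difference between the three modes is to count how many copies of one BUM frame the source VTEP itself must send. The sketch below models this under simple assumptions (multicast mode: one copy onto the segment's multicast group; unicast mode: one copy per local VTEP plus one per remote subnet, sent to a proxy VTEP there; hybrid mode: one local Layer 2 multicast plus one unicast per remote subnet). It is a conceptual illustration, not NSX's actual forwarding code.

```python
# Conceptual model of how many copies the source VTEP sends for one BUM frame.
# local_vteps: other VTEPs in the source host's subnet; remote_subnets: number of
# other IP subnets containing VTEPs for this VNI. Purely illustrative.

def copies_sent_by_source(mode: str, local_vteps: int, remote_subnets: int) -> int:
    if mode == "multicast":
        # One copy onto the physical multicast group; the network replicates it.
        return 1
    if mode == "unicast":
        # One unicast copy per local VTEP, plus one per remote subnet (to a proxy
        # VTEP, which re-replicates to the VTEPs in its own subnet).
        return local_vteps + remote_subnets
    if mode == "hybrid":
        # One Layer 2 multicast copy locally, plus one unicast copy per remote subnet.
        return 1 + remote_subnets
    raise ValueError(mode)

for mode in ("multicast", "unicast", "hybrid"):
    print(mode, copies_sent_by_source(mode, local_vteps=9, remote_subnets=3))
```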
There are many scenarios in which Layer 2 traffic must flow between the virtual network and the physical network. Some typical ones are described below:
1. Deploying a typical multi-tier application model, generally consisting of a web (front-end) tier, an application tier, and a database tier. Database deployments often use "many-to-one" clustering technologies, such as Oracle RAC, in which multiple physical servers are clustered and treated as a single database server; such applications frequently do not run in virtualized environments. In that case, with the database server outside the virtualized environment, we need "virtual-to-physical" Layer 2 communication between the application tier and the database tier within the same subnet.
2. Migrating physical servers to VMs, that is, physical-to-virtual (P2V) migration. As server virtualization thrives, most organizations want to migrate applications previously deployed on physical servers into virtualized environments; during the migration, they may need "physical-to-virtual" Layer 2 communication between physical and virtual nodes in the same subnet.
3. Using an external physical device as the default gateway. In this case, a physical network device may be deployed as the default gateway for the virtual workloads connected to a logical switch, so a Layer 2 gateway function must be established; this, too, is "virtual-to-physical" Layer 2 communication.
4. Deploying other physical devices. For example, when the data center still has physical firewalls or physical load-balancing appliances that sit outside the NSX virtual network, "virtual-to-physical" Layer 2 communication is required. This is also an important means of building out the NSX ecosystem.
The simplified UI in NSX Manager used to configure logical routers makes them very easy to configure and maintain; dynamic routing protocols are used to discover and advertise NSX logical routes. The control plane still runs in the NSX Controller cluster, while the data plane is handled by the hypervisors of the ESXi hosts. In other words, all routing operations—algorithms, neighbor discovery, path selection, convergence, and so on—can be completed without leaving the virtualized environment and without being processed on the physical network. With NSX logical routing we can easily select the best path for routing within a virtual Layer 3 network. It also makes multi-tenant environments easier to implement: for example, if different VNIs in the virtual network use the same IP address space, we can deploy two separate distributed router instances, one connecting tenant A and one connecting tenant B, to ensure there are no conflicts in the network.
Logical routing is first configured through NSX Manager. During configuration, NSX Manager deploys a virtual machine that manages the logical routing; it works with the NSX Controller and supports the OSPF and BGP protocols. As described in the previous section, a kernel module has already been installed on each ESXi host; in the logical routing function it plays the role of a line card providing Layer 3 routing in a chassis switch—it receives routing and interface information from the NSX Controller cluster and is responsible for all forwarding-plane functions.
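The forwarding side of this is, conceptually, an ordinary longest-prefix-match lookup performed in the host's kernel module rather than on a physical router. Below is a minimal sketch of such a lookup using Python's ipaddress module, with made-up logical interfaces; NSX's actual data path is of course implemented in the ESXi kernel, not in Python.

```python
# Minimal longest-prefix-match sketch: the kind of lookup a distributed logical
# router performs in the hypervisor. Prefixes and next hops are made up.
import ipaddress

routing_table = [
    (ipaddress.ip_network("172.16.10.0/24"), "LIF-web"),         # web tier logical interface
    (ipaddress.ip_network("172.16.20.0/24"), "LIF-app"),         # app tier logical interface
    (ipaddress.ip_network("0.0.0.0/0"),      "uplink-to-edge"),  # default route toward NSX Edge
]

def lookup(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(net, nh) for net, nh in routing_table if dst in net]
    # Longest prefix wins.
    return max(candidates, key=lambda item: item[0].prefixlen)[1]

print(lookup("172.16.20.15"))   # -> LIF-app (east-west, stays in the hypervisor)
print(lookup("8.8.8.8"))        # -> uplink-to-edge (north-south, via NSX Edge)
```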
A typical deployment pattern connects different logical switches to a logical router and lets the logical network interact with the physical network. Such a deployment serves two purposes:
1. Connecting endpoints that belong to separate Layer 2 networks (whether the endpoints are logical or physical)
2. Connecting endpoints in the logical network to external physical Layer 3 network devices
The first scenario covers east-west traffic and typically occurs within the data center. The second covers north-south traffic, where the data center connects to an external physical network (such as a WAN or the Internet).
The first is what we usually call NSX distributed logical routing; the second is edge routing, implemented on the NSX Edge gateway.
The logical schema diagram is as follows:
Having analyzed NSX distributed logical routing, we now look at the advantages of such a deployment over a traditional physical network deployment.
In a traditional physical network, when a web server communicates with an app server in the same host but in a different network segment, a Layer 3 switch has to handle the traffic between them. The traffic therefore leaves the host, passes through the top-of-rack (ToR) Layer 2 switch to the core switch, and then returns through the Layer 2 switch back into the host—a 4-hop path. If the ToR switch has Layer 3 functionality enabled, the communication needs only 2 hops. In the NSX environment, since Layer 3 functionality lives directly at the hypervisor level of the host, the web server and the app server are effectively directly connected, and the path between them is 0 hops, as shown. Note that the effect on hop count is the same whether the environment is NSX-V or NSX-MH. There is no time today to go deeper into the analysis; the technical details are in fact extensive.
For Layer 3 connections between different hosts, the NSX environment likewise reduces the 4-hop path required in a traditional physical network to 2 hops (as shown).
After this brief introduction to switching and routing, we move on to NSX Edge. What services can the NSX Edge gateway provide? Let's look at an actual configuration interface:
All of the features shown here are supported by NSX Edge. NAT comes in two types: source NAT (SNAT) and destination NAT (DNAT). It is used to translate between external and internal addresses, hiding internal addressing information and protecting sensitive data.
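Conceptually, the two NAT types rewrite opposite ends of a connection: SNAT rewrites the source of outbound traffic to an external address, while DNAT rewrites the destination of inbound traffic to an internal one. The sketch below illustrates this with made-up addresses; it models the concept only and is not the NSX Edge configuration syntax.

```python
# Conceptual SNAT/DNAT rewrite on a (src_ip, dst_ip) pair. Addresses are made up.

EDGE_EXTERNAL_IP = "203.0.113.10"   # hypothetical NSX Edge uplink address
INTERNAL_WEB_IP = "172.16.10.11"    # hypothetical web server behind the Edge

def snat_outbound(src_ip: str, dst_ip: str):
    """Outbound: hide the internal source behind the Edge's external address."""
    return (EDGE_EXTERNAL_IP, dst_ip)

def dnat_inbound(src_ip: str, dst_ip: str):
    """Inbound: map traffic hitting the Edge's external address to the internal server."""
    if dst_ip == EDGE_EXTERNAL_IP:
        return (src_ip, INTERNAL_WEB_IP)
    return (src_ip, dst_ip)

print(snat_outbound("172.16.10.11", "198.51.100.7"))  # -> ('203.0.113.10', '198.51.100.7')
print(dnat_inbound("198.51.100.7", "203.0.113.10"))   # -> ('198.51.100.7', '172.16.10.11')
```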
NSX Edge can deploy its load-balancing service in two modes: one-arm mode and inline mode. One-arm mode, also called proxy mode, is deployed using a dedicated NSX Edge in the data center.
Consider the traffic model of a web server in the same host as an app server, which also serves external clients, comparing traditional load balancing with NSX one-arm mode. In traditional mode, the Layer 3 traffic between the web server and the app server must pass through the Layer 2 switch and the Layer 3 switch, detour through the firewall deployed in bypass mode, then return through the Layer 3 switch and the Layer 2 switch to the physical host—a 6-hop path. Once the web and app servers are connected, the web server provides services externally; that north-south traffic must reach the core switch, detour through the firewall for filtering (for example, permitting only HTTP and HTTPS), return to the core switch, detour again through the load balancer, return to the core switch once more, and finally reach the host through a Layer 2 switch to access the web service—a 7-hop path, for a total of 13 hops. In the NSX environment, the web server and the app server in the same host are directly connected within the logical network (0 hops) even though they are in different segments, while external access traffic passes through the core Layer 3 switch and the Layer 2 switch to the NSX Edge, which sits between the logical and physical networks and provides the firewall service, and then returns through the Layer 2 switch to the host to reach the load-balanced web service. Because in one-arm mode the load balancer and the web server are directly connected (0 hops), the whole path is only 5 hops.
For VMs on different hosts, the situation is similar to the same-host case. As shown, load balancing an application in traditional mode still requires a 13-hop path, while in the NSX environment only 7 hops are needed—2 more than in the same-host deployment, because logical-network traffic between different hosts must pass through the Layer 2 switch to reach the other host.
Inline mode (also called transparent mode) deploys load balancing in the opposite way to one-arm mode: a centralized NSX Edge provides both routing and load-balancing services. The deployment topology in the data center is shown below; we can see that it uses the NSX Edge sitting between the logical network and the physical network to deploy load balancing.
We also look at the traffic model when load balancing is deployed in inline mode. As shown, for virtual machines within the same host, the traditional-mode traffic between the web server and the app server still takes 13 hops. In the NSX environment, both inline mode and one-arm mode need only 5 hops, because enabling both the firewall service and the load-balancing service on the NSX Edge between the physical and logical networks adds no extra path to the system.
Similarly, after deploying inline-mode load balancing, traffic between web servers and app servers on different hosts follows the same path as in one-arm mode: the 13 hops of a traditional deployment are reduced to 7 (as shown).
The advantage of inline mode is that it is equally easy to deploy and gives the servers/virtual machines full visibility of the original client IP address. From a design standpoint, however, the NSX Edge usually has to be forced to be the default gateway for the server farm's logical segments, which means only centralized routing (not distributed routing) can be used in those segments, so this deployment is less flexible.
The Layer 2 VPN provided by NSX Edge allows Layer 2 connectivity between two separate data centers, enabling VMs to be migrated between data centers and storage to be replicated and backed up across them. This approach can also be used to connect private and public clouds: many businesses want data center redundancy but, to save costs, build only one data center and use the public cloud as its backup.
The Layer 3 VPN is used primarily for remote access clients connecting to data center resources. In general, remote workers use an SSL VPN to connect to the data center and access its services, while a remote office uses an IPsec VPN to connect to the data center.
Connecting to an NSX Edge over SSL VPN to provide this Layer 3 VPN is referred to as "SSL VPN-Plus".
VMware NSX is not only a network virtualization platform but also a security virtualization platform. Like the network services, it provides and deploys Layer 2 through Layer 7 security in software. The NSX network virtualization platform offers two types of firewall capability: a centralized virtual firewall service provided by NSX Edge, used mainly for north-south traffic, and a distributed firewall based on micro-segmentation technology, used mainly for east-west traffic. The logical topology of the NSX Edge firewall and the distributed firewall within the NSX platform is shown below:
Three components are closely involved in the operation of the NSX distributed firewall, and they deserve explanation. In the NSX network virtualization platform, the distributed logical switches and distributed logical routers use NSX Manager as the management-plane component and the NSX Controller as the control-plane component; for the NSX distributed firewall, however, the management-plane and control-plane components are quite different—as mentioned earlier, the distributed firewall communicates directly with NSX Manager through the vsfwd service process. The management-plane, control-plane, and data-plane components of the NSX distributed firewall are described below:
1. vCenter Server: In an NSX distributed firewall deployment, vCenter serves as the management plane. Distributed firewall policy rules are published through the vSphere Web Client, and vCenter objects—clusters, VDS port groups, logical switches, virtual machines, vNICs, resource pools, and so on—can then be used as the sources and destinations of these policy rules.
2. NSX Manager: In an NSX distributed firewall deployment, NSX Manager serves as the control plane. When NSX Manager receives policy rules from vCenter Server, it stores them in its local database and pushes the distributed firewall policy rules to the ESXi hosts. Whenever the policy rules change, they are republished and pushed out across the system. NSX Manager can also receive firewall policy rules directly through REST API calls from a CMP, or from a third-party security platform such as Palo Alto.
3. ESXi host: In an NSX distributed firewall deployment, the ESXi host serves as the data plane. The ESXi host receives the distributed firewall policy pushed from NSX Manager, translates the rules, and enforces the policy in real time in kernel space. All virtual machine traffic can thus be inspected and enforced at the ESXi host. For example, when virtual machine 1 and virtual machine 2 on different ESXi hosts need to communicate, the firewall rules are applied as virtual machine 1's traffic leaves ESXi-1 and again as the traffic enters ESXi-2, after which the permitted traffic reaches virtual machine 2 (see the sketch after this list).
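To illustrate what "translating the rules and enforcing them at the host" amounts to, here is a toy sketch of a distributed firewall rule table evaluated first-match against a flow at the vNIC. The rule fields, group names, and services are made up for illustration; this is not the NSX rule format or API.

```python
# Toy distributed-firewall rule table, evaluated first-match at the vNIC.
# Groups, services, and the default-deny rule are made up for illustration.

rules = [
    {"src": "web-tier", "dst": "app-tier", "service": "tcp/8443", "action": "allow"},
    {"src": "any",      "dst": "db-tier",  "service": "tcp/1521", "action": "block"},
    {"src": "any",      "dst": "any",      "service": "any",      "action": "block"},  # default deny
]

def evaluate(flow: dict) -> str:
    """Return the action of the first rule matching this flow."""
    for rule in rules:
        if (rule["src"] in ("any", flow["src_group"]) and
                rule["dst"] in ("any", flow["dst_group"]) and
                rule["service"] in ("any", flow["service"])):
            return rule["action"]
    return "block"

# A web VM talking to an app VM on tcp/8443 is allowed at its own vNIC,
# without the traffic ever detouring to a physical firewall.
print(evaluate({"src_group": "web-tier", "dst_group": "app-tier", "service": "tcp/8443"}))
print(evaluate({"src_group": "web-tier", "dst_group": "db-tier",  "service": "tcp/1521"}))
```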
The distributed firewall in the NSX network virtualization platform provides the following capabilities:
- Isolation: This is the basic function a firewall deployment must provide in an enterprise or data center. It completely isolates unrelated networks from one another, preserving their independence—for example, development, test, and production networks. Isolation is also a requirement for multi-tenancy in a virtualized environment. Any isolated, standalone virtual network can host workloads anywhere in the data center: because the virtual network is decoupled from the underlying physical network, as long as a workload is connected to the right virtual network, its physical location in the data center does not matter. Any standalone virtual network can contain workloads distributed anywhere in the data center; workloads in the same virtual network can reside on the same or different hypervisors, and workloads from multiple standalone virtual networks can reside on the same hypervisor.
In fact, in a virtual network built with an overlay technology such as VXLAN, different network segments are inherently isolated. When different network segments need to communicate with each other, however, the security policies of the distributed firewall can be invoked to isolate sensitive traffic. To implement this isolation we do not need to introduce any physical subnets, VLANs, ACLs, or physical firewall rules; everything is done inside the virtualized environment.
- Segmentation: Related to isolation, but applied within a multi-tier virtual network, segmentation partitions security zones among related security groups and allows them to communicate according to security policy. In the NSX network virtualization environment this segmentation technology is called micro-segmentation, because with the NSX distributed firewall we can divide traffic at a very fine granularity and provide security for every small network segment.
- Advanced services: Within the virtual network, the NSX network virtualization platform provides a Layer 2 through Layer 4 firewall and implements micro-segmentation. In some environments, however, applications need higher-layer network security policies for protection. In these cases users can leverage the NSX platform to integrate third-party security vendors' Layer 4 through Layer 7 security services on top of it, providing a more complete, application-aware security solution. NSX integrates third-party network security services into the virtual network and steers traffic through logical channels to the vNIC, so that the applications behind the vNIC can use these services.
Some of NSX's key security partners include Palo Alto Networks, Intel Security (McAfee), Check Point, Symantec, Trend Micro, and others. An enterprise's security team can choose different security vendors' solutions from the VMware ecosystem for different applications.
Having discussed the implementation of the distributed firewall, we analyze its traffic model, as we did earlier for distributed routing and load balancing. In a traditional firewall deployment, Layer 3 communication between a web server and an app server in the same host requires a 6-hop path, because the traffic detours to a physical firewall attached in bypass mode to the core switch, adding 2 hops to the traditional 4-hop Layer 3 path described earlier. As with distributed routing, because the distributed firewall also runs in the host's hypervisor, in the NSX environment the Layer 3 communication between a web server and an app server in the same host is again a direct connection (0 hops), as shown.
For Layer 3 communication between different hosts, similar to the distributed routing case, the path is likewise reduced to 2 hops (as shown).
That concludes the NSX-V content. Let's now quickly cover NSX-MH, whose overall architecture is essentially the same as NSX-V; only the data-plane components and their implementation differ. The following diagram shows the internal logical architecture of NSX-MH.
Compared with the solution that uses VMware's own vSphere as the hypervisor (NSX-V, whose architecture and operation we explained in detail in the previous chapters), the NSX-MH architecture has exactly the same logical hierarchy: management plane, control plane, data plane. The management-plane and control-plane components are likewise NSX Manager and the NSX Controller (the Controllers can be deployed as a cluster); the biggest differences are in the data plane. In NSX-V, the server virtualization software is vSphere: a physical server with vSphere installed is called an ESXi host and runs multiple virtual machines, which connect to one another through vSphere Distributed Switches, and the other NSX network virtualization components and features (primarily logical switches, distributed logical routers, and distributed firewalls, but not the features provided by NSX Edge) are built on top of those vSphere Distributed Switches. In NSX-MH, the underlying virtualization platform can be vSphere, Xen, or KVM, and the virtual machines running on them are connected by Open vSwitch (OVS), originally designed and developed by Nicira; the NSX logical switches, distributed logical routers, distributed firewalls, and so on are built on top of OVS. On May 22, 2014, OVS announced support for running on the Hyper-V platform. Although NSX itself does not yet support Hyper-V, the fact that OVS—the most important component in the NSX-MH architecture—announced this support paves the way for future NSX support of Hyper-V-based virtualization platforms. In addition, the NSX Edge of NSX-V is replaced in NSX-MH by a Layer 2/Layer 3 gateway providing similar functionality.
The overall topology of the NSX-MH solution is shown below. As we can see, the biggest difference from NSX-V is that in NSX-MH server virtualization may be built on Xen or KVM, or on a mix of the two. Sometimes ESXi also appears in the architecture, so the virtualization software in the data center may include Xen, KVM, and ESXi at the same time. The logical switch is built on OVS rather than on vSphere Distributed Switches (the OVS used on ESXi has a dedicated name in the NSX environment: NSX vSwitch, or NVS).
Incidentally, in NSX-MH the control plane interacts with OVS using OpenFlow, a protocol whose development Nicira was deeply involved in.
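For a feel of the NSX-MH data plane, the sketch below drives a standalone Open vSwitch from Python to create an integration bridge and a VXLAN tunnel port carrying a tunnel key. This shows only the underlying OVS constructs using standard ovs-vsctl commands; it is not the NSX Controller's OpenFlow/OVSDB programming, and the bridge name, peer address, and key are made-up examples.

```python
# Sketch: build a VXLAN tunnel on a standalone Open vSwitch host from Python.
# In NSX-MH the Controller programs OVS via OpenFlow/OVSDB instead; this just
# shows the underlying OVS constructs. Names and addresses are placeholders.
import subprocess

def ovs(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("--may-exist", "add-br", "br-int")                 # integration bridge
ovs("--may-exist", "add-port", "br-int", "vx-to-host2",
    "--", "set", "interface", "vx-to-host2",
    "type=vxlan",
    "options:remote_ip=10.0.0.12",                     # peer VTEP (hypothetical)
    "options:key=5001")                                # tunnel key, i.e. the VNI
```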
Finally, a brief description of how NSX integrates with other vendors. On the security side, as discussed above, third-party security vendors are used to provide Layer 5 through Layer 7 security, since the NSX firewall itself only covers Layer 2 through Layer 4. For load balancing, F5 can be deployed not only in one-arm or inline mode but also as a distributed deployment. This capability is very powerful and can take advantage of all of F5's advanced features.
Another important integration is with OpenStack. NSX-V integrates with OpenStack mainly through the VMware Integrated OpenStack (VIO) distribution, which lets IT administrators deploy OpenStack services simply, quickly, and easily on an existing vSphere environment. VMware's OpenStack integration allows Neutron networking to be implemented through either NSX-V or the VDS, with NSX offering more capability than a VDS-based deployment. If NSX-MH needs to integrate with OpenStack, the VIO software is not required; Neutron can be deployed directly using the NSX plugin.
A brief discussion: from SDN pioneer Nicira to the VMware NSX network virtualization platform.