#ENABLED_SERVICES+=,q-lbaas
## Neutron - VPN as a Service
#ENABLED_SERVICES+=,q-vpn
## Neutron - Firewall as a Service
#ENABLED_SERVICES+=,q-fwaas
# VXLAN tunnel configuration
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
# Cinder - Block Device Service
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch
# Heat - Orchestration Service
#ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
#IMAGE_URLS+=",
logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks and client-server applications are overlay networks because their nodes run on top of the Internet. The Internet was originally built as an overlay upon the telephone network, while today, through the advent of VoIP, the telephone network is increasingly turning into an overlay network built on top of the Internet.
Test environment: 5 nodes (1 controller, 2 network nodes, 2 compute nodes), using VXLAN + Linux bridge.
1. Verify that all Neutron and Nova services are running:
nova service-list
neutron agent-list
2. Create two networks:
a) neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/29
b) neutron net-create private1
neutron subnet-create --name private1-subnet private1 10.0.1.0/29
3. Create a shared public network to connect to the
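Step 3 above is cut off; as a hedged sketch, creating a shared external network with the same CLI usually looks like the following (the network name and CIDR here are illustrative, not from the original):
neutron net-create public --shared --router:external=True
neutron subnet-create --name public-subnet --disable-dhcp public 172.24.4.0/24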
1. Multi-host network requirements
Two articles are recommended reading before starting:
http://xelatex.github.io/2015/11/15/Battlefield-Calico-Flannel-Weave-and-Docker-Overlay-Network/
http://mp.weixin.qq.com/s?__biz=mzawmdu1mte1oq==mid=400983139idx=1sn=f033e3dca32ca9f0b7c9779528523e7escene=1srcid=1101jklwco9jnfjdnuum85pgfrom=singlemessage Isappinstalled=0#wechat_redirect
With Docker 1.9, the libnetwork team provides multi-host networking capabilities through an overlay network. However, the network functio
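As a minimal sketch of the libnetwork overlay networking mentioned above (network and container names are illustrative; Docker 1.9 also requires a key-value store such as Consul configured via the daemon's --cluster-store flag):
docker network create -d overlay --subnet=10.0.9.0/24 multihost-net
docker run -d --name web --net=multihost-net nginx
# a container on any other host joined to multihost-net can now reach "web"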
came in on, and would be received by VM A. VM A would unpack the ARP reply and find the MAC address it queried for in the source MAC address field of the ARP header.
Turning it on
Assuming ML2 + OVS >= 2.1:
Turn on GRE or VXLAN tenant networks as you normally would
Enable L2pop
On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population
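The snippet above covers only the server side. On each compute node, the conf file passed to the OVS agent typically needs L2pop enabled as well; a sketch, assuming the standard option name:
[agent]
l2_population = True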
Although the network access optimizations above achieve some improvement in scalability and manageability, they are not enough to provide the complete virtual network environment required by private cloud computing: on the one hand, the virtual network resources and intelligent service functions required for virtual machine deployment are missing; on the other hand, cloud computing must be able to schedule virtualized resources automatically, on demand.
In a private cloud environment, a complete
container without worrying about breaking the connection between the containers.
Users can create containers in any order.
Now that the new networking features are clear, let's look at how this part is implemented.
The cross-host portion of the networking is implemented with OVS (Open vSwitch) and VXLAN tunnels; isolation between containers is handled with iptables.
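A minimal sketch of that wiring, assuming Open vSwitch is installed (bridge, interface, and IP names are illustrative, not from the original):
# create an OVS bridge and a VXLAN tunnel port toward a peer host
ovs-vsctl add-br ovs-br0
ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.1.12
# isolate two containers from each other with iptables
iptables -A FORWARD -s 10.0.9.2 -d 10.0.9.3 -j DROP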
To understand networking's execution f
reliability of the network. OVS proved very stable in our previous one and a half years of OpenStack practice: with an OVS bridge on a 1G uplink, throughput loss was less than 5% compared to a physical machine.
Netplugin's original scheme was flow-table based: each new container added a flow entry, and it was added on all nodes, so as the number of containers grows the amount of flow state becomes unmanageable. We removed this function to reduce complexity and improve stability. In addition, the OVS rate-limit function was introduced, and the container
about how this is specifically implemented within the network.
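For the rate-limit function mentioned above, OVS supports per-interface ingress policing; a sketch with illustrative interface names and values:
# cap a container's veth interface at roughly 100 Mbps
ovs-vsctl set interface veth-c1 ingress_policing_rate=100000
ovs-vsctl set interface veth-c1 ingress_policing_burst=10000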
5. Selection of technologies for bridging networks
The data center network already has a variety of solutions and protocols that separate network services from data forwarding, such as GRE, NVGRE, VPLS, VXLAN, MACinIP, TRILL, SPB, and so on. Beyond the basic requirements of the network, we hope that the selected technologies and standards can:
1. be simple, convenient, and
Logical architecture of Kubernetes clusters
Before deploying a Kubernetes cluster in detail, we first show the logical architecture of the cluster. As can be seen, the entire system is divided into two parts: the first part is the Kubernetes APIServer, the core of the whole system, which manages all containers in the cluster; the second part is the minions, which run the container daemon and are where all containers reside. At the same time, the Open vSwitch program runs on each minion.
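A quick way to see both halves of this architecture from the APIServer's point of view (assuming an early Kubernetes release of that era, when worker nodes were still addressed as minions):
kubectl get minions   # the worker nodes running the container daemon
kubectl get pods      # the containers scheduled onto those minions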
As shown in Figure 6, the linuxbridge plug-in implements a Linux bridge, the openvswitch plug-in implements an Open vSwitch bridge, and the bigswitch plug-in implements an SDN controller; ml2 is a general plug-in. These L2 plug-ins are mainly divided into a plugin part that writes to the database and an agent part that runs on the compute node. The fields each plugin writes to the database differ only slightly, so much code was duplicated; ml2 can be understood as a common plugin that factors this out. Each plug-in basically implem
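As an illustration of ml2 acting as the common plugin, a hypothetical ml2_conf.ini fragment that selects type and mechanism drivers (the values here are examples, not from the original):
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,openvswitch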
This environment is as follows: one control node and one compute node. [Figure: 1.png]
The control node, shown above, has three NICs:
eno1777736 10.10.80.133 as the external network
eno33554960 10.10.10.130 as the management network
eno50332184 as the virtual machine network
The supported network types are flat, VLAN, VXLAN, GRE and
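Given the NIC roles above, a hypothetical linuxbridge agent fragment mapping them might look like the following (the physnet names are assumptions, not from the original):
[linux_bridge]
physical_interface_mappings = external:eno1777736,physnet1:eno50332184
[vxlan]
enable_vxlan = True
local_ip = 10.10.10.130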
between pod and pod. Flannel offers many options for the underlying transport, such as UDP, VXLAN, AWS VPC, and so on, as long as the two flannel endpoints can reach each other. The source flannel encapsulates the packet and the target flannel decapsulates it, so what docker0 finally sees is the original data; the process is very transparent, and there is no awareness of flannel in the middle. Flannel installation and configuration is covered widely online, so it is not repeated here. One thing to no
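As a sketch of how a backend such as VXLAN is selected: flannel reads its network configuration from etcd under its default key prefix, so publishing a config like the following (the CIDR is illustrative) is usually the first step:
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "Backend": { "Type": "vxlan" } }'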