The construction process is skipped here; you can refer to the previously written build notes. The environment is as follows: one control node and one compute node. (Screenshot omitted.) The control node has three NICs: eno16777736 (10.10.80.133) as the external network, eno33554960 (10.10.10.130) as the management network, and eno50332184 as the virtual machine network.
# If Neutron is not declared, the good old nova-network will be used
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron
#VIF_PLUGGING_IS_FATAL=False
#VIF_PLUGGING_TIMEOUT=10
## Neutron - Load Balancing
#ENABLED_SERVICES+=,q-lbaas
## Neutron - VPN as a Service
#ENABLED_SERVICES+=,q-vpn
## Neutron - Firewall as a Service
#ENABLED_SERVICES+=,q-fwaas
# VXLAN tunnel configuration
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
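For reference, these settings normally live in devstack's local.conf under the [[local|localrc]] section; a minimal sketch (the host IP and the explicit disable of nova-network are assumptions, not from the original):

[[local|localrc]]
HOST_IP=10.10.10.130      # management-network address from the environment above (assumption)
disable_service n-net     # make sure nova-network stays off
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan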
/kube-centos/network/subnets/172.30.92.0-24
# Note: there is one subnet record per node; I have 2 nodes, each with the flannel plugin installed, so there is one record per node.
[root@k8s_node1 ~]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> get /kube-centos/network/config
# Output:
{"Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "host-gw"}}
# This command shows the main network configuration.
etcdctl --endpoints=${
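The last command above is cut off; to list the per-node subnet records one might run something like the following sketch (the ls subcommand and key path are assumptions based on the output shown above):

etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets
# Expected output: one key per node, e.g. /kube-centos/network/subnets/172.30.92.0-24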
table to wrap the packets sent to it by docker0, and delivers them to the target flanneld over the physical network, thus completing direct address communication between pod and pod. There are many options for the underlying communication protocol between flannel daemons, such as UDP, VXLAN, AWS VPC, and so on; as long as the flanneld on the far end can be reached, the source flanneld encapsulates the packet, the target flanneld decapsulates it, and the target docker0 finally delivers it to the destination container.
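The backend protocol is chosen in the flannel network config stored in etcd. A minimal sketch that would switch the host-gw example above to VXLAN (the key path is reused from this environment; the VNI value is an illustrative assumption):

etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  set /kube-centos/network/config \
  '{"Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan", "VNI": 1}}'
# Restart each flanneld afterwards so it picks up the new backend.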
Many new protocols have emerged, and been recommended, for optimizing data-center Ethernet and supporting server virtualization. Some of these protocols aim to achieve network virtualization by creating multiple virtual Ethernet networks that share the same physical infrastructure, much as multiple virtual machines share the same physical server.
Most protocols applicable to network virtualization basically use encapsulation and tunneling techniques.
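As a concrete illustration of encapsulation and tunneling on plain Linux (independent of any particular platform), a point-to-point VXLAN tunnel can be created with iproute2; the interface names, VNI, addresses, and peer IP below are made up for the example:

ip link add vxlan100 type vxlan id 100 remote 192.168.100.3 dstport 4789 dev eth0
ip addr add 10.20.0.1/24 dev vxlan100
ip link set vxlan100 up
# Ethernet frames sent through vxlan100 are encapsulated in UDP (port 4789) and carried over eth0.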
Supported match fields include NW_ECN, NW_TTL, DL_VLAN, DL_VLAN_PCP, IP_FRAG, ARP_SHA, ARP_THA, IPV6_SRC, IPV6_DST, etc.; supported actions include move and output: output:port, mod_dl_src/mod_dl_dst, set_field.
Configuring VXLAN, GRE, and IP addresses:
ovs-vsctl add-port br-ex port -- set interface port type=vxlan options:remote_ip=192.168.100.3
ovs-vsctl add-port br-ex port -- set interface port type=gre options:remote_ip=192.168.100.3
ovs-vsctl add-port br-ex port tag=10 -- set interface
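Flow rules exercising these match fields and actions can then be added with ovs-ofctl; a hedged example (the bridge name, VLAN ID, MAC address, and port number are assumptions for illustration):

# rewrite the source MAC on VLAN-10 traffic and send it out port 2
ovs-ofctl add-flow br-ex "priority=10,dl_vlan=10,actions=mod_dl_src:aa:bb:cc:dd:ee:ff,output:2"
ovs-ofctl dump-flows br-ex   # verify the rule and watch its packet counters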
the MAC of the VM originating the ARP request. If you carefully observe the explanation of the action part above, you would see that this is indeed the case.
Thus, the source MAC of the Ethernet frame would be B and the destination MAC A. In the ARP header, the source IP is 10.0.0.2 and the source MAC is B, while the destination IP is 10.0.0.1 and the destination MAC is A. This ARP reply will be forwarded back through the port on which it came in and will be received by VM A. VM A will unpack the ARP reply and find the MAC address (B) it was asking for.
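With the ARP responder enabled (see the configuration steps later in this article), these pre-computed replies are installed in table 21 of br-tun on each compute node and can be inspected with ovs-ofctl; the VLAN and IP values below are illustrative, not from the original:

ovs-ofctl dump-flows br-tun table=21
# A typical (abbreviated) responder entry looks like:
#   arp,dl_vlan=1,arp_tpa=10.0.0.2
#     actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:<MAC B>,
#             load:0x2->NXM_OF_ARP_OP[],...,IN_PORT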
creating a network, and these 3 parameters are actually obtained indirectly through the configuration file. The provider:network_type and provider:segmentation_id mentioned above are easy to understand, corresponding to the network type (VLAN or VXLAN) and the VID value, but what role does provider:physical_network play? For a non-tunnel network (VLAN), a VM's traffic, after passing br-xx2, needs its VID translated to that of the mapped physical network before it can communicate.
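The three provider attributes can also be supplied explicitly on the command line instead of coming from the configuration file; a sketch (the network names, the physical network label physnet1, and the VID values are assumptions):

neutron net-create vlan-demo --provider:network_type vlan \
  --provider:physical_network physnet1 --provider:segmentation_id 100
# for a tunnel network the physical_network attribute is omitted:
neutron net-create vxlan-demo --provider:network_type vxlan --provider:segmentation_id 1000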
A. The [linux_bridge] configuration corresponds to the public physical network interface (in this environment the first NIC is the management network, the second is the public network). (Screenshot omitted.) B. [vxlan]: disabling VXLAN overlay networks.
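Put together, the relevant part of linuxbridge_agent.ini for such a setup might look like the following sketch (the interface name eth1 and the "public" label are assumptions based on the NIC layout described above):

[linux_bridge]
physical_interface_mappings = public:eth1
[vxlan]
enable_vxlan = False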
Overlay networks
Interpretation from Wikipedia:
An overlay network is a computer network which is built on top of another network. Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks and client-server applications are overlay networks because their nodes run on top of the Internet. The Internet was originally built as an overlay upon the telephone network.
Test environment: 5 nodes (1 controller, 2 network, 2 compute), using VXLAN + Linux bridge.
1. Confirm that all Neutron and Nova services are running:
nova service-list
neutron agent-list
2. Create 2 networks:
a) neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/29
b) neutron net-create private1
neutron subnet-create --name private1-subnet private1 10.0.1.0/29
3. Create a shared public network to connect to the
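Step 3 is cut off above; creating a shared external network typically looks like the following sketch (the network name, the --shared/--router:external flags, and the 192.0.2.0/24 range are assumptions, since the original command is lost):

neutron net-create public --shared --router:external=True
neutron subnet-create --name public-subnet --disable-dhcp public 192.0.2.0/24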
OVS, flannel, and weave, when using VXLAN underneath (Docker 1.9's built-in overlay driver also uses VXLAN), can all build an overlay network; they simply differ in how they implement the specifics. OVS is a relatively mature technology that has been around for many years and is very powerful, but its configuration is complex, especially for large-scale configuration and projects.
For OVS >= 2.1:
Turn on GRE or VXLAN tenant networks as you normally would
Enable L2pop
On the Neutron API node, in the conf file you pass to the Neutron service (plugin.ini / ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population
On all compute nodes, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):
[agent]
l2_population = True
Enable the ARP responder: on each compute node, in the conf file you pass to the OVS agent (plugin.ini / ml2_conf.ini):
[agent]
arp_responder = True
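After editing the conf files, the OVS agents have to be restarted for l2pop and the ARP responder to take effect; a sketch (the service name varies by distribution and is an assumption here):

systemctl restart neutron-openvswitch-agent
# then confirm the responder flows exist on the compute node:
ovs-ofctl dump-flows br-tun table=21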