OpenStack Liberty Network Architecture Implementation Analysis (Part 1)

Tags: openvswitch

Before the Spring Festival I came across a series of articles that gives a very good introduction to the network implementation in the latest OpenStack release, Liberty, along with some of its newer techniques. After studying them carefully, I decided to write down what I learned.

The series is divided into seven parts, which cover:

1. Liberty Network Overview

2. Network Architecture

3. Neutron Configuration

4. Network Creation

5. Adding Routers, Networks, and Subnets

6. Booting a Virtual Machine on the Tenant Network

7. Booting a Virtual Machine on the Flat Network

Original article: OpenStack Liberty Neutron Deployment (Parts 1-7 overview): http://www.opencloudblog.com/?p=557

Below, I will likewise write up my own notes and experiments in seven parts.

1. Liberty Network Overview

The overall layout of nodes, networks, and services can be seen in the architecture diagram. A few points are worth highlighting:

1. An API node is added to the traditional three-node (control, network, compute) environment. Only the API node is exposed to the external network, so the control node is not directly reachable from outside, which improves the security of the system.

2. The networks involved are:

1) There are two routers, one internal and one external. The internal router carries only management traffic and ensures communication between the nodes.

2) The external router carries the API access network, the virtual machine networks, and the VXLAN transport network.

3. The distribution of services across the nodes is largely the same as in a traditional OpenStack deployment, except that the nova metadata service moves from the control node to the network node, which reduces the load created by virtual machines fetching their metadata.

4. Virtual network resources provided to tenants:

1) Each tenant can create a router that attaches both to a floating IP pool network and to the tenant's internal network, giving virtual machines access to the external network.

2) Tenants can be offered multiple floating IP pool networks, and the gateway of a floating IP pool network can be a physical router.

3) Tenants can also be offered multiple flat networks. The gateway of a flat network is a physical router, and only virtual machines attach to these networks.

2. Network Architecture

From the node and network diagram above, you can see:

1) Two floating networks use VLAN 100 and VLAN 101; SNAT and DNAT are handled via iptables.

2) Two flat networks use VLAN 200 and VLAN 201; traffic from virtual machines on these networks does not go through iptables NAT.

3) The VXLAN transport network uses VLAN 4000.

When VXLAN or GRE tunnel mode is used, the tunnel header is added to the IP and UDP packets, so frames can exceed the 1500-byte MTU. The traditional workaround is to push mtu=1400 to the virtual machines via DHCP, shrinking the guest MTU so that the encapsulated frame stays below 1500 bytes. Because this has caused problems in production environments, the author does not recommend lowering the guest MTU; instead, he recommends raising the switch MTU to 1600 so that the encapsulated packets can be forwarded intact.
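For reference, the traditional DHCP-based workaround mentioned above (the one the author advises against) is usually configured on the network node roughly as follows; the file paths and option placement are common defaults, not taken from the original article:

# /etc/neutron/dhcp_agent.ini
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
# DHCP option 26 is the interface MTU; push 1400 to the guests
dhcp-option-force=26,1400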

The specific VLAN configuration on the switch:

vlan 100 name floating-pool-1
vlan 101 name floating-pool-2
vlan 200 name flat-net-1
vlan 201 name flat-net-2
# Set the MTU to 1600 for the VXLAN VLAN 4000
vlan 4000 name vxlan mtu 1600
# Do not use VLAN 1 for untagged packets
vlan 4090 name native-vlan
#
############
#
interface vlan 100
 description floating-network-1
 ip address 198.18.0.1/20
interface vlan 101
 description floating-network-2
 ip address 198.18.16.1/20
interface vlan 200
 description flat-network-1
 ip address 198.19.1.1/24
interface vlan 201
 description flat-network-2
 ip address 198.19.2.1/24
# An L3 interface for the VXLAN VLAN could be added here
#
#############
#
# The ports to the nodes (network and compute)
# They use the same config!
# Just one link to each node - multiple links using LACP could also be used
interface port1
 description to-network-node
 mode trunk
 trunk native vlan 4090
 trunk vlan 100,101,200,201,4000,4090
 mtu 1600
interface port2
 description to-compute-node
 mode trunk
 trunk native vlan 4090
 trunk vlan 100,101,200,201,4000,4090
 mtu 1600

Network Node Configuration

Carrying all virtual machine traffic on the network and compute nodes over a single physical port (or a bonded port) is a quite innovative way to configure the hosts. Following the official OpenStack deployment guide you might need four physical NICs to support this network layout, but with the author's scheme the functions of those four ports can be provided by a single physical port, which greatly reduces the complexity of the network and is an elegant solution.

1) A dedicated physical port for VXLAN is unnecessary. Because VXLAN transport only needs an IP-layer address, you can create an internal port (L3vxlan) on br-uplink and assign it an IP address to carry the VXLAN tunnel traffic.

2) br-uplink is an OVS bridge built on a physical port such as eth1; it provides the uplink for both the tunnel bridge br-tun and the VLAN bridge br-vlan.

3) br-uplink and its patch ports toward br-int and br-vlan must be created by the administrator; the br-tun, br-vlan, and br-int bridge devices themselves are created and maintained by the OpenStack code.

4) In the traditional OpenStack scheme, two floating (external) network pools would require two L3 agents and two physical NICs configured as br-ex external bridges. In fact, the L3 agent is now smart enough to work with any bridge device: a single L3 agent can manage multiple floating pools.

5) The Liberty Neutron code no longer needs br-ex to implement the L3 agent's routing function.
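For completeness, the flat and VLAN provider networks reach br-vlan through the OVS agent's bridge mappings. A minimal sketch, assuming a single physical network named physnet1 (the name and file section are illustrative, not taken from the original article):

# OVS agent section of ml2_conf.ini on the network and compute nodes
[ovs]
bridge_mappings = physnet1:br-vlan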

Virtual device configuration on the network and compute nodes:

#
# The bridge which connects the nodes to the transport network
ovs-vsctl add-br br-uplink
# The bridge used by OpenStack Neutron to connect the VLAN and flat DHCP networks
ovs-vsctl add-br br-vlan
# The integration bridge used by OpenStack
ovs-vsctl add-br br-int
#
# Add the uplink (with dot1q tags 100,101,...)
# We assume that eth1 is the uplink interface
ip link set dev eth1 up
# Set the MTU of the physical uplink to the switch
ip link set dev eth1 mtu 1600
#
# Disable GRO and LRO!! on the uplink
ethtool -K eth1 gro off
ethtool -K eth1 lro off
#
# Enable UDP port hashing on Intel NICs to distribute traffic to different queues
ethtool -N eth1 rx-flow-hash udp4 sdfn
#
ovs-vsctl add-port br-uplink eth1 -- set port eth1 vlan_mode=trunk trunks=100,101,200,201,4000
#
# Patch ports between br-uplink and br-vlan
ovs-vsctl add-port br-vlan patch-to-uplink -- set interface patch-to-uplink type=patch options:peer=patch-to-vlan
ovs-vsctl add-port br-uplink patch-to-vlan -- set interface patch-to-vlan type=patch options:peer=patch-to-uplink
#
# !! On br-uplink, the VLAN tags allowed on the patch port from br-vlan must be filtered using OpenFlow rules
# !! If this is not done, there is a risk that VLANs from the infrastructure get mixed with the local VLANs
# !! of br-int, should the Neutron Open vSwitch agent fail to set up the VLAN mapping on br-vlan or br-int
# TBD
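# (Illustrative sketch only, not from the original article.) One possible form of this
# filtering: look up the OpenFlow port number of the patch port first, e.g.
#   ovs-vsctl get interface patch-to-vlan ofport
# and then, assuming it returns 2, allow only the infrastructure VLANs and drop the rest:
#   ovs-ofctl add-flow br-uplink "priority=20,in_port=2,dl_vlan=100,actions=normal"
#   ovs-ofctl add-flow br-uplink "priority=20,in_port=2,dl_vlan=101,actions=normal"
#   ovs-ofctl add-flow br-uplink "priority=20,in_port=2,dl_vlan=200,actions=normal"
#   ovs-ofctl add-flow br-uplink "priority=20,in_port=2,dl_vlan=201,actions=normal"
#   ovs-ofctl add-flow br-uplink "priority=10,in_port=2,actions=drop"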

###
# Create the Linux IP interface required for VXLAN transport
# This interface is attached to VLAN 4000 of br-uplink
# XXX = last octet of the VXLAN interface IP address of the node
ovs-vsctl add-port br-uplink L3vxlan tag=4000 -- set interface L3vxlan type=internal
ip addr add 10.255.255.XXX/24 dev L3vxlan
ip link set dev L3vxlan up
# Set the MTU of the logical VXLAN interface
ip link set dev L3vxlan mtu 1600

3. Neutron Configuration

For the Neutron configuration items, refer to the original article's configuration; here I will only point out a few important ones:

1) In ml2_conf.ini:

### local_ip is only used on the compute and network nodes ###

# local_ip = <ip address of the L3vxlan interface>

Here local_ip must be set to the IP address of the L3vxlan internal port on Open vSwitch; it is the endpoint address used for the VXLAN tunnels.
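Putting this together, the relevant part of ml2_conf.ini on a network or compute node might look roughly like the following (section layout assumed from a standard Liberty ML2/OVS setup; XXX is the same placeholder as in the L3vxlan configuration above):

[ovs]
# IP address assigned to the L3vxlan internal port
local_ip = 10.255.255.XXX

[agent]
tunnel_types = vxlan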

2) In l3_agent.ini:

#
# Very important - set the following entries to an empty string
# Do not leave the default values
gateway_external_network_id =
external_network_bridge =

The traditional L3 setup requires a br-ex external network bridge to be configured. Because the Liberty L3 agent no longer needs an external network bridge to be specified, the qg virtual device carrying the external network IP can be created directly on the br-int integration bridge of the network node, where the SNAT and DNAT of L3 routing are performed.
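Once a router with an external gateway has been created (covered in a later part), this can be verified on the network node; a suggested check, not from the original article:

# The qg- external port of the router should be plugged into br-int, not br-ex
ovs-vsctl list-ports br-int | grep qg-
# The port itself lives inside the router's network namespace
ip netns exec qrouter-<router-uuid> ip addr show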

metadata_ip = 127.0.0.1

In a traditional OpenStack deployment the nova metadata service runs on the control node, but here it runs on the network node, so the metadata IP must be configured as 127.0.0.1.
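In a stock Liberty installation this usually corresponds to pointing the neutron metadata agent at the local nova metadata API; a sketch assuming the default file location and standard option names (not quoted from the original article):

# /etc/neutron/metadata_agent.ini on the network node
[DEFAULT]
# nova-api-metadata now runs locally on the network node
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775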

3) In nova-metadata.conf:

metadata_host = 127.0.0.1
metadata_listen = 127.0.0.1

This way the metadata service listens only on the local 127.0.0.1 address.
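A quick sanity check (my own suggestion, not from the original article) is to confirm on the network node that the metadata API is listening only on the loopback address:

# On the network node; port 8775 is the default nova metadata port
ss -tlnp | grep 8775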

The steps above basically complete the preparation of the Liberty network environment. The next parts will cover the configuration of networks, routers, and external networks, as well as virtual machine creation.

About the author: Zhao Junfeng is an OpenStack development engineer in the cloud computing department of Beijing New Cloud Oriental System Technology Co., Ltd. He works mainly on software development and system architecture design for OpenStack compute, network, and storage services in mixed POWER and x86 environments.
