How to use the metadata service when using an external physical router with an external DHCP service (by quqi99)


Zhang Hua posted: 2015-12-31
Copyright notice: This article may be reproduced freely, but please keep a hyperlink to the original source, the author's information, and this copyright notice (http://blog.csdn.net/quqi99)

Using an external physical router means that you do not use neutron-l3-agent for routing, so you need to specify --router:external=true when defining the network:

neutron net-create phy_net -- --router:external=true --provider:network_type flat --provider:physical_network physnet1

The example above uses a flat network, which requires bridge_mappings = physnet1:br-phy to be configured (bridge_mappings is only valid for flat and VLAN networks).

If bridge_mappings is not configured, creating a virtual machine will fail with a bind_failed error. Another possible cause of a bind_failed error is that agent_down_time in neutron.conf is set too small, so the heartbeat mechanism considers the agent dead and no agent can be found during port binding.
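For reference, a minimal sketch of the heartbeat-related settings (the values shown are the upstream defaults, not taken from this setup; agent_down_time should comfortably exceed report_interval):

/etc/neutron/neutron.conf:
[DEFAULT]
# an agent is declared dead after this many seconds without a heartbeat
agent_down_time = 75
[agent]
# how often each agent reports its state to the server
report_interval = 30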

/etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
tenant_network_types = flat,vlan,gre,vxlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch
[ovs]
bridge_mappings = physnet1:br-phy

The corresponding configuration parameters when installing with DevStack are:

Q_ML2_TENANT_NETWORK_TYPE=flat,vlan,gre,vxlan
OVS_BRIDGE_MAPPINGS=physnet1:br-phy


At present, only the l3-agent and the dhcp-agent provide metadata services.

The metadata namespace proxy identifies metadata traffic from the different tenants and connects to the metadata agent over a UNIX socket. The metadata agent then adds the required HTTP headers and proxies the request to the nova-metadata-api service. If we did not use the metadata service provided by the l3-agent and dhcp-agent and wrote our own program instead, that program would have to do both of these things itself: identify the namespace and pass the HTTP headers.
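To make those two tasks concrete, here is a hedged sketch of the request such a homegrown proxy would have to build. INSTANCE_ID, TENANT_ID and SECRET are placeholders (SECRET being metadata_proxy_shared_secret from metadata_agent.ini), and nova-metadata-api is assumed to listen on its default port 8775:

# sign the instance ID with HMAC-SHA256 so nova-metadata-api can verify the caller
SIG=$(echo -n "$INSTANCE_ID" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
# replay the headers the Neutron metadata agent would normally add
curl -s http://nova-metadata-api:8775/latest/meta-data/ \
  -H "X-Forwarded-For: 172.16.1.100" \
  -H "X-Instance-ID: $INSTANCE_ID" \
  -H "X-Tenant-ID: $TENANT_ID" \
  -H "X-Instance-ID-Signature: $SIG"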

So if we want to use the metadata service at all, we should use the one provided by the l3-agent and the dhcp-agent.

The configuration for the metadata service provided by the dhcp-agent is as follows:

/etc/neutron/dhcp_agent.ini:
enable_isolated_metadata = True
enable_metadata_network = True

$ grep -r '^enable_' /etc/neutron/l3_agent.ini
enable_metadata_proxy = False
Note that enable_isolated_metadata = True above only takes effect when the network is truly isolated; a non-isolated network is one that has a port on the subnet whose IP is the subnet's gateway IP. So there are three ways to deal with it:

  • Create a truly isolated network using --no-gateway: neutron subnet-create net1 172.17.17.0/24 --no-gateway --name=sub1
  • Or do not create a neutron router, and use the external router via --router:external=true: neutron net-create phy_net -- --router:external=true --provider:network_type flat --provider:physical_network physnet1
  • Or force the metadata service on with force_metadata=true (a config sketch follows this list).
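For the third option, a sketch of the dhcp-agent change (force_metadata is a standard dhcp_agent.ini option, though its availability depends on the Neutron release):

/etc/neutron/dhcp_agent.ini:
[DEFAULT]
# serve metadata via the dhcp-agent even on non-isolated networks
force_metadata = True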


The configuration for the metadata service provided by the l3-agent is as follows. (This mode does not use the dhcp-agent, so you can stop the dhcp-agent process and skip the dhcp-agent configuration above, but you then need to set dhcp_agent_notification = False to avoid the dependency; a sketch of that setting follows below.)

enable_metadata_proxy = True
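If you do stop the dhcp-agent, the dependency mentioned above is switched off in neutron.conf (dhcp_agent_notification is a [DEFAULT] option):

/etc/neutron/neutron.conf:
[DEFAULT]
# stop queuing DHCP notifications for an agent that is no longer running
dhcp_agent_notification = False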


When using an external DHCP server, we can still use the l3-agent to provide only the metadata service, while normal L3 traffic still goes through the external router.

neutron subnet-create --allocation-pool start=172.16.1.102,end=172.16.1.126 --gateway 172.16.1.2 phy_net 172.16.1.101/24 --enable_dhcp=false --name=phy_subnet_without_dhcp
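To double-check that Neutron's own DHCP really is off for this subnet (a hedged check using the same CLI generation as the rest of this post):

neutron subnet-show phy_subnet_without_dhcp   # the enable_dhcp field should read False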

Then the virtual machine must be configured with static routes: metadata traffic goes through neutron-l3-agent (via the gateway 172.16.1.2 provided by Neutron), while normal L3 traffic goes through the external router (e.g. the external gateway IP 172.16.1.1). There are two ways to do this:

1. Hard-code the static routes into the guest image when building it.
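As a sketch of option 1, assuming a Debian-style guest image where /etc/network/interfaces is baked in at build time (the route targets are the ones used in this setup):

auto eth0
iface eth0 inet dhcp
    # metadata traffic goes to the Neutron-provided gateway
    post-up ip route add 169.254.169.254/32 via 172.16.1.2 dev eth0
    # everything else goes to the external router
    post-up ip route replace default via 172.16.1.1 dev eth0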

2. Use the static-route feature of the external DHCP server. Taking dnsmasq as an example (option:classless-static-route is DHCP option 121, and 249 is its Microsoft-specific equivalent; each entry is a destination/mask followed by its gateway), the configuration looks like the following:

It is unclear why dnsmasq must be bound to qbr59bbcb56-86 here for the OpenStack VM; binding to br-phy did not work, but qbr59bbcb56-86 did.
sudo ifconfig qbr59bbcb56-86 172.16.1.99/24
sudo dnsmasq --strict-order --bind-interfaces -i qbr59bbcb56-86 --dhcp-range=set:tag0,172.16.1.100,172.16.1.109,2h --dhcp-optsfile=/home/demo/opts -d
$ route -n | grep qbr
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 qbr59bbcb56-86

$ cat /home/demo/opts
tag:tag0,option:classless-static-route,169.254.169.254/32,172.16.1.2,0.0.0.0/0,172.16.1.1
tag:tag0,249,169.254.169.254/32,172.16.1.2,0.0.0.0/0,172.16.1.1
tag:tag0,option:router,172.16.1.1

Ensure that the iptables rules allow the virtual machine to receive DHCP responses from the 172.16.1.0/24 segment:

-A neutron-openvswi-i59bbcb56-8 -s 172.16.1.0/24 -p udp -m udp --sport 67 --dport 68 -j RETURN

-A neutron-openvswi-o59bbcb56-8 -p udp -m udp --sport 68 --dport 67 -m comment --comment "Allow DHCP client traffic." -j RETURN


$ sudo udhcpc eth0
udhcpc (v1.20.1) started
WARN: /usr/share/udhcpc/default.script should not be used in CirrOS. Replaced by cirros-dhcpc.
Sending discover...
Sending select for 172.16.1.100...
Lease of 172.16.1.100 obtained, lease time 7200
WARN: /usr/share/udhcpc/default.script should not be used in CirrOS. Replaced by cirros-dhcpc.
$

$ sudo dnsmasq --strict-order --bind-interfaces -i qbr59bbcb56-86 --dhcp-range=set:tag0,172.16.1.100,172.16.1.109,2h --dhcp-optsfile=/home/demo/opts -d
dnsmasq: started, version 2.68 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
dnsmasq-dhcp: DHCP, IP range 172.16.1.100 -- 172.16.1.109, lease time 2h
dnsmasq-dhcp: DHCP, sockets bound exclusively to interface qbr59bbcb56-86
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 192.168.100.1#53
dnsmasq: read /etc/hosts - 5 addresses
dnsmasq-dhcp: read /home/demo/opts
dnsmasq-dhcp: DHCPDISCOVER(qbr59bbcb56-86) fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPOFFER(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPREQUEST(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c
dnsmasq-dhcp: DHCPACK(qbr59bbcb56-86) 172.16.1.100 fa:16:3e:79:1e:2c


Running sudo cirros-dhcpc up eth0 also succeeds:

$ sudo cirros-dhcpc up eth0
udhcpc (v1.20.1) started


$ ping -c 1 172.16.1.100
PING 172.16.1.100 (172.16.1.100) 56(84) bytes of data.
64 bytes from 172.16.1.100: icmp_seq=1 ttl=64 time=0.400 ms

--- 172.16.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms


$ ssh cirros@172.16.1.100
The authenticity of host '172.16.1.100 (172.16.1.100)' can't be established.
RSA key fingerprint is fe:f2:85:fd:81:96:3c:94:78:a4:be:b0:41:59:ca:37.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.1.100' (RSA) to the list of known hosts.
cirros@172.16.1.100's password:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.1      0.0.0.0         UG    0      0        0 eth0
169.254.169.254 172.16.1.2      255.255.255.255 UGH   0      0        0 eth0
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:79:1e:2c brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.100/24 brd 172.16.1.255 scope global eth0
    inet6 fe80::f816:3eff:fe79:1e2c/64 scope link
       valid_lft forever preferred_lft forever


The virtual machine's metadata traffic reaches the l3-agent node via the static route above. There, the following iptables rule redirects the metadata traffic to the neutron-ns-metadata-proxy process listening on port 9697. (If the dhcp-agent provides the metadata service instead, the proxy listens on port 80, so this rule is not needed.)

At this point, the whole setup works end to end.


$ sudo ip netns exec qrouter-05591292-1191-4f50-9503-215b6962aaec iptables-save | grep 9697
-A neutron-vpn-agen-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-vpn-agen-INPUT -p tcp -m tcp --dport 9697 -j DROP

$ ps -ef | grep metadata
demo      9648  9615  1 09:54 pts/15   00:01:28 python /usr/local/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini
demo     10057     1  0 09:54 ?        00:00:00 /usr/bin/python /usr/local/bin/neutron-ns-metadata-proxy --pid_file=/opt/stack/data/neutron/external/pids/05591292-1191-4f50-9503-215b6962aaec.pid --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy --router_id=05591292-1191-4f50-9503-215b6962aaec --state_path=/opt/stack/data/neutron --metadata_port=9697 --metadata_proxy_user=1000 --metadata_proxy_group=1000 --verbose
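Finally, the whole path can be verified from inside the VM (a hedged check; CirrOS ships a busybox wget):

$ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id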


