OpenStack Neutron LoadBalancer Source Code Analysis (II)

Tags: haproxy

Statement:

This blog may be reprinted, but please keep the original author information and cite the source: http://write.blog.csdn.net/!

Lin Kai

Team: Huawei Hangzhou OpenStack Team


In Neutron LoadBalancer source code analysis (I), we learned that when a tenant creates a pool, member, health monitor, or VIP, the code calls the corresponding create_xxx method of HaproxyNSDriver. When a tenant creates a VIP on a pool, that is, when the code calls the create_vip method of HaproxyNSDriver, the actual deployment of the Neutron LoadBalancer takes place. So what exactly does create_vip do? Let's continue reading:


create_vip in HaproxyNSDriver:
def create_vip(self, vip):
    self._refresh_device(vip['pool_id'])


# Get the logical configuration, then deploy the instance accordingly
def _refresh_device(self, pool_id):
    logical_config = self.plugin_rpc.get_logical_device(pool_id)
    self.deploy_instance(logical_config)

def deploy_instance(self, logical_config):
    # Does actual deploy only if vip and pool are configured and active
    if (not logical_config or
            'vip' not in logical_config or
            (logical_config['vip']['status'] not in
             constants.ACTIVE_PENDING_STATUSES) or
            not logical_config['vip']['admin_state_up'] or
            (logical_config['pool']['status'] not in
             constants.ACTIVE_PENDING_STATUSES) or
            not logical_config['pool']['admin_state_up']):
        return

    if self.exists(logical_config['pool']['id']):
        self.update(logical_config)
    else:
        self.create(logical_config)
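The guard at the top of deploy_instance can be reduced to a small standalone predicate. The sketch below is a simplified stand-in, not Neutron's actual code: the status tuple is an illustrative substitute for constants.ACTIVE_PENDING_STATUSES, and the dict layout mirrors the logical_config shown above.

```python
# Illustrative stand-in for constants.ACTIVE_PENDING_STATUSES
ACTIVE_PENDING_STATUSES = ('ACTIVE', 'PENDING_CREATE', 'PENDING_UPDATE')


def should_deploy(logical_config):
    """Return True only if both the VIP and the pool are configured,
    administratively up, and in an active/pending state."""
    if not logical_config or 'vip' not in logical_config:
        return False
    for key in ('vip', 'pool'):
        obj = logical_config[key]
        if obj['status'] not in ACTIVE_PENDING_STATUSES:
            return False
        if not obj['admin_state_up']:
            return False
    return True


cfg = {'vip': {'status': 'PENDING_CREATE', 'admin_state_up': True},
       'pool': {'status': 'ACTIVE', 'admin_state_up': True}}
print(should_deploy(cfg))  # True
```

Any VIP or pool that is down or in an error state short-circuits the deployment, which is exactly why deploy_instance simply returns in those cases.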



We can see that the real deployment is performed only when the VIP and pool are properly configured and their status is active. At this point the haproxy instance for the pool does not yet exist, so the create path is taken:
def create(self, logical_config):
    pool_id = logical_config['pool']['id']
    namespace = get_ns_name(pool_id)

    self._plug(namespace, logical_config['vip']['port'])
    self._spawn(logical_config)


Two important operations happen here: _plug and _spawn. Let's look at each in turn. First, _plug:
def _plug(self, namespace, port, reuse_existing=True):
    # Update the info of the port that was created in the DB
    # when the VIP was created
    self.plugin_rpc.plug_vip_port(port['id'])
    interface_name = self.vif_driver.get_device_name(Wrap(port))

    # Check whether the device already exists in the namespace;
    # if it does, skip it, otherwise plug it via vif_driver
    if ip_lib.device_exists(interface_name, self.root_helper, namespace):
        if not reuse_existing:
            raise exceptions.PreexistingDeviceFailure(
                dev_name=interface_name
            )
    else:
        self.vif_driver.plug(
            port['network_id'],
            port['id'],
            interface_name,
            port['mac_address'],
            namespace=namespace
        )

    cidrs = ['%s/%s' % (ip['ip_address'],
                        netaddr.IPNetwork(ip['subnet']['cidr']).prefixlen)
             for ip in port['fixed_ips']]
    # Initialize L3 on the interface
    self.vif_driver.init_l3(interface_name, cidrs, namespace=namespace)

    gw_ip = port['fixed_ips'][0]['subnet'].get('gateway_ip')

    # Handle the cases with and without a gateway IP
    if not gw_ip:
        host_routes = port['fixed_ips'][0]['subnet'].get('host_routes', [])
        for host_route in host_routes:
            if host_route['destination'] == "0.0.0.0/0":
                gw_ip = host_route['nexthop']
                break

    if gw_ip:
        # Set the default gateway IP
        cmd = ['route', 'add', 'default', 'gw', gw_ip]
        ip_wrapper = ip_lib.IPWrapper(self.root_helper,
                                      namespace=namespace)
        ip_wrapper.netns.execute(cmd, check_exit_code=False)

        # When delete and re-add the same vip, we need to
        # send gratuitous ARP to flush the ARP cache in the router
        gratuitous_arp = self.conf.haproxy.send_gratuitous_arp
        if gratuitous_arp > 0:
            for ip in port['fixed_ips']:
                cmd_arping = ['arping', '-U',
                              '-I', interface_name,
                              '-c', gratuitous_arp,
                              ip['ip_address']]
                ip_wrapper.netns.execute(cmd_arping, check_exit_code=False)
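The cidrs list comprehension in _plug can be tried in isolation. The sketch below uses the stdlib ipaddress module instead of netaddr (which is what Neutron actually uses), and the port data is illustrative:

```python
import ipaddress

# Illustrative port dict shaped like the one _plug receives
port = {'fixed_ips': [{'ip_address': '10.0.0.5',
                       'subnet': {'cidr': '10.0.0.0/24'}}]}

# Combine each fixed IP with the prefix length of its subnet,
# producing the address/prefix strings passed to init_l3
cidrs = ['%s/%s' % (ip['ip_address'],
                    ipaddress.ip_network(ip['subnet']['cidr']).prefixlen)
         for ip in port['fixed_ips']]
print(cidrs)  # ['10.0.0.5/24']
```

These CIDR strings are what init_l3 assigns to the interface inside the namespace, so the VIP address ends up configured with the correct subnet mask.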




From the code above, when the network device does not exist, vif_driver is used to plug it in. What does this plug operation actually do? Since the default driver is OVSInterfaceDriver, we find its plug method in agent/linux/interface.py:
def plug(self, network_id, port_id, device_name, mac_address,
         bridge=None, namespace=None, prefix=None):
    """Plug in the interface."""
    if not bridge:
        bridge = self.conf.ovs_integration_bridge

    if not ip_lib.device_exists(device_name,
                                self.root_helper,
                                namespace=namespace):
        self.check_bridge_exists(bridge)
        ip = ip_lib.IPWrapper(self.root_helper)

        # Pick the device name: a tap device or a veth pair.
        # A tap device lets a user-space program inject data into the
        # kernel protocol stack; a veth pair reverses the direction of
        # the traffic, feeding the data to be sent back to the kernel
        # network layer as received data, indirectly completing the
        # injection.
        tap_name = self._get_tap_name(device_name, prefix)

        if self.conf.ovs_use_veth:
            # Create ns_dev in a namespace if one is configured.
            root_dev, ns_dev = ip.add_veth(tap_name,
                                           device_name,
                                           namespace2=namespace)
        else:
            ns_dev = ip.device(device_name)

        # Create the port and attach it to br-int.
        # When using a veth device, do not add the port as an
        # internal-type interface.
        internal = not self.conf.ovs_use_veth
        self._ovs_add_port(bridge, tap_name, port_id, mac_address,
                           internal=internal)

        # Set the MAC address, e.g.:
        # ip link set tap452bdfab-31 address fa:16:3e:d7:08:67
        ns_dev.link.set_address(mac_address)

        # Set the MTU
        if self.conf.network_device_mtu:
            ns_dev.link.set_mtu(self.conf.network_device_mtu)
            if self.conf.ovs_use_veth:
                root_dev.link.set_mtu(self.conf.network_device_mtu)

        # Add an interface created by ovs to the namespace.
        if not self.conf.ovs_use_veth and namespace:
            namespace_obj = ip.ensure_namespace(namespace)
            namespace_obj.add_device_to_namespace(ns_dev)

        # Bring the NIC up
        ns_dev.link.set_up()
        if self.conf.ovs_use_veth:
            root_dev.link.set_up()
    else:
        LOG.info(_("Device %s already exists"), device_name)



The plug operation can be understood this way: the LB device needs to be connected to the virtual switch (the OVS integration bridge br-int) in order to communicate with the network and work properly. To connect it, we add a network interface for the device, attach that interface to the switch, and then configure it correctly so the device can work.
You can also see in the code that the NIC is placed into a namespace. What is a namespace used for? If you are interested, see http://blog.csdn.net/preterhuman_peak/article/details/40857117. In short, a namespace provides a container that gives multiple processes a completely separate view of the network stack, including network device interfaces, the IPv4 and IPv6 protocol stacks, IP routing tables, firewall rules, sockets, and so on. Once the NIC is moved into the namespace, the network interface becomes visible to the processes there and can be used.
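Commands executed "inside" a namespace are really just wrapped with ip netns exec, which is what the IPWrapper.netns.execute calls above do under the hood. The helper below is a hypothetical illustration of that wrapping (the function name is mine, not Neutron's); the qlbaas- prefix matches the namespace names the LBaaS agent derives from the pool ID:

```python
def netns_exec_cmd(namespace, cmd):
    """Build the command line that runs `cmd` inside the given network
    namespace, the same shape IPWrapper.netns.execute produces."""
    return ['ip', 'netns', 'exec', namespace] + list(cmd)


# The default-gateway command from _plug, as it would actually run
# inside the pool's namespace (pool ID shortened for illustration)
print(netns_exec_cmd('qlbaas-pool1',
                     ['route', 'add', 'default', 'gw', '10.0.0.1']))
```

Because the route is added through ip netns exec, it lands in the namespace's own routing table and does not disturb the host's routes.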
After the plug operation, the NIC still needs its L3 initialization and a default gateway IP. Many people wonder: why are these settings needed? This is related to the service insertion framework that implements L4/L7 services in Neutron.
Neutron implements a layered "service" plugin structure: on top of the Neutron plugin, multiple services can be started. A service plugin keeps interacting with the database while also sharing the original Neutron plugin's information, such as port information. The FWaaS service, for instance, requires its service agent to run on the network node where the l3-agent resides. The LBaaS haproxy, however, does not need to be installed alongside the l3-agent; instead, a port is created in a dedicated namespace, and that port must be reachable from the host where haproxy runs.

So the LBaaS service does not need to run on the network node, but the port specially prepared for it on the network node must be reachable from the node that actually runs haproxy, and that port must also reach the gateway. This is the so-called floating/in-path mode, and it explains the operations and configuration above.



At this point the _plug operation is complete, but one hole remains to be filled: what does the _spawn method do?

def _spawn(self, logical_config, extra_cmd_args=()):
    pool_id = logical_config['pool']['id']
    namespace = get_ns_name(pool_id)
    conf_path = self._get_state_file_path(pool_id, 'conf')
    pid_path = self._get_state_file_path(pool_id, 'pid')
    sock_path = self._get_state_file_path(pool_id, 'sock')
    user_group = self.conf.haproxy.user_group

    hacfg.save_config(conf_path, logical_config, sock_path, user_group)
    cmd = ['haproxy', '-f', conf_path, '-p', pid_path]
    cmd.extend(extra_cmd_args)

    ns = ip_lib.IPWrapper(self.root_helper, namespace)
    ns.netns.execute(cmd)

    # Remember the pool<>port mapping
    self.pool_to_port_id[pool_id] = logical_config['vip']['port']['id']



The action here is to generate a new haproxy configuration file from the logical configuration and device information, and then start haproxy in the pool's namespace with that configuration. With this, the whole LBaaS setup can use haproxy for load balancing.
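To make the role of hacfg.save_config concrete, here is a heavily simplified, hypothetical stand-in that renders a minimal haproxy frontend/backend from the logical_config shape used throughout this article. The function name and the members key are illustrative; the real hacfg module emits far more directives (timeouts, health checks, session persistence, etc.):

```python
def render_haproxy_cfg(logical_config, sock_path):
    """Render a minimal haproxy config: one frontend bound to the VIP,
    one backend listing the pool members (simplified sketch)."""
    vip = logical_config['vip']
    pool = logical_config['pool']
    lines = [
        'global',
        '    daemon',
        '    stats socket %s mode 0666 level user' % sock_path,
        'frontend %s' % vip['id'],
        '    bind %s:%s' % (vip['address'], vip['protocol_port']),
        '    default_backend %s' % pool['id'],
        'backend %s' % pool['id'],
        '    balance roundrobin',
    ]
    for member in logical_config['members']:
        lines.append('    server %s %s:%s' %
                     (member['id'], member['address'],
                      member['protocol_port']))
    return '\n'.join(lines)


cfg = render_haproxy_cfg(
    {'vip': {'id': 'vip1', 'address': '10.0.0.10', 'protocol_port': 80},
     'pool': {'id': 'pool1'},
     'members': [{'id': 'm1', 'address': '10.0.0.5',
                  'protocol_port': 8080}]},
    '/var/lib/neutron/lbaas/pool1/sock')
print(cfg)
```

The stats socket line matters because the agent later talks to haproxy over that socket to collect statistics; the frontend/backend pair is what actually maps the VIP to the pool members.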
At this point, the module and source code analysis of LBaaS v1.0 comes to an end.
