Neutron-server RPC Service Initialization

Source: Internet
Author: User
Tags: message queue, rabbitmq
Object Relationship Diagram


1. Server side: responsible for handling RPC requests. Message queue listening begins when the service starts; for example, when cinder-volume starts, it launches a MessageHandlingServer (described below) to listen for messages and dispatch them to the corresponding manager method for processing. Taking Cinder as an example, cinder-api is the initiator of the request (the Client) and sends the request over RPC to cinder-scheduler; cinder-scheduler, which handles the request, is the Server. The Server first creates its consumers: the cinder-scheduler Server creates a topic consumer and a fanout consumer for the 'cinder-scheduler' and 'cinder-scheduler:host' topics, so that it can receive the different types of messages. When a consumer is created, the corresponding message queue is created on the broker (QPID or RabbitMQ) and a routing key is declared to bind the queue to an exchange. When the cinder-scheduler service starts, the cinder-scheduler manager is registered as the callback object of the rpc_dispatcher, so requests sent by the Client are ultimately executed by the cinder-scheduler manager object. The Server then starts consumer threads to accept messages from the queues; when a thread receives a message, it hands it to the rpc_dispatcher callback object, which invokes the matching handler function (such as create_volume) according to the message content and processes the request.
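The dispatch step described above can be sketched in miniature. This is a toy illustration, not oslo.messaging itself; FakeDispatcher and FakeSchedulerManager are hypothetical names used only to show how an incoming message is routed to a same-named method on a registered endpoint:

```python
# Toy sketch (NOT oslo.messaging) of how an RPC server routes a decoded
# message to a registered endpoint ("manager") method of the same name.

class FakeDispatcher:
    """Stand-in for rpc_dispatcher: looks up the method on each endpoint."""

    def __init__(self, endpoints):
        self.endpoints = endpoints

    def dispatch(self, context, message):
        method = message['method']
        kwargs = message.get('args', {})
        for endpoint in self.endpoints:
            func = getattr(endpoint, method, None)
            if callable(func):
                return func(context, **kwargs)
        raise AttributeError('no endpoint implements %s' % method)


class FakeSchedulerManager:
    """Stand-in for the cinder-scheduler manager callback object."""

    def create_volume(self, context, volume_id):
        # A real manager would pick a backend and schedule the volume here.
        return 'scheduled %s' % volume_id


dispatcher = FakeDispatcher([FakeSchedulerManager()])
result = dispatcher.dispatch({}, {'method': 'create_volume',
                                  'args': {'volume_id': 'vol-1'}})
print(result)  # scheduled vol-1
```

In the real code path, `message` arrives off the AMQP queue and `context` carries the request's authentication and tenant information; the lookup-by-name dispatch is the same idea.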



2. Client side: responsible for issuing messages. The method-call code lives in the service's API layer (the VolumeAPI in this example) and is conventionally kept in rpcapi.py.
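A minimal sketch of the rpcapi.py pattern follows. The names are hypothetical and the real code uses oslo.messaging's RPCClient over AMQP; here `call` merely builds and records the message dict a real client would publish:

```python
# Toy sketch (NOT oslo.messaging) of the rpcapi.py pattern: the API class
# turns a Python method call into a serializable RPC message.

class FakeRpcClient:
    """Stand-in for an oslo.messaging RPCClient; records the message a real
    client would publish to the server's topic queue."""

    def __init__(self):
        self.sent = []

    def call(self, context, method, **kwargs):
        message = {'method': method, 'args': kwargs}
        self.sent.append(message)
        return message


class VolumeAPI:
    """rpcapi.py-style wrapper: one method per remote procedure."""

    def __init__(self, client):
        self.client = client

    def create_volume(self, context, volume_id):
        return self.client.call(context, 'create_volume',
                                volume_id=volume_id)


client = FakeRpcClient()
api = VolumeAPI(client)
msg = api.create_volume({}, 'vol-1')
```

The key point of the pattern is that callers never touch the transport: they invoke an ordinary Python method, and the rpcapi class decides the method name and arguments that go on the wire.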



Before going into detail, here is the overall object dependency graph:


The serve_rpc function




The most important job of the serve_rpc function is to start the RpcWorker of each plug-in.
neutron/neutron/service.py
1. serve_rpc()



def serve_rpc():
    plugin = manager.NeutronManager.get_plugin()

    if cfg.CONF.rpc_workers < 1:
        cfg.CONF.set_override('rpc_workers', 1)

    # If 0 < rpc_workers then start_rpc_listeners would be called in a
    # subprocess and we cannot simply catch the NotImplementedError.  It is
    # simpler to check this up front by testing whether the plugin supports
    # multiple RPC workers.
    if not plugin.rpc_workers_supported():
        LOG.debug("Active plugin doesn't implement start_rpc_listeners")
        if 0 < cfg.CONF.rpc_workers:
            LOG.error(_LE("'rpc_workers = %d' ignored because "
                          "start_rpc_listeners is not implemented."),
                      cfg.CONF.rpc_workers)
        raise NotImplementedError()

    try:
        rpc = RpcWorker(plugin)

        # dispose the whole pool before os.fork, otherwise there will
        # be shared DB connections in child processes which may cause
        # DB errors.
        LOG.debug('using launcher for rpc, workers=%s', cfg.CONF.rpc_workers)
        session.dispose()
        launcher = common_service.ProcessLauncher(cfg.CONF, wait_interval=1.0)
        launcher.launch_service(rpc, workers=cfg.CONF.rpc_workers)
        return launcher
    except Exception:
        with excutils.save_and_reraise_exception():
            LOG.exception(_LE('Unrecoverable error: please check log for '
                              'details.'))


2. The RpcWorker class:



class RpcWorker(worker.NeutronWorker):
    """Wraps a worker to be handled by ProcessLauncher"""
    def __init__(self, plugin):
        self._plugin = plugin
        self._servers = []

    def start(self):
        super(RpcWorker, self).start()
        self._servers = self._plugin.start_rpc_listeners()

    def wait(self):
        try:
            self._wait()
        except Exception:
            LOG.exception(_LE('done with wait'))
            raise

    def _wait(self):
        LOG.debug('calling RpcWorker wait()')
        for server in self._servers:
            if isinstance(server, rpc_server.MessageHandlingServer):
                LOG.debug('calling wait on %s', server)
                server.wait()
            else:
                LOG.debug('NOT calling wait on %s', server)
        LOG.debug('returning from RpcWorker wait()')

    def stop(self):
        LOG.debug('calling RpcWorker stop()')
        for server in self._servers:
            if isinstance(server, rpc_server.MessageHandlingServer):
                LOG.debug('calling stop on %s', server)
                server.stop()

    @staticmethod
    def reset():
        config.reset_service()
Process Analysis


First, the plugin is loaded according to the core_plugin option in the configuration file; then an RpcWorker is created, the RPC service is launched, and the listeners are started by calling _plugin.start_rpc_listeners().



Take the ML2 plugin as an example. Its start_rpc_listeners method creates an instance of the neutron.plugins.ml2.rpc.RpcCallbacks class and creates a dispatcher that handles the 'q-plugin' topic. ML2 plugin file:



neutron/plugins/ml2/plugin.py
The _setup_rpc function, called from start_rpc_listeners, creates an instance of the neutron.plugins.ml2.rpc.RpcCallbacks class:



    def _setup_rpc(self):
        """Initialize the components to support agent communication."""
        self.endpoints = [
            rpc.RpcCallbacks(self.notifier, self.type_manager),
            securitygroups_rpc.SecurityGroupServerRpcCallback(),
            dvr_rpc.DVRServerRpcCallback(),
            dhcp_rpc.DhcpRpcCallback(),
            agents_db.AgentExtRpcCallback(),
            metadata_rpc.MetadataRpcCallback(),
            resources_rpc.ResourcesPullRpcCallback()
        ]


It then creates a dispatcher handling topic = 'q-plugin' and subscribes as a consumer to the ML2 agent message queue (topics.PLUGIN) in order to receive RPC requests from the agents.



    def start_rpc_listeners(self):
        """Start the RPC loop to let the plugin communicate with agents."""
        self._setup_rpc()
        self.topic = topics.PLUGIN
        self.conn = n_rpc.create_connection(new=True)
        self.conn.create_consumer(self.topic, self.endpoints, fanout=False)
        return self.conn.consume_in_threads()


When Ml2Plugin is initialized, it creates its own message queue toward the agents (as notifier/producer, on topics.AGENT) to send RPC requests to the agents, while subscribing to the ML2 agent message queue (as consumer, on topics.PLUGIN) in order to receive RPC requests from the agents.



Likewise, when an ML2 agent is initialized, it creates its own message queue toward the plugin (as notifier/producer, on topics.PLUGIN) to send RPC requests to the plugin, while subscribing to the Ml2Plugin message queue (as consumer, on topics.AGENT) in order to receive RPC requests from the plugin.



The producer class of a message queue (XxxNotifyApi) and the corresponding consumer class (XxxRpcCallback) define the same interface functions: a function in the producer class does nothing more than make an RPC call to the function of the same name in the consumer class, and the function in the consumer class performs the actual action. For example, a network_delete() function is defined in the XxxNotifyApi class, and a network_delete() function is also defined in the XxxRpcCallback class; XxxNotifyApi.network_delete() calls XxxRpcCallback.network_delete() via RPC, and XxxRpcCallback.network_delete() performs the actual network deletion.
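This naming convention can be illustrated with a toy in-process "bus" (all names here are hypothetical; in a real deployment the cast goes through RabbitMQ via oslo.messaging). The notify class and the callback class expose the same network_delete() signature, and the notifier's method only forwards the call:

```python
# Toy illustration of the XxxNotifyApi / XxxRpcCallback naming convention.
# 'consumers' stands in for the broker: topic -> callback endpoint.

consumers = {}


def create_consumer(topic, endpoint):
    consumers[topic] = endpoint


def cast(topic, method, **kwargs):
    # A real cast serializes the message onto AMQP; here we call directly.
    return getattr(consumers[topic], method)(**kwargs)


class NetworkRpcCallback:
    """Consumer side: performs the actual action."""

    def __init__(self):
        self.deleted = []

    def network_delete(self, network_id):
        self.deleted.append(network_id)


class NetworkNotifyApi:
    """Producer side: same interface, but only forwards over 'RPC'."""

    def network_delete(self, network_id):
        cast('q-agent', 'network_delete', network_id=network_id)


callback = NetworkRpcCallback()
create_consumer('q-agent', callback)        # agent subscribes on its topic
NetworkNotifyApi().network_delete('net-1')  # plugin notifies the agent
```

Because both classes share one interface, reading either side tells you the full RPC surface between plugin and agent.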



Producer class (XxxNotifyApi):
/neutron/neutron/agent/rpc.py



    def update_device_up(self, context, device, agent_id, host=None):
        cctxt = self.client.prepare()
        return cctxt.call(context, 'update_device_up', device=device,
                          agent_id=agent_id, host=host)


Consumer class (XxxRpcCallback):
/neutron/neutron/plugins/ml2/rpc.py



      """Device is up on agent."""
        agent_id = kwargs.get('agent_id')
        device = kwargs.get('device')
        host = kwargs.get('host')
        LOG.debug("Device %(device)s up at agent %(agent_id)s",
                  {'device': device, 'agent_id': agent_id})
        plugin = manager.NeutronManager.get_plugin()
        port_id = plugin._device_to_port_id(rpc_context, device)
        if (host and not plugin.port_bound_to_host(rpc_context,
                                                   port_id, host)):
            LOG.debug("Device %(device)s not bound to the"
                      " agent host %(host)s",
                      {'device': device, 'host': host})
            return

        port_id = plugin.update_port_status(rpc_context, port_id,
                                            n_const.PORT_STATUS_ACTIVE,
                                            host)
        try:
            # NOTE(armax): it's best to remove all objects from the
            # session, before we try to retrieve the new port object
            rpc_context.session.expunge_all()
            port = plugin._get_port(rpc_context, port_id)
        except exceptions.PortNotFound:
            LOG.debug('Port %s not found during update', port_id)
        else:
            kwargs = {
                'context': rpc_context,
                'port': port,
                'update_device_up': True
            }
            registry.notify(
                resources.PORT, events.AFTER_UPDATE, plugin, **kwargs)


The methods in the RpcCallbacks class correspond to the methods of neutron.agent.rpc.PluginApi.




