Sesame HTTP: Ansible Extension

Source: Internet
Author: User
Tags: shebang, ansible, modules

Introduction to Ansible

 

Ansible is an operations and maintenance (O&M) tool written in Python. My work requires frequent contact with Ansible, and I often integrate things into it, so I have become more and more familiar with it.

So what is Ansible? In my understanding: previously, I had to log on to a server and run a series of commands to complete an operation; Ansible runs those commands for me. Beyond that, Ansible can control multiple machines and orchestrate tasks across them, which is what Ansible calls a playbook.

How does Ansible do it? Put simply, Ansible generates a script for the command to be executed, uploads that script over sftp to the server where the command should run, then executes it over ssh and returns the result.
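As a rough local sketch of that flow (hypothetical names and a local copy standing in for the real sftp/ssh transfer that Ansible performs):

```python
import json
import os
import shutil
import subprocess
import sys
import tempfile

def run_like_ansible(module_source, remote_dir):
    """Mimic Ansible's flow locally: write the generated module script,
    'upload' it to a staging directory, execute it, and parse the JSON
    result it prints. Purely illustrative - real Ansible transfers the
    script with sftp and runs it over ssh."""
    # 1. Generate the script from the module (here it is given to us).
    local = tempfile.NamedTemporaryFile('w', suffix='.py', delete=False)
    local.write(module_source)
    local.close()
    # 2. "Upload" - a local copy stands in for the sftp transfer.
    remote_path = os.path.join(remote_dir, 'generated_module.py')
    shutil.copy(local.name, remote_path)
    # 3. Execute - a local subprocess stands in for the ssh session.
    out = subprocess.check_output([sys.executable, remote_path])
    # 4. Clean up the temporary files and return the parsed result.
    os.unlink(local.name)
    os.unlink(remote_path)
    return json.loads(out)
```

The JSON-on-stdout convention in step 4 mirrors how real Ansible modules report their results back to the controller.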

Then how is this implemented? The following walks through how Ansible executes a module, from the perspective of modules and plug-ins.

PS: The following analysis is based on hands-on experience with Ansible, with conclusions drawn from reading the source code. This article therefore assumes some familiarity with Ansible concepts such as inventory, modules, and playbooks.

 

Ansible Module

 

A module is the smallest unit of execution in Ansible. It can be written in Python, shell, or other languages. A module defines the concrete operation steps and the parameters it needs in actual use.
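In the end, the generated script for a Python module just reads its arguments and prints a JSON result. A minimal standalone sketch (not a real Ansible module, which would use the AnsibleModule helper class):

```python
import json
import sys

def main(params):
    """A toy 'module': take a params dict, do the work, return a result dict."""
    name = params.get('name', 'world')
    # A real module would perform its operation here (install a package,
    # copy a file, ...) and set 'changed' accordingly.
    return {'changed': False, 'msg': 'hello %s' % name}

if __name__ == '__main__':
    # Old-style modules receive k=v pairs from an args file and
    # WANT_JSON modules receive a path to a JSON file; for simplicity
    # this sketch just reads JSON from the first argument.
    params = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
    print(json.dumps(main(params)))
```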

What actually runs on the target node is an executable script generated from the module.

Then how does Ansible upload that script to the server, execute it, and obtain the result?

 

Ansible Plug-ins

Connection plug-in

The connection plug-in connects to the specified server using the given ssh parameters and exposes the interface that actually executes commands.

Shell plug-in

The shell plug-in generates, according to the shell type (e.g. sh), the command line that the connection plug-in will execute.
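Schematically, the shell plug-in glues the environment string, the interpreter from the shebang, and the script path into a single command, roughly like this (a simplified sketch, not the real build_module_command implementation):

```python
def build_module_command(env_string, shebang, cmd, rm_tmp=None):
    """Sketch of what a shell plug-in's build_module_command does:
    prefix the environment assignments, run the script with the
    interpreter taken from its shebang line, and optionally remove
    the temporary directory afterwards."""
    interpreter = shebang.replace('#!', '').strip()
    parts = ['%s %s %s' % (env_string, interpreter, cmd)]
    if rm_tmp:
        # Chain the cleanup so it runs in the same remote session.
        parts.append('rm -rf %s > /dev/null 2>&1' % rm_tmp)
    return '; '.join(parts).strip()
```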

Strategy plug-in

The strategy plug-in controls the execution policy. The default is the linear strategy: tasks are executed one after another, and each task is handed off to the executor to run.
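The linear strategy can be sketched in a few lines (a simplification; the real strategy dispatches work to worker processes and handles failures, serial batches, etc.):

```python
def linear_run(tasks, hosts, executor):
    """Sketch of the linear strategy: finish each task on every host
    before moving on to the next task."""
    results = []
    for task in tasks:
        for host in hosts:
            # The real strategy hands (host, task) to a worker process;
            # here we call the executor directly.
            results.append(executor(host, task))
    return results
```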

Action plug-in

The action plug-in essentially defines all the actions of a task's module. If a module has no specially written action plug-in, it uses normal or async by default (chosen according to whether the module runs asynchronously). These two plug-ins define the module's execution steps: creating a temporary file locally, uploading it, executing the script, deleting it, and so on. To add special steps for all modules, extend the action plug-in.

 

Ansible Module Execution Process

 

  • Start from run() in task_queue_manager.py
  • Extending Ansible in practice

     

    Extending the Python environment on execution nodes

    In practice, some of the Ansible modules we use need third-party libraries, but installing those libraries on every node is hard to manage. Since executing a module essentially means running the generated script in the node's Python environment, our solution is to specify the Python environment on the node and share a single Python environment over an NFS share on the LAN. By extending the action plug-in, we mount the NFS share on the node before execution and unmount it when execution completes. The implementation steps are as follows:

    Extension code:

    Override the _execute_module method of ActionBase

# _execute_module
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json
import pipes

from ansible.compat.six import text_type, iteritems
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.release import __version__

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class MagicStackBase(object):

    def _mount_nfs(self, ansible_nfs_src, ansible_nfs_dest):
        cmd = ['mount', ansible_nfs_src, ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _umount_nfs(self, ansible_nfs_dest):
        cmd = ['umount', ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=True):
        '''
        Transfer and run a module along with its arguments.
        '''
        # display.v(task_vars)
        if task_vars is None:
            task_vars = dict()

        # if a module name was not specified for this execution, use
        # the action from the task
        if module_name is None:
            module_name = self._task.action
        if module_args is None:
            module_args = self._task.args

        # set check mode in the module arguments, if required
        if self._play_context.check_mode:
            if not self._supports_check_mode:
                raise AnsibleError("check mode is not supported for this operation")
            module_args['_ansible_check_mode'] = True
        else:
            module_args['_ansible_check_mode'] = False

        # Get the connection user for permission checks
        remote_user = task_vars.get('ansible_ssh_user') or self._play_context.remote_user

        # set no log in the module arguments, if required
        module_args['_ansible_no_log'] = self._play_context.no_log or C.DEFAULT_NO_TARGET_SYSLOG

        # set debug in the module arguments, if required
        module_args['_ansible_debug'] = C.DEFAULT_DEBUG

        # let module know we are in diff mode
        module_args['_ansible_diff'] = self._play_context.diff

        # let module know our verbosity
        module_args['_ansible_verbosity'] = display.verbosity

        # give the module information about the ansible version
        module_args['_ansible_version'] = __version__

        # set the syslog facility to be used in the module
        module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)

        # let module know about filesystems that selinux treats specially
        module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS

        (module_style, shebang, module_data) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
        if not shebang:
            raise AnsibleError("module (%s) is missing interpreter line" % module_name)

        # get nfs info for mount python packages
        ansible_nfs_src = task_vars.get("ansible_nfs_src", None)
        ansible_nfs_dest = task_vars.get("ansible_nfs_dest", None)

        # a remote tmp path may be necessary and not already created
        remote_module_path = None
        args_file_path = None
        if not tmp and self._late_needs_tmp_path(tmp, module_style):
            tmp = self._make_tmp_path(remote_user)

        if tmp:
            remote_module_filename = self._connection._shell.get_remote_filename(module_name)
            remote_module_path = self._connection._shell.join_path(tmp, remote_module_filename)
            if module_style in ['old', 'non_native_want_json']:
                # we'll also need a temp file to hold our module arguments
                args_file_path = self._connection._shell.join_path(tmp, 'args')

        if remote_module_path or module_style != 'new':
            display.debug("transferring module to remote")
            self._transfer_data(remote_module_path, module_data)
            if module_style == 'old':
                # we need to dump the module args to a k=v string in a file on
                # the remote system, which can be read and parsed by the module
                args_data = ""
                for k, v in iteritems(module_args):
                    args_data += '%s=%s ' % (k, pipes.quote(text_type(v)))
                self._transfer_data(args_file_path, args_data)
            elif module_style == 'non_native_want_json':
                self._transfer_data(args_file_path, json.dumps(module_args))
            display.debug("done transferring module to remote")

        environment_string = self._compute_environment_string()

        remote_files = None
        if args_file_path:
            remote_files = tmp, remote_module_path, args_file_path
        elif remote_module_path:
            remote_files = tmp, remote_module_path

        # Fix permissions of the tmp path and tmp files.  This should be
        # called after all files have been transferred.
        if remote_files:
            self._fixup_perms2(remote_files, remote_user)

        # mount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._mount_nfs(ansible_nfs_src, ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("mount nfs failed!!! {0}".format(result['stderr']))

        cmd = ""
        in_data = None

        if self._connection.has_pipelining and self._play_context.pipelining and not C.DEFAULT_KEEP_REMOTE_FILES and module_style == 'new':
            in_data = module_data
        else:
            if remote_module_path:
                cmd = remote_module_path

        rm_tmp = None
        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if not self._play_context.become or self._play_context.become_user == 'root':
                # not sudoing or sudoing to root, so can cleanup files in the same step
                rm_tmp = tmp

        cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path, rm_tmp=rm_tmp)
        cmd = cmd.strip()

        sudoable = True
        if module_name == "accelerate":
            # always run the accelerate module as the user
            # specified in the play, not the sudo_user
            sudoable = False

        res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)

        # umount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._umount_nfs(ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("umount nfs failed!!! {0}".format(result['stderr']))

        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if self._play_context.become and self._play_context.become_user != 'root':
                # not sudoing to root, so maybe can't delete files as that other user
                # have to clean up temp files as original user in a second step
                tmp_rm_cmd = self._connection._shell.remove(tmp, recurse=True)
                tmp_rm_res = self._low_level_execute_command(tmp_rm_cmd, sudoable=False)
                tmp_rm_data = self._parse_returned_data(tmp_rm_res)
                if tmp_rm_data.get('rc', 0) != 0:
                    display.warning('Error deleting remote temporary files (rc: {0}, stderr: {1})'.format(tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))

        # parse the main result
        data = self._parse_returned_data(res)

        # pre-split stdout into lines, if stdout is in the data and there
        # isn't already a stdout_lines value there
        if 'stdout' in data and 'stdout_lines' not in data:
            data['stdout_lines'] = data.get('stdout', u'').splitlines()

        display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
        return data

    Integrate it into normal.py and async.py, and remember to configure these two plug-ins in ansible.cfg.

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase
from ansible.utils.vars import merge_hash

from common.ansible_plugins import MagicStackBase


class ActionModule(MagicStackBase, ActionBase):

    def run(self, tmp=None, task_vars=None):
        if task_vars is None:
            task_vars = dict()

        results = super(ActionModule, self).run(tmp, task_vars)
        # remove as modules might hide due to nolog
        del results['invocation']['module_args']
        results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars))
        # Remove special fields from the result, which can only be set
        # internally by the executor engine. We do this only here in
        # the 'normal' action, as other action plugins may set this.
        #
        # We don't want modules to determine that running the module fires
        # notify handlers.  That's for the playbook to decide.
        for field in ('_ansible_notify',):
            if field in results:
                results.pop(field)

        return results
    • Configure ansible.cfg so that the extended plug-ins are used as Ansible's action plug-ins.
    • Rewrite the plug-in method, focusing on _execute_module.
    • When running a command, specify the Python interpreter and pass the NFS mount source and mount point as extra variables:
      ansible 51 -m mysql_db -a "state=dump name=all target=/tmp/test.sql" -i hosts -u root -v -e "ansible_nfs_src=172.16.30.170:/web/proxy_env/lib64/python2.7/site-packages ansible_nfs_dest=/root/.pyenv/versions/2.7.10/lib/python2.7/site-packages ansible_python_interpreter=/root/.pyenv/versions/2.7.10/bin/python"

       
