Introduction to the Python Automated Deployment Tool Fabric


Fabric is an automated deployment tool that helps reduce repetitive and cumbersome manual operations on production machines. For operations engineers and developers at the many small companies that lack a mature operations platform, it is close to indispensable.


1. What is Fabric?


The official Fabric documentation describes it as follows:
Fabric is a Python (2.5-2.7) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
In other words, Fabric is a Python library that can remotely operate any machine that supports SSH access (for example, running shell commands on host2 from host1). And because Fabric is a Python package, other Python packages can be imported into its fabfile.py script, which makes Fabric-based deployment scripts more powerful and more maintainable than shell-based automated deployment scripts, to say nothing of typing the commands by hand.



In the area of system operations and deployment automation there are many tools similar to Fabric (such as Puppet and Chef); see [1] for an overview of cloud tools for infrastructure automation.



Installing Fabric is very convenient: pip install fabric is all it takes.
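A minimal install and sanity check (note: this article describes the Fabric 1.x API for Python 2, so on newer systems you may need to pin the version, e.g. pip install "fabric<2"):

$ pip install fabric
$ fab --version    # verify that the fab command-line tool is available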


2. Operations supported by Fabric


Common operations supported by Fabric are listed below (see the combined sketch after this list):
1) local: run a command on the local system.
It is a wrapper around the subprocess module (with shell=True); the command's output can be captured or echoed by setting capture=True/False.
2) run: run a shell command on a remote host.
The return value of this command contains information such as whether the remote command executed successfully and its return code. When executing a command through run, you are usually asked to enter the target machine's password; when deploying to multiple machines, you can set env.passwords to avoid typing passwords manually (the specific setup is described later in this article).
3) get: download one or more files from a remote host.
4) put: upload one or more files to a remote host.
5) sudo: run a shell command on a remote host with superuser privileges.
It is similar to run, but temporarily elevates the current user's privileges to execute commands that require root.
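As an illustration, a minimal fabfile sketch that exercises all five operations (the host paths and the service name are placeholders):

from fabric.api import local, run, get, put, sudo

def demo():
    local('uname -a')                            # runs on the machine where fab was invoked
    result = run('ls /var/www')                  # runs on the remote host over SSH
    print result.succeeded, result.return_code   # the return value carries execution status
    get('/var/log/app.log', 'app.log')           # download a file from the remote host
    put('app.conf', '/tmp/app.conf')             # upload a file to the remote host
    sudo('service nginx restart')                # run with superuser privileges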



In addition, there are some less commonly used operations (such as prompt, reboot, open_shell and require) that are not covered here; interested readers can refer to the Fabric operations documentation [4].



It is important to note that each time Fabric executes a remote task through run or sudo, it creates a new SSH connection; consecutive calls share no state. So when a task requires multiple dependent steps, the commands must go on the same line, joined with ; or &&. An example:
Suppose we want to cd into the /home/work/tmp directory on the remote machine and then create a test directory there. The following commands do not achieve the intended purpose:

run('cd /home/work/tmp')
run('mkdir test')    ## the second run creates a new SSH connection and does not remember the earlier cd!
Instead, use one of the following:

run('cd /home/work/tmp; mkdir test')
run('cd /home/work/tmp && mkdir test')


Of course, you can also use the cd context manager provided by Fabric [5]:

with cd('/home/work/tmp'):
    run('mkdir test')


The basic operations supported by Fabric have been introduced above; how, then, do we build complex functionality on top of them?
In Fabric, a group of logically related operations is usually encapsulated as a task, and fab executes commands at the granularity of tasks. The following section describes how to define one.

3. Defining tasks in a fabfile

3.1 What is a fabfile?
By Fabric's convention, when you run a command such as "fab deploy", fab searches by default for a Python file named fabfile.py or a package named fabfile. Fabric-based deployment scripts are therefore usually named fabfile.py and placed in the current working directory, where fab can find them, and the desired tasks are implemented in that file. If the deployment tasks are complex, they can also be split across multiple scripts placed under a fabfile package. For details, see the official document Fabfile construction and use.
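For the package form, a sketch of a possible layout (the module names here are hypothetical; fab collects tasks from fabfile/__init__.py and whatever it imports):

fabfile/
    __init__.py    # import the submodules here so fab can discover their tasks
    deploy.py      # e.g. release-related tasks
    db.py          # e.g. database-related tasks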

3.2 Defining tasks
Syntactically, Fabric offers two ways to define tasks:
1) Classic method
Any callable object (such as a function or a class) defined in the fabfile can be executed as a task by fab. This method does not support nesting: if fabfile.py imports other modules, the callable objects defined in those modules are not treated as fab tasks, because they are not defined directly in the fabfile.

Below is an example of a task defined in the classic way (taken from Fabric Overview and Tutorial [3]):

from fabric.api import local

def prepare_deploy():
    local("./manage.py test my_app")
    local("git add -p && git commit")
    local("git push")
The sample code above defines an ordinary function, prepare_deploy, in fabfile.py. Its purpose is easy to see: run the tests locally, then commit and push the latest local code to the version control system, so that subsequent deployments can be made from that codebase.

2) New style task based on Task class
Introduced in Fabric 1.1, this style stipulates that every fab task must be an instance or subclass of the Task class. Its biggest advantage is support for nested namespaces: tasks can be defined in other files, and once fabfile.py imports such a file, the tasks defined there are recognized and supported by fab as well.

There are two ways to implement a new-style task: a) define a subclass of Task and implement its run() method; b) use the @task decorator. Examples follow:

from fabric.api import run, sudo
from fabric.tasks import Task

class MyTask(Task):
    name = "deploy"    ## the task name displayed in the output of fab --list
    def run(self, environment, domain="whatever.com"):
        run("git clone foo")
        sudo("service apache2 restart")

instance = MyTask()
The above example is equivalent to defining a task with @task:

from fabric.api import task, run, sudo

@task
def deploy(environment, domain="whatever.com"):
    run("git clone foo")
    sudo("service apache2 restart")
A function decorated with @task is wrapped in the Task class by default; you can also have it use a custom subclass. For specifics, see the "Using custom subclasses with @task" section of Defining tasks [6], sketched below.
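A sketch of that usage, adapted from the pattern in the official docs (the class name and the extra myarg parameter are illustrative):

from fabric.api import task
from fabric.tasks import Task

class CustomTask(Task):
    def __init__(self, func, myarg, *args, **kwargs):
        super(CustomTask, self).__init__(*args, **kwargs)
        self.func = func
        self.myarg = myarg    # extra per-task configuration

    def run(self, *args, **kwargs):
        return self.func(*args, **kwargs)

@task(task_class=CustomTask, myarg='value', alias='at')
def actual_task():
    pass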

Special attention:

The two definition styles are mutually exclusive! Specifically, if Fabric finds any new-style, Task-based definition in the fabfile or in a file it imports, then all tasks defined in the classic way are ignored. Personally, I think that for automated deployment of a complex system it is best to define tasks new-style: because this style supports nested namespaces, different tasks can be organized in layers across separate script files, which is much easier to maintain.

Note: you can run "fab --list" to view the tasks that Fabric recognizes.
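For the deploy task defined above, the output looks roughly like this (formatting from memory of Fabric 1.x):

$ fab --list
Available commands:

    deploy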

Once tasks are defined, how does Fabric execute them? In particular, when deploying to multiple remote machines, how do we manage those machines well (roles, passwords, and so on)?

These issues are explained in the second part below.

References
[1] 48 Best Cloud Tools for Infrastructure Automation
[2] Deployment Management Tools: Chef vs. Puppet vs. Ansible vs. SaltStack vs. Fabric
[3] Fabric Doc: Overview and Tutorial
[4] Fabric Doc: Operations
[5] Fabric Doc: Context Managers
[6] Fabric Doc: Defining tasks

1. Fabric's task execution rules
According to the Fabric Execution model, fab runs tasks serially by default:
1) The task objects defined in the fabfile and its imported files are created in sequence (the objects are only created, not yet executed), preserving their definition order.
2) For each task, a list of target machines on which it will run is generated.
3) fab executes the tasks in the order specified; each task runs on its designated target machines in turn, exactly once per machine.
4) A task with no specified target machines runs as a local task, and only once.

Suppose the following tasks are defined in fabfile.py:

from fabric.api import run, env

env.hosts = ['host1', 'host2']

def taskA():
    run('ls')

def taskB():
    run('whoami')
Running fab --list in the terminal shows the two tasks taskA and taskB. Run them:

$ fab taskA taskB
The results look like this:

taskA executed on host1
taskA executed on host2
taskB executed on host1
taskB executed on host2
This example should make Fabric's default serial execution strategy clear.

Fabric also allows tasks to run on multiple machines in parallel (using the multiprocessing module to run several processes concurrently), and even lets some tasks in a fabfile run in parallel while others keep the default serial mode. Specifically, the @parallel and @serial decorators set a task's execution mode, and the -P command-line flag requests parallel execution globally. An example:

from fabric.api import *

@parallel
def runs_in_parallel():
    pass

def runs_serially():
    pass
When running the following command:

$ fab -H host1,host2,host3 runs_in_parallel runs_serially
Examples of execution results are as follows:

runs_in_parallel on host1, host2, and host3
runs_serially on host1
runs_serially on host2
runs_serially on host3
In addition, you can pass the pool_size parameter to @parallel to cap the number of concurrent processes, so that parallel execution does not drag the machine down. For example:
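A brief sketch (the task body is illustrative):

from fabric.api import run, parallel

@parallel(pool_size=5)    # run on at most 5 hosts concurrently
def heavy_task():
    run('tar czf /tmp/logs.tar.gz /var/log')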

2. Specify the target machine for the task
There are multiple ways to specify the target machine on which the task will be run, as explained below.
1) Globally specify through env.hosts or env.roles
Fabric's env module defines a series of global variables that can be understood as environment variables controlling Fabric's behavior. Among them, env.hosts and env.roles specify a task's target machine list globally; both default to the empty list [].

The elements of env.hosts are "host strings" in Fabric's convention: each host string consists of three parts, username@hostname:port, where the username and port parts may be omitted. The first code example in this part already showed how to use env.hosts to specify the target machine list globally, so that is not repeated here.
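For illustration, a few valid host strings (the names are hypothetical):

env.hosts = ['web1', 'deploy@web2', 'deploy@web3:2202']    # username and port are optional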

env.roles is only useful once env.roledefs is configured. In many deployments different machines play different roles: some form the access layer, some the business layer, some the data storage layer. env.roledefs organizes the machine lists so as to reflect those roles. An example:

from fabric.api import env, roles, run

env.roledefs = {
    'web': {
        'hosts': ['www1', 'www2', 'www3'],
    },
    'db': {
        'hosts': ['db1', 'db2'],
    },
}

@roles('web')
def mytask():
    run('ls /var/www')
The example above uses env.roledefs to configure two roles, web and db, containing 3 and 2 machines respectively, and uses @roles to specify the target machine list for mytask.

2) Specifying globally on the command line

$ fab -H host1,host2 mytask
Note that the machine list given via the -H flag is interpreted before the fabfile script is loaded, so if env.hosts or env.roles is reassigned in the fabfile, the list specified on the command line is overwritten. To have the fabfile extend the command-line arguments rather than override them, use list.extend(). An example:

from fabric.api import env, run

env.hosts.extend(['host3', 'host4'])

def mytask():
    run('ls /var/www')
Now when we run "fab -H host1,host2 mytask", env.hosts contains all 4 machines from the command line and the fabfile.

3) Specify the machine list for each task through the command line

$ fab mytask:hosts="host1;host2"
This form overrides any globally specified machine list, ensuring that mytask executes only on host1 and host2.

4) Use the decorator @hosts to specify the target machine for each task

from fabric.api import hosts, run

@hosts('host1', 'host2')
def mytask():
    run('ls /var/www')

## or:
my_hosts = ('host1', 'host2')
@hosts(my_hosts)
def mytask():
    # ...


A machine list specified per-task with the @hosts decorator overrides the global target machine list, but does not override a per-task list given on the command line.

The priority rules between the above four ways to specify the target machine list for the task are summarized as follows:
1) Per-task, command-line host lists (fab mytask:host=host1) override absolutely everything else.
2) Per-task, decorator-specified host lists (@hosts('host1')) override the env variables.
3) Globally specified host lists set in the fabfile (env.hosts = ['host1']) can override such lists set on the command line, but only if you're not careful (or want them to.)
4) Globally specified host lists set on the command line (--hosts=host1) will initialize the env variables, but that's it.

As shown, Fabric lets us mix and match these ways of designating target machines, but we need to make sure the combined result is what we expect.

In addition, by default Fabric deduplicates target machines that appear multiple times from different sources. You can disable this by setting env.dedupe_hosts to False, and you can even specify a list of machines for a task to skip. For details, refer to the description in the Fabric Execution model documentation; a sketch follows.
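A brief sketch of both knobs (assuming the Fabric 1.x env variables):

from fabric.api import env

env.dedupe_hosts = False       # keep duplicate hosts instead of merging them
env.exclude_hosts = ['host1']  # hosts to skip (also settable via fab -x host1)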

3. Managing target machine passwords during task execution
If you run the example code above yourself, you will find that every time a task executes on a remote target machine, Fabric asks for that machine's login name and password. When executing tasks on many machines, can this password entry be automated?

The answer is yes. There are two ways of implementation, which are described separately below.

1) Configure target machine login information via env.password or env.passwords
The following example illustrates how to configure login information for multiple machines through env.passwords:

#!/bin/env python
# -*- encoding: utf-8 -*-
from fabric.api import run, env, hosts

## Note: these host strings must contain all three parts, username@host:port;
## none may be omitted, otherwise you will still be prompted for a password at runtime.
## (the usernames were obfuscated in the source; 'work' is a placeholder)
env.passwords = {
    'work@10.123.11.209:22': 'xxx',
    'work@10.123.11.210:23': 'yyy',
}

@hosts('10.123.11.209', '10.123.11.210')
def host_os_type():
    run('uname -a')




However, specifying login names and passwords in plain text like this is a security risk, so Fabric also supports SSH key authentication for password-free task execution on remote machines.

Concretely, you generate an SSH key pair, install the public key on the target machine, configure the host in your ~/.ssh/config file, and then set env.use_ssh_config to True in the fabfile that defines the tasks. Fabric then authenticates with the SSH key, and tasks execute remotely without a password.
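A sketch of that setup (the host alias, username and key path are hypothetical):

# entry in ~/.ssh/config on the machine running fab,
# after installing the public key on the target (e.g. via ssh-copy-id):
#
#   Host web1
#       HostName 10.123.11.209
#       User work
#       IdentityFile ~/.ssh/id_rsa

from fabric.api import env, run

env.use_ssh_config = True    # let Fabric honor ~/.ssh/config
env.hosts = ['web1']

def host_os_type():
    run('uname -a')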


