How to write a robust bash script (experience sharing) - Linux Shell


Shell scripts can do a great deal of damage when something goes wrong while they run.

This article describes some of the techniques that make bash scripts robust.

Using set -u

How many times has a script crashed because a variable was not initialized? For me, many times.

chroot=$1
...
rm -rf $chroot/usr/share/doc

If the code above is run without an argument, it will not just remove the documentation from the chroot, it will remove all documentation from the system. So what should we do? The good news is that bash provides set -u, which makes bash exit automatically when an uninitialized variable is used.

You can also use set -o nounset, which is more readable.

The code is as follows:

david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david%
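
As a minimal sketch (assuming the rest of shrink-chroot.sh, which the article does not show), the top of the script could look like this:

#!/bin/bash
set -u                          # or, more readably: set -o nounset
chroot=$1                       # with set -u, a missing argument aborts the script here
rm -rf "$chroot"/usr/share/doc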

Using set -e

Every script you write should begin with set -e. This tells bash to exit the script if any statement returns a non-true value. The advantage of using -e is that it keeps an error from snowballing into a serious problem and catches errors as early as possible. A more readable version: set -o errexit

Using -e frees you from checking errors yourself; if you forget to check, bash does it for you. However, you can no longer use $? to inspect a command's exit status, because bash never gets that far when a command returns a non-zero value. You can use a different structure instead:

command

if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi

You can replace it with:

command || { echo "command failed"; exit 1; }

or use:

if ! command; then echo "command failed"; exit 1; fi

What if you must run a command that returns a non-zero value, or you are simply not interested in its return value? You can use command || true, or, for a longer stretch of code, temporarily turn off error checking, although I recommend you use this sparingly:

set +e
command1
command2
set -e

The documentation states that, by default, bash returns the exit status of the last command in a pipeline, which may not be what you want. For example, false | true is considered a successful command. If you want such a pipeline to be treated as a failure, use set -o pipefail.
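
A minimal illustration of the difference (this snippet is not from the original article):

set -e
false | true          # without pipefail the pipeline returns 0, so the script keeps going
echo "still running"
set -o pipefail
false | true          # now the pipeline returns the exit status of false, and set -e exits here
echo "never reached"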

Defensive programming - consider unexpected things

Your script may run under "unexpected" conditions, such as files that are missing or directories that have not been created. You can do things to guard against these errors. For example, mkdir returns an error if the parent directory of the directory you want to create does not exist. If you add the -p option, mkdir creates the needed parent directories before creating the requested one. Another example is the rm command: if you ask it to delete a file that does not exist, it complains and the script stops working (because the -e option is used, right?). You can use the -f option to solve this and let the script keep going when the file does not exist.
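
For example (the paths here are only placeholders):

mkdir -p /var/lib/myapp/cache        # also creates /var/lib/myapp if it does not exist yet
rm -f /var/lib/myapp/cache/old.tmp   # returns 0 even when the file is already gone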

Be prepared for spaces in filenames

Some people do use spaces in filenames or command-line arguments, and you need to keep this in mind when writing scripts. Above all, remember to surround variables with quotes.

if [ $filename = "foo" ];

When the $filename variable contains a space, this breaks. You can solve it like this:

If ["$filename" = "foo"];

When you use the $@ variable, you also need to quote it, because two arguments separated by a space would otherwise be interpreted as two separate words.

The code is as follows:

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux

I can't think of a case where you wouldn't want to use "$@", so when in doubt, using the quotes does no harm.

If you use find and xargs together, you should use -print0 so that filenames are separated by a null character instead of newlines.

The code is as follows:

david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar

Set traps

When a script dies, the file system can be left in an unknown state: a stale lock file, leftover temporary files, or one file updated while the next one was not because the script hung in between. It would be great if you could fix these problems, either by deleting the lock file or by rolling back to a known state when the script runs into trouble. Fortunately, bash provides a way to run a command or function when it receives a UNIX signal: the trap command.

trap command signal [signal ...]

Multiple signals can be listed (the full list can be obtained with kill -l), but to clean up after ourselves we only use three of them: INT, TERM and EXIT. A trap can be restored to its original state by using - in place of the command.

Signal    Description

INT
Interrupt - triggered when someone terminates the script with Ctrl-C

TERM
Terminate - triggered when someone kills the script process with kill

EXIT
Exit - a pseudo-signal that is triggered when the script exits normally, or after an error when set -e is in effect
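
As a minimal sketch (not taken from the original article), the same mechanism can be used to clean up a temporary file:

tempfile=$(mktemp) || exit 1
trap "rm -f $tempfile; exit" INT TERM EXIT
# ... work that writes to $tempfile ...
rm -f $tempfile
trap - INT TERM EXIT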

When you use a lock file, you can write it like this:

The code is as follows:

if [ ! -e $lockfile ]; then
    touch $lockfile
    critical-section
    rm $lockfile
else
    echo "critical-section is already running"
fi

What happens if the script process is killed while the most important part (critical-section) is running?
The lock file is left behind, and the script will never run again until it is deleted.

Workaround:

The code is as follows:

if [ ! -e $lockfile ]; then
    trap "rm -f $lockfile; exit" INT TERM EXIT
    touch $lockfile
    critical-section
    rm $lockfile
    trap - INT TERM EXIT
else
    echo "critical-section is already running"
fi

Now the lock file is deleted along with the process when it is killed. Note that the script exits explicitly inside the trap command; otherwise the script would resume from the point where the signal was received.

Race conditions (Wikipedia)

In the lock file example above, there is a race condition that has to be pointed out: it exists between checking for the lock file and creating it. A workable solution is to redirect output to a file that must not already exist, using IO redirection and bash's noclobber mode (Wikipedia).

You can do this:

The code is as follows:

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null;
then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    critical-section
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile"
    echo "Held by $(cat $lockfile)"
fi

A more complicated problem is updating a bunch of files and having the script fail gracefully if something goes wrong in the middle of the update. You want to be certain that either all the updates are made correctly or nothing changes at all. For example, you need a script to add users.

The code is as follows:

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

This script can run into problems when there is not enough disk space, or when the process is killed halfway through. In that case you probably want the user account not to exist at all, and his files to be deleted as well.

The code is as follows:

rollback() {
    del_from_passwd $user
    if [ -e /home/$user ]; then
        rm -rf /home/$user
    fi
    exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

At the end of the script you need to use trap - to turn off the rollback call, otherwise rollback would be called when the script exits normally and the script would end up having done nothing.

Stay atomic

Sometimes you need to update a large number of files in a directory at once, for example to rewrite URLs to another site's domain name.
You might write:

The code is as follows:

for file in $(find /var/www -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done

If something goes wrong when the script is halfway done, part of the site uses www.example.com while the rest still uses www.example.net. You could use backups and a trap to recover, but the site's URLs would still be inconsistent during the upgrade.

Workaround:

Make the change an atomic operation: first make a copy of the data, update the URLs in the copy, and then replace the current working version with the copy.
You need to make sure that the copy and the working version are on the same disk partition, so that you can take advantage of the fact that on Linux, moving a directory simply updates the inode the directory entry points to.

The code is as follows:

cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www

This means that if something goes wrong during the update, the live system is not affected. The time during which the live system is affected is also reduced to the duration of the two mv operations, which is very short, because the file system only updates inodes instead of copying all the data.

Disadvantages:

It requires twice as much disk space, and processes that keep files open for a long time will go on using the old versions until they are restarted; it is recommended to restart those processes once the update is complete.
This is not a problem for the Apache server, because it reopens its files on every request.
You can use the lsof command to see which files are currently held open. An advantage is that you now have a backup from before the change, which comes in handy when you need to restore.
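
For example, assuming the old tree was moved to /var/www-old as above, a command along these lines (a sketch, not from the original article) shows which processes still hold the old files open:

lsof +D /var/www-old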
