Bash Advanced (reprinted for preservation)


Basics
  • Learn the basics of Bash. Specifically, type man bash and at least skim the whole thing; it is fairly easy to follow and not that long. Other shells can be nice, but Bash is powerful and almost always available (learning only zsh, fish, or another shell may be handy on your own machine, but it can restrict you in many situations, for instance when you need to work on an existing server).

  • Learn and master at least one text-based editor. Usually Vim (vi) will be the best choice.

  • Learn how to use man to read documentation. Learn to use apropos to find documentation. Know that some commands are not executables but Bash builtins, and that you can get help on them with help and help -d.

  • Learn to redirect output and input with > and <, and to connect commands with pipes using |. Learn about standard output (stdout) and standard error (stderr).

  • Learn about wildcard expansion with * (and perhaps ? and { }), about quoting, and about the difference between double quotes " and single quotes '.

  • Be familiar with Bash job management: &, ctrl-z, ctrl-c, jobs, fg, bg, kill, etc.

  • Know ssh, and the basics of passwordless authentication via ssh-agent, ssh-add, etc.

  • Learn basic file management: ls and ls -l (in particular, learn what every column of ls -l means), less, head, tail and tail -f (or even better, less +F), ln and ln -s (learn the difference between hard links and soft links), chown, chmod, and du (for a quick summary of disk usage: du -hs *). For filesystem management: df, mount, fdisk, mkfs, lsblk.

  • Learn basic network management: ip or ifconfig, dig.

  • Be familiar with regular expressions, and with the various flags to grep and egrep, such as -i, -o, -A, and -B.

  • Learn to use apt-get, yum, dnf, or pacman (depending on your Linux distribution) to find and install packages. Also make sure you have pip installed, since some Python-based command-line tools are easiest to install with pip.

Daily use
  • In Bash, use Tab to auto-complete arguments and ctrl-r to search the command-line history.

  • In Bash, use ctrl-w to delete the last word you typed, ctrl-u to delete the whole line, alt-b and alt-f to move by word, ctrl-k to delete from the cursor to the end of the line, and ctrl-l to clear the screen. Type man readline to see all the default key bindings in Bash; there are a lot. For example, alt-. cycles through previous arguments, and alt-* expands a glob.

  • If you prefer vi-style key bindings, type set -o vi to enable them.

  • Type history to view your command-line history. There are many abbreviations, such as !$ (the last argument of the previous command) and !! (the previous command itself), although these are often easily replaced with ctrl-r and alt-.

  • To go back to the previous working directory: cd -

  • If you are halfway through typing a command but change your mind, press alt-# to add a # at the beginning of the line (turning what you typed into a comment) and press Enter. You can then conveniently return to the half-typed command later via the command-line history.

  • Use xargs (or parallel). They are very powerful. Note that you can control how many items run per line (-L) as well as the maximum parallelism (-P). If you're not sure whether they'll do the right thing, use xargs echo first. Also, -I{} is very handy. For example:

      find . -name '*.py' | xargs grep some_function
      cat hosts | xargs -I{} ssh root@{} hostname
    • pstree -p is a helpful display of the process tree.

    • Use pgrep and pkill to find processes or send them signals by name.

    • Know the various signals you can send processes. For example, to suspend a process, use kill -STOP [pid]. For the full list, see man 7 signal.

    • Use nohup or disown if you want a background process to keep running.

    • Use netstat -lntp or ss -plat to check which processes are listening on ports (TCP by default; add -u for UDP).

    • To see open sockets and files, use lsof.

    • In Bash scripts, use set -x for debugging output. Use strict modes whenever possible: use set -e to make the script abort on errors rather than continuing, and use set -o pipefail to be strict about errors in pipes (though this topic is a bit subtle). For more involved scripts, also use trap.

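    A minimal sketch of such a script header (the temp-file work at the end is purely illustrative):

      #!/usr/bin/env bash
      set -x               # print each command before running it (debug output)
      set -e               # exit immediately if any command fails
      set -o pipefail      # a failure anywhere in a pipeline fails the whole pipeline
      tmpfile=$(mktemp)
      trap 'rm -f "$tmpfile"' EXIT   # clean up the temp file however the script exits
      sort /etc/hosts > "$tmpfile"
      wc -l "$tmpfile"
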
    • In Bash scripts, subshells (written with parentheses (...)) are a convenient way to group commands. A common example is to temporarily move to a different working directory:

      # do something in current dir
      (cd /some/other/dir && other-command)
      # continue in original dir
    • In Bash, note that there are many kinds of expansion. Checking whether a variable exists: ${name:?error message}. For example, if a Bash script requires a single argument, just write input_file=${1:?usage: $0 input_file}. Arithmetic expansion: i=$(( (i + 1) % 5 )). Sequences: {1..10}. Trimming strings: ${var%suffix} and ${var#prefix}. For example, if var=foo.pdf, then echo ${var%.pdf}.txt prints foo.txt.

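    A small script exercising these expansions, assuming it is run with one argument (all names are illustrative):

      #!/usr/bin/env bash
      input_file=${1:?usage: $0 input_file}   # abort with a usage message if $1 is missing
      i=$(( (i + 1) % 5 ))                    # arithmetic expansion (an unset i counts as 0)
      echo {1..10}                            # brace sequence: 1 2 3 ... 10
      var=foo.pdf
      echo "${var%.pdf}.txt"                  # strip the .pdf suffix: prints foo.txt
      echo "${var#foo}"                       # strip the foo prefix: prints .pdf
      echo "Processing $input_file (step $i)"
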
    • The output of a command can be treated like a file via <(some command). For example, compare the local /etc/hosts with a remote one:

      diff /etc/hosts <(ssh somehost cat /etc/hosts)
    • Know about "here documents" in Bash, as in cat <<EOF ....

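    A minimal sketch; note that the terminating delimiter must start at the beginning of a line, and quoting it suppresses expansion:

      cat <<EOF
Hello, $USER. Today is $(date +%F).
EOF
      cat <<'EOF'
Here $USER and $(date) are printed literally, not expanded.
EOF
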
    • In Bash, both standard output and standard error can be redirected: some-command >logfile 2>&1. Often, to make sure a command does not leave an open file handle on standard input that would tie it to your current terminal, it is good practice to add </dev/null.

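    Putting the pieces together for a long-running job (the nohup backgrounding goes beyond this tip, and some-command is a placeholder):

      nohup some-command </dev/null >logfile 2>&1 &
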
    • Use man ascii to view an ASCII table with hexadecimal and decimal values. man unicode, man utf-8, and man latin1 are helpful for understanding common encodings.

    • Use screen or tmux to multiplex the terminal, which is especially useful over ssh (to preserve your session). A more lightweight alternative is dtach.

    • In ssh, knowing how to open a tunnel with -L or -D (and occasionally -R) is useful, for example when you need to reach a web site from a remote server.

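    Two sketches, with host names and port numbers as placeholders:

      # Local forward: the remote host's port 8080 becomes http://localhost:9999 here
      ssh -L 9999:localhost:8080 user@remotehost

      # Dynamic forward: a local SOCKS proxy on port 1080 that routes traffic via remotehost
      ssh -D 1080 user@remotehost
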
    • Small optimizations to your ssh settings can be useful, for example a ~/.ssh/config containing options to avoid dropped connections, enable compression, and multiplex channels in certain environments:

      TCPKeepAlive=yes
      ServerAliveInterval=15
      ServerAliveCountMax=6
      Compression=yes
      ControlMaster auto
      ControlPath /tmp/%r@%h:%p
      ControlPersist yes
    • Some other ssh options are security-sensitive and should only be enabled with care, for example per-host within a trusted network: StrictHostKeyChecking=no, ForwardAgent=yes

    • To get the octal-format permissions of a file, use something like:

      stat -c '%A %a %n' /etc/timezone
    • Use percol to interactively select values from the output of another command.

    • Use fpp (PathPicker) to interact with files based on the output of another command (such as git).

    • To serve all files (and subdirectories) in the current directory over HTTP to anyone on your network, use: python -m SimpleHTTPServer 7777 (port 7777, Python 2) or python -m http.server 7777 (port 7777, Python 3).

Documentation and data processing
  • To locate a file by name in the current directory tree, use find . -iname '*something*' (or similar). To find a file anywhere by name, use locate something (but bear in mind that updatedb may not have indexed recently created files).

  • For searching through source code or data files, ag is better than grep -r.

  • To convert HTML to text: lynx -dump -stdin

  • For conversions among Markdown, HTML, and just about any other document format, try pandoc.

  • If you must handle XML, xmlstarlet is old but still good.

  • Use jq to process JSON.

  • For Excel or CSV files, csvkit provides in2csv, csvcut, csvjoin, csvgrep, and other tools.

  • For Amazon S3, s3cmd is convenient and s4cmd is faster. Amazon's official aws is essential for other AWS-related work.

  • Know about sort and uniq, including uniq's -u and -d options; see the one-liners section below. See also comm.

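  A few illustrative uses (the file names are placeholders):

      sort access.log | uniq -c | sort -rn | head   # most frequent lines first
      sort emails.txt | uniq -d                     # only lines that appear more than once
      sort emails.txt | uniq -u                     # only lines that appear exactly once
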
  • Know about cut, paste, and join to manipulate text files. Many people use cut but forget about join.

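  A sketch with made-up files; note that join expects both inputs sorted on the join field (the first column by default):

      # ids.txt: "1 alice" / "2 bob"; scores.txt: "1 93" / "2 87"
      join ids.txt scores.txt        # -> "1 alice 93" and "2 bob 87"
      cut -d, -f1,3 data.csv         # keep columns 1 and 3 of a comma-separated file
      paste names.txt scores.txt     # merge the two files line by line, tab-separated
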
  • Know how to use wc to count newlines (-l), characters (-m), words (-w), and bytes (-c).

  • Know how to use tee to copy standard input to a file while also passing it through to standard output, as in ls -al | tee file.txt.

  • Know that the locale affects many command-line tools in subtle ways, including sort order and performance. Most Linux installations set LANG or other locale variables to your local settings. Be aware that sort results may change when you change the locale, and that internationalization may make sort and other commands run many times slower. In some cases (such as the set operations below) you can safely use export LC_ALL=C to ignore internationalization and use byte-based order.

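  A brief sketch of the idea (the file name is a placeholder):

      export LC_ALL=C
      sort huge.txt > huge.sorted.txt   # byte-order sort, often much faster on large files
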
  • Know basic awk and sed for simple data munging. For example, to sum all the numbers in the third column of a text file: awk '{ x += $3 } END { print x }'. This is probably 3X faster and 3X shorter than the equivalent Python.

  • To replace all occurrences of a string in place, in one or more files:

      sed -i 's/old-string/new-string/g' my-files-*.txt
    • To rename many files at once according to a pattern, use rename. For more complex renaming rules, repren may be helpful.
      # Recover backup files foo.bak -> foo:
      rename 's/\.bak$//' *.bak
      # Full rename of filenames, directories, and contents foo -> bar:
      repren --full --preserve-case --from foo --to bar .
  • Use shuf to shuffle a file or select random lines from it.

  • Know sort's options. Know how keys work (-t and -k). In particular, note that you need to write -k1,1 to sort by only the first field; -k1 means sort according to the whole line. Stable sort (sort -s) can be useful. For example, to sort first by field 2 and secondarily by field 1, you can use sort -k1,1 | sort -s -k2,2. For handling human-readable numbers (such as the output of du -h), use sort -h.

  • If you ever need to write a literal tab character on the Bash command line, press ctrl-v [Tab] or type $'\t' (the latter is better because you can copy and paste it).

  • The standard tools for comparing and patching source code are diff and patch. Use diffstat for summary statistics of a diff. Note that diff -r works on entire directories; use diff -r tree1 tree2 | diffstat for a summary of changes.

  • For binary files, use hd for simple hex dumps and bvi for binary editing.

  • Also for binary files, strings (plus grep, etc.) lets you find bits of text in them.

  • For binary diffs (delta compression), use xdelta3.

  • To convert text encodings, try iconv, or uconv for more advanced use; uconv supports some advanced Unicode features. For example, this command lowercases text and removes accents (by expanding and dropping them):

      uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC; ' < input.txt > output.txt
    • To split files into pieces, see split (to split by size) and csplit (to split by a pattern).

    • Use zless, zmore, zcat, and zgrep to operate on compressed files.

System debugging
  • curl and curl -I are handy for web debugging; wget is their good companion, and httpie is a more modern alternative.

  • Use iostat, netstat, top (htop is better), and dstat to get the state of disk, CPU, and network. Master these tools and you can quickly get an overview of the current state of the system.

  • To have a deep overall understanding of the system, use glances . It provides you with some system-level data in a terminal window. This can be very helpful for quickly checking each subsystem.

  • To understand memory status, run free and vmstat and understand their output. In particular, be aware that the "cached" value is memory held by the Linux kernel as a file cache, so it effectively counts toward the free memory figure.

  • Java system debugging is a different kettle of fish, but one simple trick on Oracle's JVM and some others is that you can run kill -3 <pid> and a full stack trace plus a heap summary (including garbage-collection details) will be dumped to standard output/log files.

  • Use mtr as a better traceroute to identify network issues.

  • Use ncdu to review disk usage; it is much quicker to work with than the usual commands like du -sh *.

  • To find which socket connection or process is using bandwidth, try iftop or nethogs.

  • The ab tool (bundled with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, use siege.

  • For more serious network debugging, use wireshark, tshark, or ngrep.

  • Know strace and ltrace. These can be very helpful if a program is failing, hanging, or crashing and you don't know why, or if you want to get a general idea of performance. Note the profiling option (-c) and the option to attach to a running process (-p).

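  Two illustrative invocations (the PID is a placeholder):

      strace -c ls >/dev/null              # -c: count and summarize the system calls ls makes
      strace -f -p 1234 -e trace=network   # -p: attach to PID 1234 and watch network-related calls
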
  • Know ldd to check shared libraries.

  • Know how to connect to a running process with gdb and get its stack traces.

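  A sketch of the usual sequence (the PID is a placeholder):

      gdb -p 1234                    # attach to the running process with PID 1234
      # at the (gdb) prompt:
      #   bt                    backtrace of the current thread
      #   thread apply all bt   backtraces of all threads
      #   detach                release the process and let it keep running
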
  • Use /proc. It is sometimes amazingly helpful when debugging live problems. Examples: /proc/cpuinfo, /proc/xxx/cwd, /proc/xxx/exe, /proc/xxx/fd/, /proc/xxx/smaps.

  • When debugging something that went wrong in the past, sar can be very helpful. It shows historical statistics on CPU, memory, network, etc.

  • For deeper systems and performance analysis, look at stap (SystemTap), perf, and sysdig.

  • Check which Linux distribution you are on (works on most distros): lsb_release -a

  • Check dmesg whenever something is acting really strangely (it could be a hardware or driver issue).

One-liners

Some examples of command combinations:

    • When you need to perform set union, intersection, or difference on text files, combining sort and uniq is very helpful. Suppose a and b are text files with differing contents. This is efficient and works on small files as well as multi-gigabyte ones (sort is not limited by memory, though you may need the -T option if /tmp is on a small root partition). See the notes above about LC_ALL and sort's -u option.
      sort a b | uniq > c        # c is a union b
      sort a b | uniq -d > c     # c is a intersect b
      sort a b b | uniq -u > c   # c is set difference a - b
    • Use grep . * to quickly examine the contents of all the files in a directory (each line is prefixed with the file name), for example in directories full of config files like /sys, /proc, or /etc.

    • To sum all the numbers in the third column of a text file (this is probably 3X faster and 3X less code than the equivalent Python):

      awk '{ x += $3 } END { print x }' myfile
    • If you want to see sizes and dates for a tree of files, this is like a recursive ls -l but easier to read than ls -lR:
      find . -type f -ls
    • Say you have a text file like a web server log, and a certain value appears on some lines, such as an acct_id parameter in the URI. If you want a tally of how many requests there are for each acct_id:
      cat access.log | egrep -o 'acct_id=[0-9]+' | cut -d= -f2 | sort | uniq -c | sort -rn
    • Run this function to get a random tip from this document (it parses the Markdown source and extracts a list item):
      function taocl() {
        curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
          pandoc -f markdown -t html |
          xmlstarlet fo --html --dropdtd - |
          xmlstarlet sel -t -v "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
          xmlstarlet unesc | fmt -80
      }
Obscure but useful
  • expr: Evaluate arithmetic expressions or regular-expression matches

  • m4: Simple Macro Processor

  • yes: Print strings multiple times

  • cal: Beautiful Calendar

  • env: Executes a command (useful in a script file)

  • printenv: Print environment variables (useful when debugging or when using script files)

  • look: Find words that begin with a specific string

  • cut, paste and join: Data manipulation

  • fmt: Formatting text paragraphs

  • pr: Formatting text into a page/column form

  • fold: Wrap lines of text

  • column: Formatting text into multiple columns or tables

  • expand and unexpand: Convert between tabs and spaces

  • nl: Add line number

  • seq: Printing numbers

  • bc: Calculator

  • factor: Factor integers

  • gpg: Encrypt and sign files

  • toe: Terminfo Entries List

  • nc: Network debugging and data transmission

  • socat: Socket relay, similar to netcat

  • slurm: Network Visualization

  • dd: Transferring data between files or devices

  • file: Determine file type

  • tree: Display directories and files as a tree, like a recursive ls

  • stat: File information

  • tac: Reverse Output file

  • shuf: Select a few lines randomly in the file

  • comm: Compare sorted files line by line

  • pv: Monitor data through the pipeline

  • hd and bvi: Dump or edit binary files

  • strings: Extracting text from a binary file

  • tr: Translate or delete characters

  • iconv or uconv: Convert text encodings

  • split and csplit: Split files

  • units: Converting one unit of measure to another equivalent unit of measure (see /usr/share/units/definitions.units )

  • 7z: High-ratio file compression

  • ldd: Dynamic Library Information

  • nm: Extracting symbols from the obj file

  • ab: Benchmark web servers

  • strace: System call debugging

  • mtr: Better Network Debug Tracking tool

  • cssh: Visual concurrency Shell

  • rsync: Synchronizing files and folders via SSH

  • wireshark and tshark: Packet capture and network debugging tools

  • ngrep: grep on the network layer

  • host and dig: DNS lookups

  • lsof: List open files and socket/port information for the current system

  • dstat: System Status View

  • glances: High-level Multi-subsystem Overview

  • iostat: CPU and hard disk status

  • htop: Top's enhanced version

  • last: Login history

  • w: View the user who is logged on

  • id: User/Group ID information

  • sar: System History data

  • iftop or nethogs: Network utilization by socket or process

  • ss: Socket Data

  • dmesg: Boot and system error messages

  • hdparm: SATA/ATA disk manipulation and performance analysis

  • lsb_release: Linux Distribution Information

  • lsblk: List block devices: shows your disks and disk partitions as a tree

  • lshw, lscpu, lspci, lsusb and dmidecode: View hardware information, including CPU, BIOS, RAID, graphics card, USB devices, etc.

  • fortune, ddate and sl: Well, it largely depends on whether you consider steam locomotives and cryptic quotations "useful"
