8 Linux commands that every programmer should know

Article directory
    • cat
    • sort
    • grep
    • cut
    • sed
    • uniq
    • find
    • less

Source http://www.csdn.net/article/2012-09-13/2809917-Linux-Commands-Every%20Developer-Should-Kn

Abstract: Linux has a huge variety of commands, some of which are difficult to use. However, after learning the eight commands described here, you can handle a large number of log-analysis tasks without having to write a program in a scripting language to process them.

Every programmer, at some point in their career, will find that they need to know some Linux. I am not saying you should be a Linux expert; I mean you should be comfortable carrying out tasks on the Linux command line. In fact, once I had learned the following eight commands, I could complete basically any task I needed to.

Note: Each of the following commands has extensive documentation. This article does not try to describe every feature of each command; it covers the most common uses of these most commonly used commands. If you do not know much about Linux commands and want to start learning, this article will give you some basic guidance.

Let's start by processing some data. Suppose we have two files: a list of orders, and the results of processing those orders.

    order.out.log
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:23:45 112, 1, joy of clojure, hardcover, 29.99
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99

    order.in.log
    8:22:20 111, order complete
    8:23:50 112, order sent to fulfillment
    8:24:20 113, refund sent to processing
cat

cat - concatenate files and print on the standard output

The cat command is very simple, as you can see from the example below.

    jfields$ cat order.out.log
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:23:45 112, 1, joy of clojure, hardcover, 29.99
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99

As its description says, you can also use it to concatenate multiple files.

    jfields$ cat order.*
    8:22:20 111, order complete
    8:23:50 112, order sent to fulfillment
    8:24:20 113, refund sent to processing
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:23:45 112, 1, joy of clojure, hardcover, 29.99
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99
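
If you want to keep the merged content around instead of just printing it, you can also redirect cat's output into a new file. A minimal sketch (the file name combined.log is just an example, not something from the logs above):

    # concatenate both logs and save the result to a new file
    jfields$ cat order.in.log order.out.log > combined.log
    # view the saved copy
    jfields$ cat combined.log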

If all you want is to view the contents of these log files, concatenating them and sending them to standard output, as shown above, does the job. That is useful, but the output could be organized more logically.

sort

sort - sort lines of text files

In this case, the sort command is obviously your best choice.

    jfields$ cat order.* | sort
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:22:20 111, order complete
    8:23:45 112, 1, joy of clojure, hardcover, 29.99
    8:23:50 112, order sent to fulfillment
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:24:20 113, refund sent to processing
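
sort can also order lines by a specific field rather than by the whole line. As a quick sketch, assuming GNU or BSD sort and the comma-separated format of order.out.log shown above, you could sort the orders by price:

    # -t sets the field delimiter, -k5 sorts on the fifth field (the price),
    # -n compares numerically and -r reverses, so the most expensive order comes first
    jfields$ sort -t"," -k5 -n -r order.out.log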

As you can see, the data from both files is now sorted together. For small files like these you can simply read the whole thing, but a real log file usually has far too much content for that. In that case you will want to filter out some of it, piping the output of cat and sort into a filtering tool.

grep

grep, egrep, fgrep - print lines matching a pattern

Suppose we are only interested in orders for the patterns of Enterprise Architecture book. With grep we can limit the output to the orders that contain the text "patterns".

    jfields$ cat order.* | sort | grep patterns
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99
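
grep has a few other flags that come up constantly. A small sketch (standard grep options) showing case-insensitive matching and counting matches instead of printing them:

    # -i ignores case, so "patterns" and "Patterns" both match
    jfields$ cat order.* | sort | grep -i patterns
    # -c prints the number of matching lines rather than the lines themselves
    jfields$ cat order.* | grep -c patterns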

Suppose there is a problem with the refund on order 113 and you want to see everything related to that order: you reach for grep again.

    jfields$ cat order.* | sort | grep ":\d\d 113, "
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:24:20 113, refund sent to processing

You will notice that the grep pattern includes a bit more than just "113". That is because a bare "113" could also match a book title or a price. With the extra characters we find exactly the lines we want.
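
One caveat: \d is a Perl-style shorthand that not every grep understands in its default mode. If the pattern above does not match on your system, a more portable sketch of the same idea spells the digits out with a character class:

    # [0-9][0-9] matches the last two digits of the timestamp, so a bare "113"
    # appearing in a title or a price will not match
    jfields$ cat order.* | sort | grep ":[0-9][0-9] 113, "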

Now that we have the details of the refund, we also want to know the daily sales and refund totals. But we only care about the patterns of Enterprise Architecture orders, and within those only about the quantity and the price. We want to strip out everything else.

cut

cut - remove sections from each line of files

We already use grep to filter down to the lines we want. Once we have those lines, cut lets us slice them into fields and drop the data we do not need.

    jfields$ cat order.* | sort | grep patterns
    8:22:19 111, 1, patterns of Enterprise Architecture, kindle edition, 39.99
    8:24:19 113, -1, patterns of Enterprise Architecture, kindle edition, 39.99

    jfields$ cat order.* | sort | grep patterns | cut -d"," -f2,5
    1, 39.99
    -1, 39.99

Now we have reduced the data to a form we can calculate with: paste it into a spreadsheet and the result is a moment away.

cut is all about reducing information and simplifying the task, but for the output we often want something a bit richer. Suppose we also need the order ID, so we can cross-reference it with other related information. We can get the ID with cut, but we want it at the end of the line, wrapped in single quotes.

sed

sed - a stream editor, used to perform basic text transformations on an input stream

The following example shows how to transform our lines with sed; afterwards we use cut to remove the information we do not need.

    jfields$ cat order.* | sort | grep patterns \
    | sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/
    1, patterns of Enterprise Architecture, kindle edition, 39.99, '111'
    -1, patterns of Enterprise Architecture, kindle edition, 39.99, '113'

    lmp-jfields01:~ jfields$ cat order.* | sort | grep patterns \
    > | sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/ | cut -d"," -f1,4,5
    1, 39.99, '111'
    -1, 39.99, '113'

There is more that could be said about the regular expression used here, but it is not complicated. The regex does the following:

    • Deletes the timestamp
    • Captures the order number
    • Deletes the comma and space after the order number
    • Captures the rest of the line

The quotation marks and backslashes make it a bit messy, but you have to include them for the command to work.

Once we have captured the data we want, \1 and \2 let us output it in the desired format. We also add the single quotes we need around the order ID, and a comma to keep the format consistent. Finally, the cut command removes the fields we do not need.
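
To make the pieces easier to see, here is the same pipeline again with the parts of the sed expression annotated. This is just the command above written out with comments, not a different transformation:

    jfields$ cat order.* | sort | grep patterns \
        | sed s/"[0-9\:]* \([0-9]*\)\, \(.*\)"/"\2, '\1'"/
    # [0-9\:]*    matches the timestamp (digits and colons); it is not captured, so it is dropped
    # \([0-9]*\)  capture group 1: the order number
    # \,          the comma after the order number, followed by a space; both are dropped
    # \(.*\)      capture group 2: the rest of the line
    # "\2, '\1'"  the replacement: group 2, a comma, then group 1 wrapped in single quotes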

Now we have a problem. We have shown how to boil the log file down to a more concise order format, but our finance department needs to know which books are in those orders.

uniq

uniq - report or omit repeated lines

The following example shows how to grep only the book-related transactions, cut away the unneeded information, and get a de-duplicated count of each title.

    jfields$ cat order.out.log | grep "\(kindle\|hardcover\)" | cut -d"," -f3 | sort | uniq -c
       1 joy of clojure
       2 patterns of Enterprise Architecture

It seems that this is a simple task.
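
A couple of related variations are worth knowing (standard uniq and sort flags): uniq only collapses adjacent duplicates, which is why we sort first, and if you do not need the counts you can let sort do the de-duplication itself.

    # -d prints only the lines that occur more than once
    jfields$ cat order.out.log | cut -d"," -f3 | sort | uniq -d
    # sort -u returns the de-duplicated list without counts
    jfields$ cat order.out.log | cut -d"," -f3 | sort -u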

All of this works well, provided you can find the file you need in the first place. Sometimes files are buried deep in a directory tree and you have no idea where they are. But if you know the name of the file you are looking for, that is not a problem.

find

find - search for files in a directory hierarchy

In the examples above we worked with order.in.log and order.out.log, which live in my home directory. The following example shows how to find such files anywhere in a deeper directory structure.

    jfields$ find /Users -name "order*"
    /Users/jfields/order.in.log
    /Users/jfields/order.out.log

The find command has many other parameters, but 99% of the time this is the only one I need.
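
For the occasions when a name alone is not enough, a few other common find options are worth keeping in mind. A quick sketch, using the same example directory as above:

    # -iname matches the name case-insensitively
    jfields$ find /Users -iname "order*"
    # -type f limits the results to regular files (no directories)
    jfields$ find /Users -type f -name "order*"
    # -mtime -1 keeps only files modified within the last day
    jfields$ find /Users -name "order*" -mtime -1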

With one simple line you can find the file you want; you can then view it with cat and trim it with cut. When the file is small, piping it to the screen is fine, but when it is too large to fit on the screen you will want to pipe it to the less command.

less

less - allows forward and backward movement within a file

Let's go back to the simple cat | sort example. The following command pipes the merged and sorted content into less. Inside less, use "/" to search forward and "?" to search backward; the search argument is a regular expression.

    jfields$ cat order.* | sort | less

If you type "/113.*" inside less, all of the order 113 information will be highlighted. You can also try "?.*112": all of the timestamps associated with order 112 will be highlighted. Finally, press "q" to quit less.
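
A couple of other less features are handy when working with logs. As a quick sketch (standard less behavior):

    # -N shows line numbers; inside less, G jumps to the end of the file and g back to the top
    jfields$ less -N order.out.log
    # +F follows the file as it grows, much like tail -f; press Ctrl-C to stop following, then q to quit
    jfields$ less +F order.out.log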

Linux has a huge variety of commands, and some of them are difficult to use. But once you have learned the eight commands above, you can handle a large number of log-analysis tasks without having to write a program in a scripting language to process them.
