Common log statistics methods in the Shell


Earlier I published "Hadoop: a sledgehammer to crack a nut — general log file querying and statistics with Python + shell". That article required combining Python with the shell, which may be a bit of a barrier, so here the shell part is split out on its own. Below are some of the most basic methods of log statistics.

(1) View files


more crawler.log

View the crawler.log log, one screen at a time.


tail -n 100 crawler.log

View the last 100 lines of crawler.log.
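
Two related commands I use alongside these (standard coreutils options, not mentioned in the original):

head -n 100 crawler.log

View the first 100 lines of crawler.log.

tail -f crawler.log

Follow crawler.log, printing new lines as they are appended (Ctrl-C to stop).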


(2) Matching statistics


cat *.log | grep "ERROR" | wc -l

Count the number of lines containing "ERROR" across *.log. Drop the last pipe (i.e. run just cat *.log | grep "ERROR") to see exactly which lines matched; that is not recommended for large files.
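
For a single file, grep can also count by itself (a small variation of mine, not from the article):

grep -c "ERROR" crawler.log

Prints only the number of matching lines in crawler.log.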


(3) Regular expression matching statistics


cat *.log | grep ".*Append \(http:\/\/.*\?\) to.*"

Show the lines in *.log that match the regular expression .*Append \(http:\/\/.*\?\) to.*. Why the backslash before the parentheses? In grep's default (basic) regular expression syntax, grouping parentheses must be written as \( and \), and a few other metacharacters likewise need a leading backslash.
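
If the backslashes get hard to read, one option (my own rewrite, not from the article) is grep -E, which switches to extended regular expressions where the grouping parentheses are written bare; the [^ ]+ below assumes the URL contains no spaces:

cat *.log | grep -E "Append (http://[^ ]+) to"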


(4) The content of the matching regular expression is extracted, the weight, and then the statistics.

For example, from a crawler's log file I want to count how many URLs have been crawled, with each URL counted only once. The log format is known to be "Append http://URL to ...", and the same URL may appear many times, so the previous method would give the total number of matching lines rather than the number of distinct URLs. Instead:


cat * | grep "Append" | sed 's/.*Append \(http:\/\/.*\?\) to.*/\1/g' | sort | uniq | wc -l

Note that the first pipe, grep, pulls out the lines that match the rule (those containing "Append"). The second is the shell's well-known substitution command, sed 's/regular expression/replacement/g', and here the regular expression uses a group (the parentheses); in the replacement, \1 stands for the first group, \2 would stand for the second, and so on. So we find each matching line and replace the whole line with its first matching group, which is exactly the URL we need. The next stage, sort | uniq, removes duplicate URLs (uniq only collapses adjacent duplicates, which is why the stream is sorted first), and the final pipe, wc -l, counts what is left.
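
A quick way to see the pipeline at work on made-up input (the log lines below are hypothetical; only their "Append ... to ..." shape matches the format above, and I drop the \? since sed has no non-greedy matching, so the group is greedy either way):

printf '%s\n' \
  "2016-01-01 12:00:00 Append http://example.com/a to queue" \
  "2016-01-01 12:00:01 Append http://example.com/b to queue" \
  "2016-01-01 12:00:02 Append http://example.com/a to queue" \
  | sed 's/.*Append \(http:\/\/.*\) to.*/\1/' | sort | uniq | wc -l

This prints 2; the duplicated URL is counted only once.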



(5) Statistics of maximum/minimum/average

Building on the previous example: if what we extract is a number rather than a URL, we can compute the maximum/minimum/average with an awk stage. Replace the sort | uniq | wc -l at the end with:


awk '{if (min=="") {min=max=$1}; if ($1>max) {max=$1}; if ($1<min) {min=$1}; total+=$1; count+=1} END {print total/count, min, max}'
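
A quick sanity check of the one-liner with made-up numbers (not from any real log):

printf '%s\n' 3 1 7 5 | awk '{if (min=="") {min=max=$1}; if ($1>max) {max=$1}; if ($1<min) {min=$1}; total+=$1; count+=1} END {print total/count, min, max}'

This prints "4 1 7": the average, minimum and maximum of the four input numbers.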


(6) Group statistics

Building on example (4), if you want to count how many times each URL occurs (the effect is similar to MySQL's GROUP BY), replace the sort | uniq | wc -l at the end with:


awk '{a[$1]++} END {for (j in a) print j "," a[j]}'

Output format: group, number of occurrences.
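
Fed a few hypothetical URLs directly (in practice the input would come from the sed stage of example (4)), it behaves like this:

printf '%s\n' http://a http://b http://a | awk '{a[$1]++} END {for (j in a) print j "," a[j]}'

This prints http://a,2 and http://b,1, one group per line; note that for (j in a) visits the groups in no particular order.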



Once you have mastered the basic usage above, you can cover most day-to-day statistics needs. The awk snippets are essentially fixed; what usually needs to change is only the regular expression part.


