Counting, viewing and filtering log files
1. Count the number of distinct IPs in a *.log log file:
awk '{print $1}' test.log | sort | uniq | wc -l
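This assumes the client IP is the first whitespace-separated field of each line, as in the common/combined access-log format, for example (an illustrative line only):
192.168.1.10 - - [10/Dec/2016:05:36:42 +0000] "GET / HTTP/1.1" 200 612
If your log format puts the IP elsewhere, change $1 to the matching field.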
2. Query the 10 IPs with the most requests:
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -10
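An equivalent that does the counting inside awk and only sorts the totals (a sketch, again assuming the IP is field 1):
awk '{cnt[$1]++} END{for (ip in cnt) print cnt[ip], ip}' access.log | sort -nr | head -10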
3. View log entries from a particular time period:
grep "2017:0[3-6]" access.log
4. The IPs with the most connections (current connections, via netstat):
netstat -ntu | tail -n +3 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head -n 5
tail -n +3: drop the first two header lines. awk '{print $5}': take the 5th field (the foreign address). cut -d: -f1: keep only the IP part. sort: sort the IPs. uniq -c: print the number of occurrences of each repeated line (and collapse duplicates). sort -n -r: sort those counts in descending order. head -n 5: take the top 5 IPs.
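For context, netstat -ntu output begins with two header lines and then lists one connection per line with the remote end in column 5, roughly like this (illustrative values):
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.1.5:22          203.0.113.7:51234       ESTABLISHED
That layout is why the pipeline skips the first two lines, takes field 5, and cuts the port off after the colon.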
5. Count, per hour and per IP, the requests in one day's access.log with a shell one-liner:
awk -vFS="[:]" '{gsub("-.*","",$1); num[$2" "$1]++} END{for (i in num) print i, num[i]}' access.log
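How this works, assuming combined-format lines like the illustrative one in item 1: splitting on ":" makes $2 the hour, and gsub("-.*","",$1) deletes everything from the first "-" onward so only the client IP remains in $1, leaving num keyed by "hour IP". You can see the two pieces it keys on with:
echo '192.168.1.10 - - [10/Dec/2016:05:36:42 +0000] "GET / HTTP/1.1" 200 612' | awk -vFS="[:]" '{gsub("-.*","",$1); print $2, $1}'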
6. Filter on a keyword with grep:
grep '01/Sep/2017:16:06:47' logs/access.log
cat /opt/mongodb/log/mongodb.log.2016-12-10T05-36-42 | grep "Dec"
With sed: sed -n '/Dec 10/p' /opt/mongod/log/mongod.log
With awk: awk '/Dec 10/{print $0}' /opt/mongod/log/mongod.log
Logs at a specific point in time:
With sed: sed -n '/Nov 11 16:24:17/p' /var/log/secure
With awk: awk '/Nov 11 16:24:17/{print $0}' /var/log/secure
tail -n 10 test.log: show the last 10 lines of the log.
tail -n +10 test.log: show all lines from line 10 onward.
head -n 10 test.log: show the first 10 lines of the log file.
head -n -10 test.log: show everything except the last 10 lines.
cat -n test.log | tail -n +92 | head -n 20: tail -n +92 starts from line 92 of the log, and head -n 20 keeps the first 20 lines of that result (lines 92 through 111).
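If the line range is known in advance, sed can do the same extraction in one step (the numbers here are just examples):
sed -n '92,111p' test.log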
7. Find logs within a specified time range:
sed -n '/2014-12-17 16:17:20/,/2014-12-17 16:17:36/p' test.log
grep '2014-12-17 16:17:20' test.log
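Note that the sed range prints from the first line matching the start timestamp through the first line matching the end timestamp, so both timestamps must actually occur in the log (if the end pattern never matches, sed prints to the end of the file). If every line begins with a YYYY-MM-DD HH:MM:SS timestamp, an awk sketch that compares timestamps avoids that requirement:
awk -v s="2014-12-17 16:17:20" -v e="2014-12-17 16:17:36" '{t=$1" "$2} t>=s && t<=e' test.log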
8. Use > xxx.txt to save the output to a file, which you can then download and analyze. For example:
" Awesome " >test.log