Text Command Data Analysis
Suppose hundreds of lines of interface access logs have been pulled from production, recorded in the following format:
/data1/www/logs/archives/170524/170524.v6.weibo.com_10.72.13.113.0.cn.gz:v6.weibo.com 123.125.104.20 0.016s - [24/May/2017:14:04:37 +0800] "POST /aj/video/playstatistics?ajwvr=6&cuid=2008282113&lang=zh-cn&ip=60.255.47.150&curl=http%3A%2F%2Fd.weibo.com%2F%3Ftopnav%3D1%26amp%3Bmod%3Dlogo%26amp%3Bwvr%3D6&ua=Mozilla%2F5.0%20%28Windows%20NT%205.1%29%20AppleWebKit%2F537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome%2F49.0.2623.221%20Safari%2F537.36%20SE%202.X%20MetaSr%201.0&wvr=v5 HTTP/1.1" "http://zhaoren.weibo.com" - "SUP=- SUBP=-" "request_id=1000659645207911167" "weibo.com Swift Framework HttpRequest class" "req_uid=2008282113"
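Before building the full pipeline, it can help to confirm which space-separated field holds the client IP. A quick check of the first line (assuming the log file is named play.log, as in the command below) might look like this:

head -1 play.log | awk '{print $2}'

For the sample line above this prints 123.125.104.20, the second field.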
To deduplicate the log by IP and count how many times each IP appears, run the following command:
cat play.log | awk -F ' ' '{print $2}' | sort -k 1 -n -r | uniq -c > rizhi.log
Explanation: each line is split on spaces and the second field (the client IP) is printed; sort then orders the lines by the first column, with -n for numeric sorting and -r for descending order; uniq -c counts how many times each identical line occurs. The output looks like this:
3 223.166.87.59
1 60.12.35.5
1 1.189.96.233
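Note that because awk emits only the IP column, sort -k 1 -n -r here orders the lines by IP value and uniq -c then counts adjacent duplicates; the counts themselves are not the sort key. If you would rather rank IPs by how often they appear, a minimal alternative sketch (still assuming the IP is the second space-separated field and the file is named play.log) lets awk do the counting and sorts on the count column:

awk '{count[$2]++} END {for (ip in count) print count[ip], ip}' play.log | sort -k 1 -n -r > rizhi.log

Here awk builds an associative array keyed by IP, prints "count ip" pairs at the end, and the final sort orders them numerically by count, highest first.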
This article is from the "PHP Program Ape" blog. Please keep this source when reposting: http://okowo.blog.51cto.com/4923464/1929992