When analyzing user behavior, you often need to aggregate statistics from user logs; shell commands make it easy to pull the numbers out, after which you can paste them into Excel.
For example, to pull out the log lines for requests to the loadCustomProcess path and sort them by elapsed time:
grep "loadCustomProcess" /home/workflow/socket.txt | awk -F " " '{print $11}' | awk -F ":" '{print $2}' | sort -nr
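The pipeline can be tried end to end on fabricated data. The sample lines below are an assumption made purely for illustration: they pretend the 11th space-separated field of socket.txt looks like `cost:<millis>`; the real field position and label depend on the actual log format.

```shell
# Hypothetical sample data: assumes the 11th space-separated field
# holds the elapsed time as "cost:<millis>" (invented for this demo).
printf '%s\n' \
  'a b c d e f g h i loadCustomProcess cost:120' \
  'a b c d e f g h i loadCustomProcess cost:45' \
  'a b c d e f g h i loadCustomProcess cost:300' > /tmp/socket_sample.txt

# Field 11 is e.g. "cost:300"; splitting it on ":" keeps the number;
# sort -nr orders the numbers, largest first.
grep "loadCustomProcess" /tmp/socket_sample.txt |
  awk -F " " '{print $11}' |
  awk -F ":" '{print $2}' |
  sort -nr
# prints 300, 120, 45 (one per line)
```

Running the same pipeline against the real file is then just a matter of swapping the path and the field number back in.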
To find the top 10 most frequent records among the lines containing "INFO":
grep "INFO" /usr/share/tomcat6/logs/flowplatform.log | awk -F " " '{print $8}' | sort | uniq -c | sort -nr | head -10
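This one can also be sanity-checked on made-up input. Here I assume, again only for illustration, that the 8th field of each INFO line is a request path; the real column depends on the log layout.

```shell
# Hypothetical log excerpt: assumes field 8 of each INFO line is a
# request path (invented here; match it to your real log format).
printf '%s\n' \
  'd1 d2 d3 d4 INFO t1 t2 /api/list' \
  'd1 d2 d3 d4 INFO t1 t2 /api/save' \
  'd1 d2 d3 d4 INFO t1 t2 /api/list' \
  'd1 d2 d3 d4 INFO t1 t2 /api/list' > /tmp/flow_sample.log

# sort groups identical paths together, uniq -c counts each group,
# and sort -nr | head -10 keeps the ten most frequent.
grep "INFO" /tmp/flow_sample.log |
  awk -F " " '{print $8}' |
  sort | uniq -c | sort -nr | head -10
# highest count first: 3 /api/list, then 1 /api/save
```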
Breaking the pipelines down:
grep "loadCustomProcess" /home/workflow/socket.txt — finds the lines in the file /home/workflow/socket.txt that contain loadCustomProcess.
uniq -c — counts consecutive duplicate lines (note: consecutive only), which is why the input must go through sort first to get the effect of COUNT with GROUP BY in SQL.
awk -F " " '{print $7}' — plays the role of the SELECT field list in SQL: -F " " splits each line on spaces, each resulting piece is a field, and $7 is the seventh field.
sort — orders the values so identical ones sit next to each other, preparing for the aggregation that follows.
sort -nr — sorts numerically in descending order, equivalent to ORDER BY ... DESC in SQL.
head -10 — keeps only the first 10 lines.
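The point that uniq -c only merges consecutive duplicates is easy to see with a tiny experiment:

```shell
# Without sort, the two "a" lines are not adjacent, so uniq -c
# reports three separate groups:
printf 'a\nb\na\n' | uniq -c          # 1 a / 1 b / 1 a

# With sort first, the duplicates become adjacent and are merged
# into one counted group:
printf 'a\nb\na\n' | sort | uniq -c   # 2 a / 1 b
```

This is exactly why every counting pipeline above runs sort before uniq -c.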
In short, a few chained shell commands are all it takes to analyze statistics in log files.