The new book "Unix/Linux Network Log Analysis and Traffic Monitoring" has been on sale for one month and ranks in the top 10 by sales. Thank you for your support; another major new book launch is on the way. Happy New Year 2015!
We often need real-time insight into a server's health: which URLs receive the most visits, how many requests the server handles per second, and which search engines are crawling our site. We could answer these questions by analyzing the access log files after the fact, but that gives us neither real-time statistics nor intuitive summaries.
Adding head -n 100 displays the first 100 lines.
More queries
Busiest minutes (most requests per minute; field 4 of the access log is the [dd/Mon/yyyy:HH:MM:SS timestamp):
awk '{print $4}' access_log | cut -c 14-18 | sort | uniq -c | sort -nr | head
Busiest hours (most requests per hour):
awk '{print $4}' access_log | cut -c 14-15 | sort | uniq -c | sort -nr | head
Requests per second within a given minute:
grep '01/Nov/2013:15:59' access.log | cut -d '[' -f 2 | awk '{print $1}' | sort | uniq -c | sort -nr | head -60
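The counting pipeline above (sort | uniq -c | sort -nr) can also be sketched in Python with collections.Counter. The top_minutes name and the sample lines are illustrative, assuming the common Apache access-log format:

```python
# Python equivalent of: awk '{print $4}' | cut -c 14-18 | sort | uniq -c | sort -nr | head
from collections import Counter

def top_minutes(lines, n=10):
    """Return (HH:MM, count) pairs with the most requests, busiest first."""
    minutes = Counter()
    for line in lines:
        # Field 4 looks like "[01/Nov/2013:15:59:59"; slice [13:18] is "15:59"
        # (cut -c 14-18 is 1-indexed, Python slicing is 0-indexed).
        ts = line.split()[3]
        minutes[ts[13:18]] += 1
    return minutes.most_common(n)

log = [
    '1.2.3.4 - - [01/Nov/2013:15:59:59 +0800] "GET / HTTP/1.1" 200 512',
    '1.2.3.5 - - [01/Nov/2013:15:59:58 +0800] "GET /a HTTP/1.1" 200 100',
    '1.2.3.6 - - [01/Nov/2013:16:00:01 +0800] "GET /b HTTP/1.1" 200 100',
]
print(top_minutes(log))  # [('15:59', 2), ('16:00', 1)]
```

The shell one-liners are still preferable for ad-hoc use; the Python version is handy when the counts feed into further processing.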
2. Record CPU and memory usage
The configuration file is /usr/share/logwatch/default.conf/logwatch.conf:

    # vi /usr/share/logwatch/default.conf/logwatch.conf
    # Modify the following parameters:
    MailTo = <recipient e-mail address>
    MailFrom = <sender address>
    Detail = Low | Med | High   # level of detail of the log analysis report: simple, medium, or detailed
    Mailer = "/usr/local/msmtp/bin/msmtp -t"   # the default is sendmail; change the path to msmtp
    # Save and exit, then test:
    # /usr/share/logwatch/scripts/logwatch.pl --mailto [email protected]

This sends the current system log analysis report to the [email protected] mailbox; check whether it arrives.
A process is running and continuously writing logs, and you need to monitor the log file's updates in real time (commonly done while debugging). What should you do? Repeatedly open and close the file? No. Two frequently used commands each offer a way:
tail -F log.txt: while another process writes to the log, tail prints the new content in real time (unlike -f, -F keeps following the file by name even if it is rotated or recreated).
less log.txt: press F to start following updates as they arrive; press Ctrl-C to stop following and return to normal paging.
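What tail -f does under the hood can be sketched in a few lines of Python: open the file, seek to the end, and poll for newly appended lines. The follow function and its parameters below are illustrative, not a real tail implementation:

```python
import os
import time

def follow(path, poll=0.5, seek_end=True, max_polls=None):
    """Yield lines appended to `path`, like `tail -f`.

    A real `tail -F` would also reopen the file if it is rotated or
    recreated; that part is omitted here for brevity. `max_polls`
    bounds how many empty reads we tolerate (None = follow forever).
    """
    with open(path) as f:
        if seek_end:
            f.seek(0, os.SEEK_END)   # start at the end, like tail -n 0 -f
        polls = 0
        while max_polls is None or polls < max_polls:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                polls += 1
                time.sleep(poll)     # no new data yet; wait and retry
```

With seek_end=False the same loop doubles as a plain line reader, which makes the polling logic easy to test without a concurrent writer.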
Server side: the backend tails catalina.out via subprocess and pushes each new line to the browser over a WebSocket:

    command = 'tail -F /catalina.out'
    popen = subprocess.Popen(command, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, shell=True)
    while True:
        line = popen.stdout.readline().strip()
        ws.send(line)

Application front end (logs.html): a page whose body runs init() on load, with a div id="log" for output, a text input id="msg" whose onkeypress calls onKey(event), and a Send button wired to send(). Run a test.
Conclusion: front end -> receiver -> server side. Driven by use cases, with case design and coding proceeding in parallel; service (M) and consumption (V/C) separated; unit and interfac
Using a Nagios plug-in to log on to the router for ping monitoring
Router_check_apn_ping.c
#include
The script auto_ssh_route01_gglc_80_49.sh is called:
    #!/usr/bin/expect -f
    #set port 22
    set user xxxooo
    set host 114.66.80.49
    set password xxxooo@2014
    set timeout 30
    spawn ssh $user@$host
    expect "*assword:*"
    send "$password\r"
    expect "*IRT*"
    send "ping -c 5 -m 1000 10.7.0.186\r"
    expect "*IRT*"
    send "quit"
    #expect eof
In general, logs are the first place where an application's problems show up. We look for abnormal records in the mass of log entries, record them, and raise alarms as the situation warrants. We can monitor system logs as well as Nginx, Apache, and business logs. Here I take the common MySQL log as an example.
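As a minimal illustration of finding abnormal records in a mass of log lines, here is a sketch; the scan_for_errors name and the marker strings are assumptions for illustration, not from the original article:

```python
# Scan log lines for abnormal records and collect them for alerting.
def scan_for_errors(lines, markers=("[ERROR]", "[Warning]")):
    """Return the lines that contain any of the given markers."""
    hits = []
    for line in lines:
        if any(m in line for m in markers):
            hits.append(line)
    return hits

log = [
    "2015-01-01 08:00:01 1 [Note] InnoDB: started",
    "2015-01-01 08:00:02 1 [ERROR] Can't open the mysql.plugin table",
    "2015-01-01 08:00:03 1 [Warning] IP address could not be resolved",
]
for hit in scan_for_errors(log):
    print(hit)
```

A real monitor would feed the hits into a mailer or alerting system rather than printing them.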
When a Linux system is consuming too much CPU, a script can help with troubleshooting:
1. Monitor in real time; as soon as a process with high CPU consumption appears, the program starts.
2. Analyze that process and identify the corresponding threads.
3. Locate those threads in the program's log files for analysis, e.g. for WebSphere middleware
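Step 1 above (finding the high-CPU process) can be sketched by parsing ps aux output. The top_cpu_pid helper and the sample output below are illustrative; in practice the text would come from subprocess.check_output(["ps", "aux"]), but a captured sample keeps the sketch self-contained:

```python
def top_cpu_pid(ps_aux_text):
    """Return (pid, cpu) for the highest-%CPU process in `ps aux` output."""
    best = None
    for line in ps_aux_text.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        pid, cpu = int(fields[1]), float(fields[2])    # PID and %CPU columns
        if best is None or cpu > best[1]:
            best = (pid, cpu)
    return best

sample = """USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 19356 1544 ? Ss 10:01 0:01 /sbin/init
was 2345 97.3 5.2 812340 21340 ? Sl 10:02 9:59 java -server
mysql 888 2.1 3.0 401200 12000 ? Sl 10:01 0:30 mysqld"""
print(top_cpu_pid(sample))  # (2345, 97.3)
```

Step 2 would then inspect the threads of that PID (e.g. via ps -L or /proc/PID/task) before moving on to the logs.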
There is an access.log in the following format, where each line begins with an IP address.
1. Requirement:
# 1. If the same IP address is seen more than 200 times within 60 s, add the IP to a blacklist.
# Analysis:
# 1. Read the file once per 60 seconds
# 2. Split each line and take the first element, the IP address
# 3. Collect all IPs into a list; if an IP occurs more than 200 times, blacklist it

    import time
    point = 0  # file pointer
    while True:
        ips = []  # all IP addresses in this window
        blk_set = set()  # IPs to add to the blacklist
        wit
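The truncated fragment above could be completed along these lines. This is a sketch under the stated requirement; the blacklist_from_window name, the threshold constant, and the hypothetical read_new_lines_since helper are illustrative:

```python
# Count hits per IP over the lines read in one 60 s window and
# blacklist every IP with more than 200 hits.
from collections import Counter

THRESHOLD = 200  # hits per window before an IP is blacklisted

def blacklist_from_window(lines, threshold=THRESHOLD):
    """Each line starts with an IP; return the set of IPs over the threshold."""
    counts = Counter(line.split()[0] for line in lines if line.strip())
    return {ip for ip, n in counts.items() if n > threshold}

# One pass of the outer loop would then look like:
#   while True:
#       window_lines = read_new_lines_since(point)   # hypothetical helper
#       blk_set |= blacklist_from_window(window_lines)
#       time.sleep(60)

window = ['1.1.1.1 - - [..] "GET /"'] * 201 + ['2.2.2.2 - - [..] "GET /"'] * 5
print(blacklist_from_window(window))  # {'1.1.1.1'}
```

Counter does the per-IP tally in one pass, which avoids building and rescanning the intermediate list the original outline describes.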
the problem of a "SQL too long" error (1 M by default).
Scenario three: query the data for the whole large time window directly, without aggregating in the database; bring the raw rows back and do the statistics in code (that is, in memory). Traverse the returned List and, for each entity, walk the time segments and match the entity to its segment, building a Map(key, list) where the key is the segment number and the list holds the entities of that segment. Finally, compute the statistics for each segment. This third scenario takes only 1-2 seconds, which again proves
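The in-memory grouping in scenario three can be sketched as follows. The segment length and the (timestamp, payload) row shape are assumptions for illustration:

```python
# Group entities into time segments in memory, building a map from
# segment number to the entities falling into that segment.
from collections import defaultdict

SEGMENT_SECONDS = 3600  # assume one segment per hour

def group_by_segment(entities, start_ts, seg_len=SEGMENT_SECONDS):
    """entities: list of (timestamp, payload); returns {segment: [payloads]}."""
    segments = defaultdict(list)
    for ts, payload in entities:
        key = (ts - start_ts) // seg_len   # which segment this entity falls in
        segments[key].append(payload)
    return dict(segments)

rows = [(0, "a"), (10, "b"), (3600, "c"), (7300, "d")]
print(group_by_segment(rows, start_ts=0))
# {0: ['a', 'b'], 1: ['c'], 2: ['d']}
```

This is a single O(n) pass over the rows, which is why moving the grouping out of SQL and into memory can be fast for a large time window.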
1 Overview
The ELK stack refers to the combination of Elasticsearch, Logstash, and Kibana. Together, these three pieces of software form a log analysis and monitoring toolchain.
2 Environment Preparation 2.1 Firewall Configuration
In order to use the HTTP services normally, you need to shut down the firewall:

    # service iptables stop
Alternatively, you can leave the firewall running and open the required ports instead.