I've been doing some performance troubleshooting lately. The idea is to analyze the Nginx access log to get each URL's response time and request count, which tells you whether a slow request is caused by concurrency or is simply slow on its own. If the application itself is slow, you just find the corresponding code and optimize it.
I found a few causes, and they basically came down to the back end running too many SQL queries. A single visit looked fine (20-200 milliseconds), but the more concurrent users there were, the slower it got (200-6000 milliseconds). After optimization it stays at a few dozen milliseconds. The strategy was to cut unnecessary SQL and add caching, which basically solved the stuttering. Along the way I used a series of commands, which I'm recording here as a summary.
To get the request processing time, you need to add $request_time to the Nginx log. Below is my log_format:
nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent $request_body "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" $request_time';
After making the change, restart Nginx and look at the log: you can now see how long Nginx took to process each request. This time is essentially the time spent in the back end, so you can use this field to find slow requests.
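As a quick sanity check, here is how that field can be pulled out of a single log line with awk. The sample line and its 0.214 timing value below are made up for illustration:

```shell
# A made-up log line in the format above; 0.214 stands in for $request_time.
line='10.0.0.1 - - [21/Apr/2017:13:28:55 +0800] "GET /api/list HTTP/1.1" 200 512 - "-" "curl/7.47" "-" 0.214'

# $request_time is the last whitespace-separated field, so awk's $NF grabs it.
echo "$line" | awk '{print $NF}'
# → 0.214
```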
Here are some of the commands I've used.
Get the PV (page view) count
$ cat /usr/local/nginx/logs/access.log | wc -l
Get the number of unique IPs
$ cat /usr/local/nginx/logs/access.log | awk '{print $1}' | sort -k1 -r | uniq | wc -l
Get the top 10 most time-consuming requests (time, URL, duration). Change the number at the end to get more, or drop the head to get all of them.
$ cat /usr/local/class/logs/access.log | awk '{print $4,$7,$NF}' | awk -F'"' '{print $1,$2,$3}' | sort -k3 -rn | head -10
To get the number of requests at a given moment: strip off the seconds to get per-minute data, strip the minutes to get hourly data, and so on.
$ cat /usr/local/class/logs/access.log | grep 2017:13:28:55 | wc -l
Get the number of requests per minute, output it as a CSV file, and then open it in Excel to generate a histogram:
$ cat /usr/local/class/logs/access.log | awk '{print substr($4,14,5)}' | uniq -c | awk '{print $2","$1}' > access.csv
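To see what that pipeline actually does, here is a run on three made-up log lines spanning two minutes; substr($4,14,5) keeps only the hh:mm part of the [dd/Mon/yyyy:hh:mm:ss timestamp:

```shell
# Three fabricated log lines: two requests at 13:28, one at 13:29.
printf '%s\n' \
  'a - - [21/Apr/2017:13:28:01 +0800] "GET / HTTP/1.1" 200 1 - "-" "-" "-" 0.1' \
  'a - - [21/Apr/2017:13:28:30 +0800] "GET / HTTP/1.1" 200 1 - "-" "-" "-" 0.1' \
  'a - - [21/Apr/2017:13:29:02 +0800] "GET / HTTP/1.1" 200 1 - "-" "-" "-" 0.1' |
  awk '{print substr($4,14,5)}' |  # keep only hh:mm from the timestamp field
  uniq -c |                        # count consecutive identical minutes
  awk '{print $2","$1}'            # reorder into "minute,count" CSV rows
# → 13:28,2
#   13:29,1
```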
The chart above was generated in Excel. You can also use the command-line tool gnuplot to generate a PNG; I tried it and it works fine, and it lets you produce the report entirely programmatically, removing the manual steps, which is very convenient. The one drawback is that when the x-axis has many data points, gnuplot cannot automatically thin them out the way Excel does, so I still prefer to generate the chart in Excel.
All of this really comes down to just a few commands:
cat: output file contents
grep: filter text
sort: sort lines
uniq: deduplicate
awk: text processing
The commands are combined with pipes: the output of one command becomes the input of the next, processing the data as a stream, and a single command can appear several times to apply multiple rounds of filtering. Once you learn these few commands, many seemingly complex tasks become remarkably simple.
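As a small end-to-end illustration of this composition (the mini log below is fabricated), here is a pipeline that finds the busiest client IP among successful requests:

```shell
# A tiny fabricated log: three requests, two distinct IPs, one 500 error.
printf '%s\n' \
  '10.0.0.1 - - [t] "GET /a HTTP/1.1" 200 1' \
  '10.0.0.2 - - [t] "GET /b HTTP/1.1" 500 1' \
  '10.0.0.1 - - [t] "GET /c HTTP/1.1" 200 1' > /tmp/mini.log

cat /tmp/mini.log |    # read the log
  grep '" 200 ' |      # keep only 200 responses
  awk '{print $1}' |   # take the client IP field
  sort | uniq -c |     # count requests per IP
  sort -rn             # busiest IP first
# prints 10.0.0.1 with a count of 2
```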
Everything above is plain commands. Next, a way to output HTML directly: using GoAccess to analyze the Nginx log.
$ cat /usr/local/nginx/logs/access.log | docker run --rm -i diyan/goaccess --time-format='%H:%M:%S' --date-format='%d/%b/%Y' --log-format='%h %^[%d:%t %^] "%r" %s %b "%R" "%u"' > index.html
GoAccess here runs as a Docker container, so as long as Docker is installed you can run it directly; there is nothing else to install.
Combine the command above with daily log rotation, then configure a crontab entry to run it automatically, and you get a daily Nginx report that gives a clear picture of the site. The drawback, of course, is that it is not real time.
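For reference, a crontab entry along these lines could drive the daily report. The script name and paths here are assumptions for illustration, not from my actual setup:

```
# Hypothetical: run at 00:10 daily, after log rotation; gen_report.sh is
# assumed to wrap the goaccess command above against yesterday's log.
10 0 * * * /usr/local/nginx/scripts/gen_report.sh >> /var/log/nginx-report.log 2>&1
```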
If you want real-time statistics, you can use ngxtop; installation is also very simple:
$ pip install ngxtop
To run it, first cd into the Nginx directory, then start it; -c specifies the config file and -t the refresh interval in seconds:
$ cd /usr/local/nginx
$ ngxtop -c conf/nginx.conf -t 1
But this real-time approach still requires logging in over SSH, which is not convenient. You can instead use Lua to gather statistics in real time and then write an interface to expose the data. With lua-nginx-module, Nginx/Tengine can do this; if you install OpenResty directly it is even more convenient, since Lua support is built in and there is no need to recompile Nginx.
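A minimal sketch of what such a Lua counter could look like under OpenResty / lua-nginx-module. The shared dict name, its size, and the /stats endpoint path are my own assumptions for illustration, not from the original setup:

```nginx
http {
    # Shared memory zone for counters (name and size are arbitrary choices)
    lua_shared_dict stats 10m;

    server {
        listen 80;

        location / {
            # Count every request after it is served
            log_by_lua_block {
                ngx.shared.stats:incr("requests", 1, 0)
            }
        }

        # Hypothetical endpoint exposing the counter as plain text
        location /stats {
            content_by_lua_block {
                ngx.say(ngx.shared.stats:get("requests") or 0)
            }
        }
    }
}
```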