When a site is slow, there are a few likely causes to consider:
1. Slow execution of application code
2. A large number of database operations
3. Slow DNS resolution of the domain name
4. The server environment
I ran into this problem myself; the steps below are how I worked through it.
1. Open the slow page and watch how it loads, using a Firefox developer plug-in (such as Firebug) or IE's element-inspection tool. They display the page's loading information as a waterfall, showing how many seconds each element takes to load. Remote JavaScript files that take a long time can be downloaded and served locally, or simply removed.
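Outside the browser, curl can break a single fetch into timed phases (DNS lookup, connect, total transfer). As a runnable sketch, the example below fetches a temporary local file via a file:// URL so it works offline; point it at an element of your real page (e.g. http://your-site/slow.js) to profile it.

```shell
# Write a small temp file to fetch, so the example needs no network.
tmp=$(mktemp)
echo 'hello' > "$tmp"

# -w prints per-phase timings after the transfer; -o /dev/null -s keeps
# the body and progress bar out of the way.
timing=$(curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} total:%{time_total}' "file://$tmp")
echo "$timing"
rm -f "$tmp"
```

Against a real URL, a large `dns:` value points at resolution problems and a large gap between `connect:` and `total:` points at a slow server or transfer.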
2. Next I looked through the page and found many places that opened database connections, including connections to a remote database, plus redundant connection code. Without further ado, I cleaned them up; the page was noticeably faster, but still not ideal. So I ran the page's database code directly on the database server and confirmed it was not slow there.
3. DNS is only one possible cause, so don't rush to blame the domain registrar. Put up a page with no database operations under the same server's domain name and see whether it is also slow to load. If it is, have other people test it as well, preferably people who are not on your company's network.
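To check whether resolution itself is the bottleneck, you can time just the name-lookup step, separate from any HTTP request. This minimal sketch uses localhost so it works offline; substitute your own domain to get a real number.

```shell
# Timestamp in nanoseconds before and after a lookup (GNU date).
start=$(date +%s%N)
# getent uses the system resolver, the same path a browser's lookup takes.
# localhost resolves via /etc/hosts; replace it with your domain.
getent hosts localhost > /dev/null
end=$(date +%s%N)

dns_ms=$(( (end - start) / 1000000 ))
echo "resolution took ${dns_ms} ms"
```

A lookup for your domain that consistently takes hundreds of milliseconds suggests a DNS problem rather than a server problem.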
4. Then I looked at the server itself: was high CPU usage the cause?
A. top showed CPU usage was not high, around 30%, but it did reveal a problem: a large number of processes in the sleeping state. Hopefully none of them are zombie processes; fortunately those are rare these days.
B. Check the number of connections in TIME_WAIT with the command: netstat -ae | grep TIME_WAIT. They came from mysqld and httpd, most of them from httpd.
How do you bring the TIME_WAIT count down? Here is the workaround:
vi /etc/sysctl.conf
Edit the file and add the following:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 30    # how long to keep an idle connection alive
net.ipv4.tcp_max_tw_buckets = 100   # maximum number of TIME_WAIT sockets the server keeps at once
Then run /sbin/sysctl -p to make the parameters take effect.
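After running sysctl -p, you can confirm the kernel actually picked up the values by reading them back from /proc, where each net.ipv4.* key maps to a file (a quick check, assuming a standard Linux /proc layout):

```shell
# Each sysctl key net.ipv4.X is exposed as /proc/sys/net/ipv4/X.
vals=$(for key in tcp_fin_timeout tcp_keepalive_time tcp_max_tw_buckets; do
  printf 'net.ipv4.%s = %s\n' "$key" "$(cat /proc/sys/net/ipv4/$key)"
done)
echo "$vals"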
Then set the Apache configuration file:
Timeout 10                # seconds before an idle client connection times out
KeepAlive On              # allow multiple HTTP requests over a single connection
MaxKeepAliveRequests 50   # maximum number of requests allowed per connection
KeepAliveTimeout 15       # after completing a request, seconds to wait for the next one before disconnecting
Save and restart Apache.
After applying the settings, check again with:
netstat -n | awk '/^tcp/ {++s[$NF]} END {for (i in s) print i, s[i]}'
You should see a clear improvement.
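The awk one-liner above tallies connections by state: the state (TIME_WAIT, ESTABLISHED, ...) is the last field of each tcp line, so it is used as the array key. As a verifiable sketch, here it runs against a canned sample of netstat output (the addresses are made up) instead of a live server:

```shell
# Three fake netstat lines: two TIME_WAIT, one ESTABLISHED.
sample='tcp        0      0 10.0.0.1:80    10.0.0.2:51000  TIME_WAIT
tcp        0      0 10.0.0.1:80    10.0.0.3:51001  TIME_WAIT
tcp        0      0 10.0.0.1:3306  10.0.0.4:51002  ESTABLISHED'

# $NF is the last field (the connection state); count per state.
counts=$(printf '%s\n' "$sample" | awk '/^tcp/ {++s[$NF]} END {for (i in s) print i, s[i]}')
echo "$counts"
```

On a real server, run the netstat version and watch whether the TIME_WAIT count drops after the sysctl and Apache changes.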
If you are still not satisfied, you can also raise the ulimit parameters:
cat >> /etc/security/limits.conf << EOF
* soft nofile 655350
* hard nofile 655350
EOF
Then run ulimit -SHn 655350 to make it take effect in the current shell.
OK — checking again, things look much better.
If you find that most of the connections come from mysqld instead, you can tune MySQL's performance: see MySQL performance tuning.
Good, that's done. But looking at the TIME_WAIT connections again (netstat -agn), I found the Baidu robot was misbehaving quite badly.
In the end I could only, reluctantly, block the Baidu spider in robots.txt for the time being; this is only a temporary fix.
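For reference, blocking a single crawler in robots.txt looks like this (Baiduspider is the user-agent string Baidu's crawler identifies itself with):

```text
User-agent: Baiduspider
Disallow: /
```

Well-behaved crawlers only re-fetch robots.txt periodically, so the effect is not immediate.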
After that, all I could do was hurry and make the homepage fully static. My god. That's where it ends.
This article was translated from: http://www.myhack58.com/Article/sort099/sort0100/2013/41473.htm
Troubleshooting methods and solutions for slow website access