A common scenario in development: the browser requests an interface and gets data back, and on the PHP side we often simulate the same request by opening a connection with fsockopen(), sending the request, and then reading the response with fgets().
However, the PHP-simulated request is often much slower than the same interface accessed from a browser. This problem puzzled me for a long time, and today I finally found the cause. Many people online seem to run into the same issue, so I am sharing it here for reference.
We often write code like this:
while (!feof($sHnd)) {
    $line = fgets($sHnd, 4096);
}
fgets() reads from the file handle $sHnd until it has collected 4096 bytes (strictly, at most 4095, since the length argument leaves room for a terminating null) or until it meets a newline; if a newline shows up before the limit is reached, it returns what it has read so far, up to and including that newline.
Many documentation examples and tutorials write it exactly this way, and in most situations the code works. When I traced how long each PHP statement took, I found that the first several fgets() calls return very quickly; the expensive one is the last fgets(), which sometimes takes a few seconds and sometimes more than ten.
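To see where the time goes, a rough way to reproduce this measurement (a hedged sketch, not the article's original code; the host and path are placeholders) is to wrap each fgets() call with microtime(true):

$sHnd = fsockopen('api.example.com', 80, $errno, $errstr, 5); // placeholder host
fwrite($sHnd, "GET /some/path HTTP/1.1\r\nHost: api.example.com\r\n\r\n"); // placeholder request
while (!feof($sHnd)) {
    $t = microtime(true);
    $line = fgets($sHnd, 4096);
    // Each read prints its own duration; the final read is the slow one.
    printf("fgets took %.3f s\n", microtime(true) - $t);
}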
The culprit turns out to be the server's KeepAlive feature. Apache has such a setting (Nginx and other web servers have it too): when KeepAlive is On, the server does not close the TCP connection after a request completes; it keeps the connection open so the browser can reuse it directly the next time it sends a request. But when fgets() has read the last piece of the response without seeing a newline, and the end-of-file flag (checked by feof()) has not been set either, it keeps waiting after the content has been consumed, hoping either to meet a newline or to accumulate 4096 bytes. So the server and the PHP client sit there waiting for each other.

After a while the server gives up, since its connection slots are valuable: once a connection has been idle for several seconds it closes it (in Apache this is controlled by KeepAliveTimeout, usually set to 5-15 seconds). Only after the server closes the connection does the blocked fgets() finally return the last batch of content, and the interface call ends.
Once the cause of the slowness is clear, the solution follows:
The HTTP response headers returned by the server include a Content-Length field. Once the amount of body we have received equals that length, we know the whole response has arrived and there is nothing left to wait for. Concretely, each read asks for no more than the remaining length (min(4096, $leftLength)), and when the remaining length reaches 0 we break out of the while (!feof($sHnd)) loop instead of waiting for feof() to become true.
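A minimal sketch of this approach, assuming $sHnd is an open socket on which the request has already been sent and the response is not chunked (the header parsing is deliberately simplified):

$contentLength = 0;
// Read the response headers line by line and pick out Content-Length.
while (($line = fgets($sHnd, 4096)) !== false) {
    if (preg_match('/^Content-Length:\s*(\d+)/i', $line, $m)) {
        $contentLength = (int)$m[1];
    }
    if (rtrim($line) === '') { // blank line marks the end of the headers
        break;
    }
}
// Read the body in chunks no larger than what is still missing, so the
// last fgets() never sits waiting for data that will never arrive.
$body = '';
$leftLength = $contentLength;
while ($leftLength > 0 && !feof($sHnd)) {
    // fgets() reads at most length - 1 bytes, hence the + 1.
    $chunk = fgets($sHnd, min(4096, $leftLength + 1));
    if ($chunk === false) {
        break;
    }
    $body .= $chunk;
    $leftLength -= strlen($chunk);
}

This relies on the server actually sending Content-Length; if it responds with Transfer-Encoding: chunked instead, the body would have to be decoded differently.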
With this change, the slowness of accessing the interface through a socket in PHP is basically solved. But it is still not perfect; the next optimization also revolves around KeepAlive.
We all know that most of the time spent calling an interface goes into establishing the connection, so if we need to call the interface frequently there is plenty of room for optimization. Since the server already keeps the connection open, if PHP keeps it open as well, do we still need to establish a new connection each time? No: on the first call, open the connection with pfsockopen() (its only difference from fsockopen() is that it creates a persistent connection), do not close it with fclose() when the call finishes, and reuse it for subsequent calls. In code: first check whether a connection already exists; if it does, keep using it; if not, create one with pfsockopen(), as sketched below.
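A minimal sketch of that idea; the function name, host, and timeout are made up for illustration:

function getApiConnection($host = 'api.example.com', $port = 80)
{
    static $sHnd = null;
    // Reuse the handle opened earlier in this process if we already have one.
    if (is_resource($sHnd)) {
        return $sHnd;
    }
    // pfsockopen() works like fsockopen(), except that the connection is
    // persistent: it is not torn down when the script finishes, so later
    // requests can pick it up again. Do not fclose() it after each call.
    $sHnd = pfsockopen($host, $port, $errno, $errstr, 5);
    return $sHnd;
}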
In addition, if the interface returns a fairly short response (say, fewer than 50 characters), there is one more thing to optimize: remove gzip from the Accept-Encoding header of the HTTP request. That header tells the server that we (like a browser) can accept compressed content; if the server supports gzip it compresses the response, and the client decompresses it before using it. But when the content is very short, compression actually makes it larger, and once you add the compression and decompression time on top, it is more trouble than it is worth.
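For example, a hedged sketch of building the request headers, where advertising gzip is skipped for calls that are expected to return only a few characters (the host, path, and flag name are assumptions):

$expectShortResponse = true; // e.g. the interface only returns "ok" or a short code
$request  = "GET /api/status HTTP/1.1\r\n";
$request .= "Host: api.example.com\r\n";
if (!$expectShortResponse) {
    // Only ask for compression when the body is large enough to benefit from it.
    $request .= "Accept-Encoding: gzip\r\n";
}
$request .= "Connection: keep-alive\r\n\r\n";
fwrite($sHnd, $request);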
After the steps above, accessing the interface from PHP should be about as fast as from the browser, and in theory even a little faster.