In real-world applications we often run into special requirements, such as displaying news or weather forecasts. As a personal site, or a small site with limited resources, we cannot afford the manpower and materials to produce all this content ourselves. So what can we do?
Fortunately, the Internet is all about resource sharing: we can use a program to automatically pull pages from other sites, process them, and put them to our own use.
With what? In fact, PHP has this capability built in, through the cURL library. Take a look at the code below:
<?php
$ch = curl_init("http://dailynews.sina.com.cn");
$fp = fopen("php_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);    // write the response body to this file
curl_setopt($ch, CURLOPT_HEADER, 0);    // do not include HTTP headers in the output
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
Sometimes the call emits errors, even though the page has actually been downloaded! I asked around and never got an answer; the practical fix is simply to prefix the function call with PHP's error-suppression operator @. After that, all we have to do is analyze the saved text ($txt) appropriately, and we can quietly crawl Sina's news! That said, it is better not to, in case of legal disputes. The point here is just to show that PHP's functions are very powerful: you can do a lot of things with them!
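As a sketch of the "analyze $txt" idea above: with CURLOPT_RETURNTRANSFER the page body comes back as a string instead of being written to a file, and failures can be checked explicitly with curl_errno() rather than silenced with @. The extract_title() helper and the way the analysis is done here are illustrative assumptions, not part of the original article.

```php
<?php
// Fetch a page into a string; returns null on failure. (Sketch, not from the article.)
function fetch_page($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
    curl_setopt($ch, CURLOPT_HEADER, 0);            // body only, no HTTP headers
    $txt = curl_exec($ch);
    if (curl_errno($ch)) {                          // check the error explicitly
        fwrite(STDERR, "curl error: " . curl_error($ch) . "\n");
        $txt = null;
    }
    curl_close($ch);
    return $txt;
}

// A hypothetical bit of "analysis": pull the <title> out of the fetched HTML.
function extract_title($html) {
    if (preg_match('/<title>(.*?)<\/title>/is', $html, $m)) {
        return trim($m[1]);
    }
    return null;
}

// Usage:
//   $txt = fetch_page("http://dailynews.sina.com.cn");
//   if ($txt !== null) { echo extract_title($txt); }
?>
```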
"The copyright of this article is owned by the author and house Orso near net, if need to reprint, please specify the author and source"
The above describes how to crawl pages from other sites to supplement your own, including the relevant details; I hope it is helpful to friends interested in PHP tutorials.