In real-world applications you often run into special requirements, such as wanting to show news, weather, and so on. As a personal site, or a small site with limited resources, you simply don't have the manpower or money to produce all that content yourself. So what can you do?
Fortunately, the Internet is built on resource sharing: a program can automatically crawl content from another site, process it, and put it to our own use.
What do you crawl with? You don't need to ask anyone for help; PHP already has this capability built in, namely the cURL library. Take a look at the code below:
<?php
// Fetch Sina's news page and save the response body to a local file.
$ch = curl_init("http://dailynews.sina.com.cn");
$fp = fopen("php_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);   // write the response into $fp
curl_setopt($ch, CURLOPT_HEADER, 0);   // do not include the HTTP headers
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
Sometimes curl_exec() reports an error even though the file has in fact been downloaded! I asked around among English-speaking developers and got no answer, so I don't think it matters; simply suppressing the error (for example with PHP's @ error-control operator) works around it. With the page fetched, a careful analysis of the downloaded text would let you quietly scrape Sina's news. Still, it's better not to! To avoid legal disputes, the point here is only to show you how powerful PHP's functions are, and how much you can do with them!
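Instead of writing straight to a file, it is often more convenient to capture the page into a string variable so it can be parsed. The sketch below is an assumption about how that variant might look; the `$txt` variable and the error check are illustrative additions, not part of the original listing. `CURLOPT_RETURNTRANSFER` makes curl_exec() return the response body rather than printing it:

```php
<?php
// Sketch: fetch a page into a string for later analysis.
// The variable name $txt and the error handling are hypothetical additions.
$ch = curl_init("http://dailynews.sina.com.cn");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return body as a string
curl_setopt($ch, CURLOPT_HEADER, 0);            // omit HTTP headers

$txt = curl_exec($ch);
if ($txt === false) {
    // Checking curl_error() is cleaner than silencing warnings with @.
    echo "cURL error: " . curl_error($ch) . "\n";
}
curl_close($ch);

// $txt now holds the raw HTML, ready to be parsed with string functions,
// regular expressions, or a DOM parser.
?>
```

The design trade-off is simple: `CURLOPT_FILE` streams straight to disk with low memory use, while `CURLOPT_RETURNTRANSFER` keeps the whole page in memory so you can analyze it in the same script.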