I need to write an example that downloads an e-book from an e-book website.
On that site, each page of a book is stored as a single image, so a whole book is just a long series of images, and I need to download those images in batch.
Here is my code:
public function download()
{
    // \Org\Net\Http is ThinkPHP 3.x's HTTP helper class
    $http = new \Org\Net\Http();
    // All pages of the book live under this prefix: 001.htm, 002.htm, ...
    $url_pref = "http://www.dzkbw.com/books/rjb/dili/xc7s/";
    // Local directory to save the images into
    $localUrl = "public/bookcover/";
    // Each page embeds its picture via a showImg('...') call; capture the URL
    $reg = "|showImg\('(.+)'\);|";
    $i = 1;
    do {
        // Build the zero-padded page name, e.g. 001.htm
        $filename = substr("000" . $i, -3) . ".htm";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url_pref . $filename);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        $html = curl_exec($ch);
        curl_close($ch);
        // $result is 1 while the fetched page still contains a picture
        $result = preg_match_all($reg, $html, $out, PREG_PATTERN_ORDER);
        if ($result == 1) {
            $picUrl = $out[1][0];
            $picFilename = substr("000" . $i, -3) . ".jpg";
            // curlDownload() saves the remote image to the local path
            $http->curlDownload($picUrl, $localUrl . $picFilename);
        }
        $i = $i + 1;
    } while ($result == 1);
    echo "Download complete";
}
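The curlDownload() call above relies on ThinkPHP 3.x's \Org\Net\Http class. If that class is not available, the image-saving step can be sketched with plain curl; note that savePicture() below is a hypothetical helper name of mine, not a ThinkPHP or library API:

```php
<?php
// Minimal sketch, assuming plain PHP with the curl extension:
// fetch one image URL and write the raw bytes to a local file.
// savePicture() is a hypothetical name, not part of any framework.
function savePicture($picUrl, $localPath)
{
    $ch = curl_init($picUrl);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow redirects to the real image
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    $data = curl_exec($ch);
    $ok = ($data !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
    curl_close($ch);
    if ($ok) {
        file_put_contents($localPath, $data); // write the image bytes to disk
    }
    return $ok;
}
```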
As an example I am using the seventh-grade geography textbook: http://www.dzkbw.com/books/rjb/dili/xc7s/001.htm
The pages start at 001.htm and the number simply increments from there.
Each page contains one image that shows the corresponding page of the textbook, so the book's content is presented entirely as pictures.
My code loops from the first page, fetches each page's HTML, extracts the image it contains, and saves it to the local server; it stops as soon as it reaches a page where no image can be found.
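The three-digit page names described above (001.htm, 002.htm, ...) can be built either with the substr trick my code uses or, a bit more readably, with sprintf:

```php
<?php
// Two equivalent ways to build the zero-padded page name for page $i.
$i = 7;
$nameA = substr("000" . $i, -3) . ".htm"; // prefix zeros, keep the last 3 chars
$nameB = sprintf("%03d.htm", $i);         // printf-style zero padding
// Both give "007.htm". Note the substr trick only works up to page 999;
// the loop stops once a fetched page no longer matches the
// showImg('...') pattern, i.e. once it is past the last textbook page.
```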
The actual effect after crawling (screenshot of the downloaded images not shown):
In short, this is an example written with ThinkPHP: crawl a website's content and save it locally.