The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
import re

url = 'http://www.qiushibaike.com/hot/page/'
# first = re.compile(r'<div class="content"[^>]*>.*?(?=</div>)')
first = re.compile(r'<div class="content".*?(?=</div>)')
second = re.compile(r'(?<=>).*')

def main():
    rec_count = 5          # how many items to show per batch
    total = 1              # running item counter
    ipage = 1              # current page number
    while True:
        content = urllib2.urlopen(url + str(ipage)).readlines()
        alls = ''
        for s in content:
            alls += s.strip()          # join the whole page into one line
        # print first.findall(alls)
        ipage += 1
        fs = first.findall(alls)
        thispage = [second.findall(s.strip())[0] for s in fs if s]
        for i, p in enumerate(thispage):
            print total, '.', p
            total += 1
            if (i + 1) % rec_count == 0:
                raw_input('\npress any key to show more\n')

if __name__ == '__main__':
    main()
The code is quite simple: it defines a single main() function and calls it at the end. A few points worth noting:
findall() is a function in the re module that returns all non-overlapping matches of a pattern as a list.
strip() with no argument removes leading and trailing whitespace (including '\n', '\r', '\t', and spaces); with an argument, it removes the given characters from both ends of the string.
enumerate() is more elegant and concise than an expression like range(len(lst)) when you need both the index and the element while iterating over a list.
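The three points above can be seen in a short, self-contained snippet. This is a sketch in Python 3 syntax (unlike the Python 2 scraper above), and the sample HTML string is invented for illustration:

```python
import re

# A made-up fragment standing in for a downloaded page.
html = '<div class="content">first joke</div>  <div class="content">second joke</div>'

# findall() returns every non-overlapping match as a list.
pattern = re.compile(r'<div class="content">(.*?)</div>')
matches = pattern.findall(html)
print(matches)                      # ['first joke', 'second joke']

# strip() with no argument removes leading/trailing whitespace.
print('  \t hello \n'.strip())      # 'hello'
# With an argument it removes the given characters from both ends.
print('xxhellox'.strip('x'))        # 'hello'

# enumerate() yields (index, element) pairs.
for i, joke in enumerate(matches):
    print(i, joke)
```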
Note: one imperfection in the Python code above is that it does not remove <br> tags from the matched lines; that is easy to fix with replace() or another regex. PHP's strip_tags() has a clear advantage here, since Python has no built-in strip_tags() function.
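Although Python has no built-in strip_tags(), a minimal equivalent is one regex. This is a naive sketch (the strip_tags name is my own helper, not a standard function, and it will not cope with every malformed-HTML edge case):

```python
import re

def strip_tags(html):
    # Naively remove anything that looks like an HTML tag.
    return re.sub(r'<[^>]+>', '', html)

print(strip_tags('line one<br>line two<br/>'))   # line oneline two
print(strip_tags('<b>bold</b> text'))            # bold text
```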
Reflection:
Implementing the same thing in PHP would also be fairly simple. The idea is outlined below; I won't write out the full code:
The outline is as follows:
1. Fetch the page content with file_get_contents("http://www.111cn.net/");
2. Match all the target content with preg_match_all();
3. Remove all HTML tags with strip_tags().
Note: many hosts disable the file_get_contents() function, in which case you have to fetch the page with PHP cURL instead, which makes the code somewhat longer.
Finally, I upgraded the script to also download the images:
The code is as follows:
from sgmllib import SGMLParser
import urllib2

class SGM(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.srcs = []
        self.istrue = True

    def start_div(self, attrs):
        # entering the author div: stop collecting image URLs
        for k, v in attrs:
            if v == 'author':
                self.istrue = False

    def end_div(self):
        self.istrue = True

    def start_img(self, attrs):
        for k, v in attrs:
            if k == 'src' and self.istrue:
                self.srcs.append(v)

    def download(self):
        for src in self.srcs:
            f = open(src[-12:], 'wb')    # last 12 characters of the URL as the file name
            print src
            img = urllib2.urlopen(src)
            f.write(img.read())
            f.close()

sgm = SGM()
for page in range(1, 500):
    url = 'http://www.qiushibaike.com/late/page/%s?s=4622726' % page
    data = urllib2.urlopen(url).read()
    sgm.feed(data)
sgm.download()
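Note that sgmllib was removed in Python 3. The same idea can be sketched with the standard html.parser module instead. This is a sketch under the same assumption as the original (images to skip live inside a div whose class is "author"); the sample HTML and its URLs are invented for illustration, and the download step is omitted:

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collect <img src=...> URLs, skipping images inside an author <div>."""
    def __init__(self):
        super().__init__()
        self.srcs = []
        self.istrue = True          # False while inside the author div

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'div' and attrs.get('class') == 'author':
            self.istrue = False
        elif tag == 'img' and self.istrue and 'src' in attrs:
            self.srcs.append(attrs['src'])

    def handle_endtag(self, tag):
        if tag == 'div':
            self.istrue = True

# Invented sample page for illustration.
sample = (
    '<div class="author"><img src="http://example.com/avatar.jpg"></div>'
    '<div class="content"><img src="http://example.com/photo.jpg"></div>'
)
p = ImgCollector()
p.feed(sample)
print(p.srcs)        # ['http://example.com/photo.jpg']
```

As in the original, a plain boolean flag is enough here because the author div is not nested inside another div; a depth counter would be safer for arbitrary pages.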