For Python beginners, web crawling is one of the best skills to start with, and also one of the most rewarding. While organizing my code today, I sorted out some of the crawler scripts I wrote while learning. This is the second simple example: crawling all the articles on the CSDN blog homepage with Python. Without further ado, let's go straight to the code and explanation.
Step 1: Open the site you want to crawl and inspect the page source. The information we need is the URL of each article, which appears directly in the HTML, so the job is easy: a simple regular expression match is enough. Now let's start crawling.
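To see how the regular expression pulls the article URLs out of the page, here is a minimal sketch on a made-up HTML fragment (the fragment and the `strategy` attribute are assumptions modeled on CSDN's markup at the time of writing):

```python
import re

# Hypothetical fragment resembling the CSDN homepage markup
html = ('<a strategy="recommend" href="https://blog.csdn.net/user/article/details/1" target="_blank">Post 1</a>'
        '<a strategy="recommend" href="https://blog.csdn.net/user/article/details/2" target="_blank">Post 2</a>')

# The non-greedy group (.*?) captures just the URL inside href="..."
pat = '<a strategy=.*?href="(.*?)" target="_blank">'
urls = re.compile(pat).findall(html)
print(urls)
```

The non-greedy quantifiers matter here: a greedy `.*` would swallow everything up to the last `target="_blank"` in the page and return a single bogus match.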
Step 2: The code is very short, so here it is!
import urllib.request
import urllib.error  # needed for URLError in the except clause below
import re

url = 'https://blog.csdn.net/'

# Simulate a browser via a User-Agent header
headers = ("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 BIDUBrowser/8.7 Safari/537.36")
opener = urllib.request.build_opener()
opener.addheaders = [headers]  # add the header
urllib.request.install_opener(opener)  # install the opener globally

# Fetch the page content
data = urllib.request.urlopen(url).read().decode('utf-8')

# Regular expression that captures each article URL
pat = '<a strategy=.*?href="(.*?)" target="_blank">'
res = re.compile(pat).findall(data)

# Save each article to disk
for i in range(len(res)):
    try:
        file = "f:/csdn/" + str(i + 1) + '.html'
        urllib.request.urlretrieve(res[i], file)
        print('Article ' + str(i + 1) + ' crawled successfully!')
    except urllib.error.URLError as e:
        if hasattr(e, "code"):
            print(e.code)
        if hasattr(e, "reason"):
            print(e.reason)
The results of the crawl are as follows:
Crawler Series (2): Crawling all articles on the CSDN blog homepage with Python