Web crawler
As a first stop, I consulted some reference material and wrote my first simple web crawler. It is small, but it has all the essential pieces.
The crawler takes an image keyword and a picture count as input, downloads that many matching images from Baidu Image search, and saves them to a local folder named after the keyword. Building it touched on several areas: first, basic Python knowledge; second, use of Python's urllib2 library; third, handling the various network errors that come up while crawling; fourth, the garbled Chinese text (encoding) problem on Windows; and fifth, Python file operations.
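On the network-error point: the code below targets Python 2's urllib2, which raises HTTPError and URLError exceptions that a crawler has to catch and recover from. As a minimal illustrative sketch (not the code from this post), here is the same idea in Python 3, where urllib2 was merged into urllib.request and urllib.error; the function name fetch is my own:

```python
import urllib.request
import urllib.error

def fetch(url, headers=None, timeout=5):
    """Fetch a URL; return the response body bytes, or None on any network error."""
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        # The server answered, but with an error status (403, 404, 500, ...)
        print("HTTP error:", e.code)
    except urllib.error.URLError as e:
        # No usable answer at all: DNS failure, refused connection, timeout, ...
        print("URL error:", e.reason)
    return None

# A hostname under the reserved .invalid TLD can never resolve,
# so this exercises the URLError branch and returns None.
print(fetch("http://nonexistent.invalid/"))
```

Swallowing the error and returning None, as the crawler below also does, lets one bad image link be skipped without aborting the whole download run.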
The complete code comes first, followed by a brief explanation:
# -*- coding: utf-8 -*-
"""
Created on Thu Aug 19:50:42
@author: Administrator
"""
import re
import os
import urllib2
import cookielib
import sys


class BDImg:
    baseURL = ("http://image.baidu.com/search/index?tn=baiduimage&cl=2&lm=-1"
               "&st=-1&sf=1&ic=0&nc=1&se=1&showtab=0&fb=0&face=0&istype=2"
               "&ie=utf-8")
    maxNum = 10
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                             'Chrome/45.0.2454.101 Safari/537.36'}

    # Initialize with the search keyword and the number of images to download
    def __init__(self, keyword, maxNum):
        self.baseURL = self.baseURL + "&word=" + keyword
        self.keyword = keyword
        self.maxNum = maxNum

    # Fetch the page source at the given result offset
    def getSourceCode(self, offset):
        request = urllib2.Request(self.baseURL + "&pn=" + str(offset))
        response = urllib2.urlopen(request)
        return response.read().decode("utf-8")

    # Download the image data for a given image URL
    def getFileData(self, url):
        try:
            cj = cookielib.LWPCookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
            urllib2.install_opener(opener)
            req = urllib2.Request(url=url, headers=self.headers)
            operate = opener.open(req)
            data = operate.read()
            return data
        except BaseException as e:
            print e
            return None

    # Search by keyword and save the image data
    def search(self):
        i = 0
        pageOffset = 1
        # Create the folder for this keyword if it does not exist yet
        if not os.path.exists(self.keyword):
            os.makedirs(self.keyword)
        while i < self.maxNum:
            source = self.getSourceCode(pageOffset)
            pattern = re.compile('"objURL":"(.*?)",', re.S)
            items = re.findall(pattern, source)
            if len(items) <= 0:
                break
            for item in items:
                data = self.getFileData(item)
                if data is not None:
                    fp = open(self.keyword + "/" + str(i) + ".jpg", 'wb')
                    fp.write(data)
                    fp.flush()
                    fp.close()
                    print item, " ", str(i)
                    i += 1
                    if i >= self.maxNum:
                        break
            pageOffset += len(items)


reload(sys)
sys.setdefaultencoding('utf8')
key = str(raw_input(u"Please enter the keyword for the image search:").decode("GBK"))
num = int(raw_input(u"Enter picture quantity:"))
bdImg = BDImg(key, num)
bdImg.search()
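The heart of the crawler is the "objURL" extraction: the Baidu result page embeds each full-size image link in a JSON-like "objURL":"..." field, which the code above pulls out with a non-greedy regular expression. A minimal, self-contained sketch of that step in Python 3 (the sample string is made up to mimic the shape of the scraped page source, which may have changed since this post was written):

```python
import re

# Same pattern as in the crawler: lazily capture everything between
# "objURL":" and the closing quote-comma.
OBJURL_PATTERN = re.compile(r'"objURL":"(.*?)",', re.S)

def extract_image_urls(page_source):
    """Return all objURL image links found in the page source."""
    return OBJURL_PATTERN.findall(page_source)

# Fabricated snippet shaped like the scraped page source:
sample = ('{"objURL":"http://example.com/a.jpg","width":1},'
          '{"objURL":"http://example.com/b.jpg","width":2},')
print(extract_image_urls(sample))
```

The non-greedy `(.*?)` matters: a greedy `(.*)` would swallow everything up to the last `",` on the page and return one giant bogus match instead of one URL per image.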
One thing worth noting: in the Baidu image search URL I rely on two query fields. One is word, which carries the search keyword; the other is pn, which gives the offset of the first picture shown on the current page. Baidu Images has no page-number parameter, since paging is done by automatically loading more results as you scroll down; experimentation showed that changing the pn value achieves the same paging effect.
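To make the word/pn mechanics concrete, here is a small sketch of how such a paged search URL can be assembled. The base URL is shortened, the page size of 20 is an assumption for illustration (the real per-request batch is whatever length the previous result list had, which is why the crawler advances pn by len(items)), and urllib.parse.urlencode handles percent-encoding of the keyword:

```python
from urllib.parse import urlencode

# Trimmed-down version of the search endpoint used in the code above.
BASE = "http://image.baidu.com/search/index?tn=baiduimage&ie=utf-8"

def page_url(keyword, page_index, page_size=20):
    # word: the search keyword; pn: offset of the first image on this "page".
    # page_size is an assumed batch size, purely for this illustration.
    params = urlencode({"word": keyword, "pn": page_index * page_size})
    return BASE + "&" + params

print(page_url("cat", 0))
print(page_url("cat", 2))
```

Requesting successive pn offsets walks through the result stream exactly as the browser's infinite scroll would.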