Create a crawler in Python to capture beautiful pictures

Source: Internet
Author: User
As a cool programmer, I like to browse galleries of pretty pictures when I have nothing better to do, but opening page after page in a browser is just too low-tech. With this article, you can crawl the pictures and browse them locally instead! There is nothing wrong with going online to look at nice pictures; the trouble is that these galleries are painful to flip through page by page. So today we launch a crawler to take all the pretty pictures down at once! There are two examples: the sister images on Jandan (jandan.net), and the ROSI images from another site. I am just a newbie studying Python; using a bit of technology for this is nothing to be ashamed of, and nothing to feel guilty about!

Jandan (jandan.net):

Let's talk about the procedure first: get the URLs of the Jandan sister-image pages, fetch the page source, extract the image addresses, request each image address, and save the images to the local disk. Ready? Let's take a look at the Jandan page:

We get the URL: http://jandan.net/ooxx/page-1764#comments. Here 1764 is the page number. First we need to get the latest page number, then walk through the pages before it, collecting the image URLs on each one. Below we analyze the page source and write a regular expression for it.
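Before any parsing, the crawl plan itself is just a loop over page URLs. A minimal sketch of that plan (the latest page number 1764 is hard-coded here; the real crawler reads it from the site):

base = "http://jandan.net/ooxx/"
latest = 1764  # stand-in value; __getNewPage() below fetches the real one
for page in range(1500, latest + 1):
    url = base + "page-" + str(page) + "#comments"
    print url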

Based on the method described in the previous article, we write the following function, __getNewPage:

def __getNewPage(self):
    pageCode = self.Get(self.__Url)
    type = sys.getfilesystemencoding()
    # NOTE: the HTML inside this pattern was lost when the article was scraped;
    # the <span class="current-comment-page"> markup is an assumption about how
    # Jandan wraps the current page number, e.g. [1764]
    pattern = re.compile(r'<span class="current-comment-page">\[(.*?)\]</span>', re.S)
    newPage = re.search(pattern, pageCode.decode("UTF-8").encode(type))
    print pageCode.decode("UTF-8").encode(type)  # debug: dump the page source
    if newPage != None:
        return newPage.group(1)
    return 1500
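Because the markup inside the pattern above had to be assumed, it is worth sanity-checking it offline against a hand-written fragment before crawling:

import re

# hand-written sample; the span markup is an assumption, not copied from the live site
sample = '<span class="current-comment-page">[1764]</span>'
pattern = re.compile(r'<span class="current-comment-page">\[(.*?)\]</span>', re.S)
newPage = re.search(pattern, sample)
print newPage.group(1)  # prints: 1764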

Don't ask me why 1500 is returned on failure... Jandan has already swallowed all the pictures before page 1500. You could just as well return 0. Next comes extracting the image URLs:

def __getAllPicUrl(self, pageIndex):
    realurl = self.__Url + "page-" + str(pageIndex) + "#comments"
    pageCode = self.Get(realurl)
    type = sys.getfilesystemencoding()
    # NOTE: the HTML inside this pattern was also lost in scraping; a pattern
    # that captures the src attribute of each comment image is assumed here
    pattern = re.compile('<p>.*?<img src="(.*?)".*?</p>', re.S)
    items = re.findall(pattern, pageCode.decode("UTF-8").encode(type))
    for item in items:
        print item
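The same offline check works for the image pattern (the sample markup and URL below are made up for illustration):

import re

sample = '<p><img src="http://example.com/pics/0001.jpg" /></p>'  # hypothetical
pattern = re.compile('<p>.*?<img src="(.*?)".*?</p>', re.S)
print re.findall(pattern, sample)  # ['http://example.com/pics/0001.jpg']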

Now the image addresses are in hand. Next, request each address and save the image locally:

def __savePics(self, img_addr, folder):
    for item in img_addr:
        filename = item.split('/')[-1]
        print "Saving image:" + filename
        with open(filename, 'wb') as file:
            img = self.Get(item)
            file.write(img)
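Note that the folder argument above is never actually used; the files land in whatever the current directory is, which only works because start() (shown at the end) chdirs into the target folder first. A variant that writes into the folder explicitly might look like this:

# drop-in variant for the crawler class; assumes `import os` at the top of the file
def __savePics(self, img_addr, folder):
    for item in img_addr:
        # join the folder explicitly instead of relying on a prior os.chdir()
        filename = os.path.join(folder, item.split('/')[-1])
        print "Saving image:" + filename
        with open(filename, 'wb') as file:
            file.write(self.Get(item))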

Just when you start to feel confident, a bucket of cold water lands on your head. That's how programming goes: it tests your patience and polishes your confidence. After testing for a while, you restart the program and find you can no longer obtain the latest page number. Why? Don't worry, let's print out the page source and take a look:

As you can see, the server no longer thinks you are a browser: it has blocked your IP address. All that hard work, and overnight we are back to square one! How do we solve this? Answer: use a proxy. Next we modify HttpClient.py to set a proxy server on the opener. For concrete proxy servers, search Baidu for the keyword "http proxy". Finding a usable one is not easy, though; you can test candidates one by one in IE's Internet Options and check the speed.

# -*- coding: utf-8 -*-
import cookielib, urllib, urllib2, socket
import zlib, StringIO

class HttpClient:
    __cookie = cookielib.CookieJar()
    # set the proxy server address and port
    __proxy_handler = urllib2.ProxyHandler({"http": '42.121.6.80:8080'})
    # generate an opener that carries both the cookie jar and the proxy
    __req = urllib2.build_opener(urllib2.HTTPCookieProcessor(__cookie), __proxy_handler)
    __req.addheaders = [
        ('Accept', 'application/javascript, */*;q=0.8'),
        ('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)')
    ]
    urllib2.install_opener(__req)

    def Get(self, url, refer=None):
        try:
            req = urllib2.Request(url)
            # req.add_header('Accept-encoding', 'gzip')
            if not (refer is None):
                req.add_header('Referer', refer)
            response = urllib2.urlopen(req, timeout=120)
            html = response.read()
            # gzipped = response.headers.get('Content-Encoding')
            # if gzipped:
            #     html = zlib.decompress(html, 16 + zlib.MAX_WBITS)
            return html
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''
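Which address should actually go into ProxyHandler? Instead of testing candidates one by one in IE, a small helper can time them. This is only a sketch, and the proxy address passed in is just an example:

import time, urllib2

def test_proxy(proxy, url="http://jandan.net/ooxx/"):
    # throwaway opener that routes through the candidate proxy
    opener = urllib2.build_opener(urllib2.ProxyHandler({"http": proxy}))
    start = time.time()
    try:
        opener.open(url, timeout=10).read()
        return time.time() - start  # seconds taken to fetch the page
    except Exception:
        return None  # proxy is dead or too slow

print test_proxy('42.121.6.80:8080')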

Then you can browse the images happily again. However, proxies are slow... You may want to set the timeout a little longer so that image downloads don't fail halfway!
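The timeout here is just the keyword argument already passed in HttpClient.Get; raising it from 120 to, say, 300 seconds is a judgment call, not a magic number:

import urllib2

req = urllib2.Request("http://jandan.net/ooxx/")
response = urllib2.urlopen(req, timeout=300)  # 300s instead of 120s, for slow proxies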

Okay, the ROSI site will have to wait for the next article! Now it's time for the last wave of code, the full listings:

# -*- coding: utf-8 -*-
import cookielib, urllib, urllib2, socket
import zlib, StringIO

class HttpClient:
    __cookie = cookielib.CookieJar()
    __proxy_handler = urllib2.ProxyHandler({"http": '42.121.6.80:8080'})
    __req = urllib2.build_opener(urllib2.HTTPCookieProcessor(__cookie), __proxy_handler)
    __req.addheaders = [
        ('Accept', 'application/javascript, */*;q=0.8'),
        ('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)')
    ]
    urllib2.install_opener(__req)

    def Get(self, url, refer=None):
        try:
            req = urllib2.Request(url)
            req.add_header('Accept-encoding', 'gzip')
            if not (refer is None):
                req.add_header('Referer', refer)
            response = urllib2.urlopen(req, timeout=120)
            html = response.read()
            gzipped = response.headers.get('Content-Encoding')
            if gzipped:
                html = zlib.decompress(html, 16 + zlib.MAX_WBITS)
            return html
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''

    def Post(self, url, data, refer=None):
        try:
            #req = urllib2.Request(url, urllib.urlencode(data))
            req = urllib2.Request(url, data)
            if not (refer is None):
                req.add_header('Referer', refer)
            return urllib2.urlopen(req, timeout=120).read()
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''

    def Download(self, url, file):
        output = open(file, 'wb')
        output.write(urllib2.urlopen(url).read())
        output.close()

#    def urlencode(self, data):
#        return urllib.quote(data)

    def getCookie(self, key):
        for c in self.__cookie:
            if c.name == key:
                return c.value
        return ''

    def setCookie(self, key, val, domain):
        ck = cookielib.Cookie(version=0, name=key, value=val, port=None, port_specified=False, domain=domain, domain_specified=False, domain_initial_dot=False, path='/', path_specified=True, secure=False, expires=None, discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False)
        self.__cookie.set_cookie(ck)
        #self.__cookie.clear() clears the cookie jar

# vim: tabstop=2 shiftwidth=2 softtabstop=2 expandtab
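A quick usage sketch of the finished class (the image URL is hypothetical):

from HttpClient import HttpClient

hc = HttpClient()
html = hc.Get("http://jandan.net/ooxx/")  # fetched through the proxy opener
hc.Download("http://example.com/pics/0001.jpg", "0001.jpg")  # hypothetical URL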

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from HttpClient import HttpClient
import sys, re, os

class JianDan(HttpClient):
    def __init__(self):
        self.__pageIndex = 1500  # everything before this page has been swallowed by Jandan
        self.__Url = "http://jandan.net/ooxx/"
        self.__floder = "jiandan"

    def __getAllPicUrl(self, pageIndex):
        realurl = self.__Url + "page-" + str(pageIndex) + "#comments"
        pageCode = self.Get(realurl)
        type = sys.getfilesystemencoding()
        # NOTE: assumed pattern, as above; the original markup was lost in scraping
        pattern = re.compile('<p>.*?<img src="(.*?)".*?</p>', re.S)
        items = re.findall(pattern, pageCode.decode("UTF-8").encode(type))
        for item in items:
            print item
        self.__savePics(items, self.__floder)

    def __savePics(self, img_addr, folder):
        for item in img_addr:
            filename = item.split('/')[-1]
            print "Saving image:" + filename
            with open(filename, 'wb') as file:
                img = self.Get(item)
                file.write(img)

    def __getNewPage(self):
        pageCode = self.Get(self.__Url)
        type = sys.getfilesystemencoding()
        # NOTE: assumed pattern, as above
        pattern = re.compile(r'<span class="current-comment-page">\[(.*?)\]</span>', re.S)
        newPage = re.search(pattern, pageCode.decode("UTF-8").encode(type))
        print pageCode.decode("UTF-8").encode(type)  # debug: dump the page source
        if newPage != None:
            return newPage.group(1)
        return 1500

    def start(self):
        isExists = os.path.exists(self.__floder)  # check whether the download directory exists
        print isExists
        if not isExists:
            os.mkdir(self.__floder)
        os.chdir(self.__floder)
        page = int(self.__getNewPage())
        for i in range(self.__pageIndex, page):
            self.__getAllPicUrl(i)

if __name__ == '__main__':
    jd = JianDan()
    jd.start()
