Self-taught Python 9: Crawler Practice 2 (pretty pictures)

As a thoughtful, cultured, and morally upright young man of the new century, going online for a bit of fun now and then is perfectly fine, and looking at pretty pictures is essential. The trouble is that pretty pictures are a pain to page through by hand! So today we launch a crawler and take all the pretty pictures down at once. There are two examples: the "sister" pictures on Jandan (the "egg" site), and the ROSI pictures from another site. I'm just a newbie studying Python; there is no shame in using a little technology, and no guilt either!

Jandan (jandan.net):

Let's go over the procedure: get the URL of each page of Jandan's sister pictures, fetch the page HTML, extract the image addresses from it, request each image address, and save the images locally. Ready? First, let's look at the Jandan page and find the newest page number:

def __getNewPage(self):
    pageCode = self.Get(self.__url)
    type = sys.getfilesystemencoding()
    pattern = re.compile(r'<div .*?cp-pagenavi">.*?<span .*?current-comment-page">\[(.*?)\]</span>', re.S)
    newPage = re.search(pattern, pageCode.decode("UTF-8").encode(type))
    print pageCode.decode("UTF-8").encode(type)
    if newPage != None:
        return newPage.group(1)
    return 1500
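To see what that page-number regex is doing, here is a tiny self-contained check against a made-up fragment of the paginator markup (the fragment is my own illustration, not copied from the site):

# Illustrative only: run the same "current-comment-page" regex against a made-up snippet.
import re

sample = '<div class="cp-pagenavi"> <span class="current-comment-page">[2500]</span> </div>'
pattern = re.compile(r'<div .*?cp-pagenavi">.*?<span .*?current-comment-page">\[(.*?)\]</span>', re.S)
match = re.search(pattern, sample)
print(match.group(1))   # prints 2500, the newest page number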

Don't ask why it returns 1500 on failure... Jandan has already removed all the pictures before page 1500. You could just as well return 0. Next, extract the image addresses:

def __getAllPicUrl(self, pageIndex):
    realurl = self.__url + "page-" + str(pageIndex) + "#comments"
    pageCode = self.Get(realurl)
    type = sys.getfilesystemencoding()
    pattern = re.compile('<p>.*?<a href="(.*?)".*?view_img_link">.*?</a>.*?', re.S)
    items = re.findall(pattern, pageCode.decode("UTF-8").encode(type))
    for item in items:
        print item

Now we have the image addresses. Next, request each address and save the image:

def __savePics(self, img_addr, folder):
    for item in img_addr:
        filename = item.split('/')[-1]
        print "Saving image:" + filename
        with open(filename, 'wb') as file:
            img = self.Get(item)
            file.write(img)
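If you are on Python 3 rather than Python 2, the same three steps (fetch a page, pull out the image links, save them) can be sketched with the standard library alone. This is just an illustrative rewrite, not part of the original script, and it assumes the same "view_img_link" markup as the regex above:

# Python 3 sketch of the fetch -> extract -> save loop (illustrative, not the original code).
import os
import re
import urllib.request

def save_page_pics(page_url, folder="jiandan"):
    html = urllib.request.urlopen(page_url, timeout=120).read().decode("utf-8")
    # Capture the href of every "view original image" link on the page.
    img_urls = re.findall(r'<a href="(.*?)"[^>]*?view_img_link"', html)
    os.makedirs(folder, exist_ok=True)
    for url in img_urls:
        if url.startswith("//"):            # protocol-relative links need a scheme added
            url = "http:" + url
        filename = url.split('/')[-1]
        print("Saving image: " + filename)
        with open(os.path.join(folder, filename), 'wb') as f:
            f.write(urllib.request.urlopen(url, timeout=120).read())

A call such as save_page_pics('http://jandan.net/ooxx/page-1500#comments') would then drop that page's pictures into a jiandan directory.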

Just when you start to feel confident, a bucket of cold water lands on your head. That's how programming goes; it tests your patience and grinds at your confidence. After testing for a while, you'll find that once you restart the program you can no longer get the latest page number. Why? Don't panic: print out the page HTML that comes back and see what the site is actually returning. The workaround here is to send the requests through a proxy:

# -*- coding: utf-8 -*-
import cookielib, urllib, urllib2, socket
import zlib, StringIO

class HttpClient:
    __cookie = cookielib.CookieJar()
    __proxy_handler = urllib2.ProxyHandler({"http": '42.121.6.80:8080'})  # set the proxy server and port
    __req = urllib2.build_opener(urllib2.HTTPCookieProcessor(__cookie), __proxy_handler)  # build the opener
    __req.addheaders = [
        ('Accept', 'application/javascript, */*;q=0.8'),
        ('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)')
    ]
    urllib2.install_opener(__req)

    def Get(self, url, refer=None):
        try:
            req = urllib2.Request(url)
            # req.add_header('Accept-encoding', 'gzip')
            if not (refer is None):
                req.add_header('Referer', refer)
            response = urllib2.urlopen(req, timeout=120)
            html = response.read()
            # gzipped = response.headers.get('Content-Encoding')
            # if gzipped:
            #     html = zlib.decompress(html, 16 + zlib.MAX_WBITS)
            return html
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''
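For reference (not part of the original listing), the same proxied, cookie-aware opener can be put together on Python 3 with urllib.request and http.cookiejar; the proxy address is only a placeholder carried over from the snippet above:

# Python 3 sketch: route requests through an HTTP proxy while keeping cookies.
import http.cookiejar
import urllib.request

cookie_jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(cookie_jar),
    urllib.request.ProxyHandler({"http": "42.121.6.80:8080"}),   # placeholder; use a working proxy
)
opener.addheaders = [
    ('Accept', 'application/javascript, */*;q=0.8'),
    ('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)'),
]
urllib.request.install_opener(opener)

html = urllib.request.urlopen("http://jandan.net/ooxx/", timeout=120).read()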

After that, you can browse the pictures happily. The proxy is slow, though, so give the timeout a generous value to avoid failed image downloads. Here is the complete code:

HttpClient.py:

# -*- coding: utf-8 -*-
import cookielib, urllib, urllib2, socket
import zlib, StringIO

class HttpClient:
    __cookie = cookielib.CookieJar()
    __proxy_handler = urllib2.ProxyHandler({"http": '42.121.6.80:8080'})
    __req = urllib2.build_opener(urllib2.HTTPCookieProcessor(__cookie), __proxy_handler)
    __req.addheaders = [
        ('Accept', 'application/javascript, */*;q=0.8'),
        ('User-Agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)')
    ]
    urllib2.install_opener(__req)

    def Get(self, url, refer=None):
        try:
            req = urllib2.Request(url)
            req.add_header('Accept-encoding', 'gzip')
            if not (refer is None):
                req.add_header('Referer', refer)
            response = urllib2.urlopen(req, timeout=120)
            html = response.read()
            gzipped = response.headers.get('Content-Encoding')
            if gzipped:
                html = zlib.decompress(html, 16 + zlib.MAX_WBITS)
            return html
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''

    def Post(self, url, data, refer=None):
        try:
            # req = urllib2.Request(url, urllib.urlencode(data))
            req = urllib2.Request(url, data)
            if not (refer is None):
                req.add_header('Referer', refer)
            return urllib2.urlopen(req, timeout=120).read()
        except urllib2.HTTPError, e:
            return e.read()
        except socket.timeout, e:
            return ''
        except socket.error, e:
            return ''

    def Download(self, url, file):
        output = open(file, 'wb')
        output.write(urllib2.urlopen(url).read())
        output.close()

    # def urlencode(self, data):
    #     return urllib.quote(data)

    def getCookie(self, key):
        for c in self.__cookie:
            if c.name == key:
                return c.value
        return ''

    def setCookie(self, key, val, domain):
        ck = cookielib.Cookie(version=0, name=key, value=val, port=None, port_specified=False,
                              domain=domain, domain_specified=False, domain_initial_dot=False,
                              path='/', path_specified=True, secure=False, expires=None,
                              discard=True, comment=None, comment_url=None,
                              rest={'HttpOnly': None}, rfc2109=False)
        self.__cookie.set_cookie(ck)
        # self.__cookie.clear()  # clear cookies

# vim: tabstop=2 shiftwidth=2 softtabstop=2 expandtab

The crawler itself (JianDan):

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from HttpClient import HttpClient
import sys, re, os

class JianDan(HttpClient):
    def __init__(self):
        self.__pageIndex = 1500   # pages before this one have been removed by the site
        self.__url = "http://jandan.net/ooxx/"
        self.__floder = "jiandan"

    def __getAllPicUrl(self, pageIndex):
        realurl = self.__url + "page-" + str(pageIndex) + "#comments"
        pageCode = self.Get(realurl)
        type = sys.getfilesystemencoding()
        pattern = re.compile('<p>.*?<a href="(.*?)".*?view_img_link">.*?</a>.*?', re.S)
        items = re.findall(pattern, pageCode.decode("UTF-8").encode(type))
        for item in items:
            print item
        self.__savePics(items, self.__floder)

    def __savePics(self, img_addr, folder):
        for item in img_addr:
            filename = item.split('/')[-1]
            print "Saving image:" + filename
            with open(filename, 'wb') as file:
                img = self.Get(item)
                file.write(img)

    def __getNewPage(self):
        pageCode = self.Get(self.__url)
        type = sys.getfilesystemencoding()
        pattern = re.compile(r'<div .*?cp-pagenavi">.*?<span .*?current-comment-page">\[(.*?)\]</span>', re.S)
        newPage = re.search(pattern, pageCode.decode("UTF-8").encode(type))
        print pageCode.decode("UTF-8").encode(type)
        if newPage != None:
            return newPage.group(1)
        return 1500

    def start(self):
        isExists = os.path.exists(self.__floder)   # check whether the download directory exists
        print isExists
        if not isExists:
            os.mkdir(self.__floder)
        os.chdir(self.__floder)
        page = int(self.__getNewPage())
        for i in range(self.__pageIndex, page):
            self.__getAllPicUrl(i)

if __name__ == '__main__':
    jd = JianDan()
    jd.start()
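Assuming the two listings are saved side by side as HttpClient.py and, say, jiandan.py (the second filename is my own choice), running the crawler under Python 2 is just:

# Hypothetical usage (requires Python 2, since the code relies on urllib2/cookielib):
from jiandan import JianDan

jd = JianDan()
jd.start()   # creates ./jiandan, reads the newest page number, then downloads pages 1500 and up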
