Python: Using requests and bs4 to crawl pictures on mmjpg

Source: Internet
Author: User

This is my first crawler. I chose to crawl this site because its URLs are very regular, not because of the pictures, not because of the pictures at all, not ...

First, the entry URL for each photo set looks like this:
http://www.mmjpg.com/mm/1

The URL of each picture looks like this:
http://img.mmjpg.com/2015/1/1.jpg

The picture URL contains a year, and since I don't know which year each set belongs to, it is inconvenient to enumerate every picture URL directly.

So instead I grab the URL of the first picture from the set's page,
then increment the file name one at a time up to the last picture (the total picture count can also be scraped from the set's page).
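The enumeration step can be sketched as a small standalone helper (the function name is mine, not from the original code):

```python
def build_img_urls(first_img_url, total):
    """Enumerate all picture URLs in a set, given the first picture's URL
    and the total picture count, by swapping the file name (1.jpg, 2.jpg, ...)."""
    parts = first_img_url.split('/')
    urls = []
    for i in range(1, total + 1):
        parts[-1] = str(i) + '.jpg'
        urls.append('/'.join(parts))
    return urls

# build_img_urls('http://img.mmjpg.com/2015/1/1.jpg', 3)
# -> ['http://img.mmjpg.com/2015/1/1.jpg',
#     'http://img.mmjpg.com/2015/1/2.jpg',
#     'http://img.mmjpg.com/2015/1/3.jpg']
```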

    # get the total number of pictures in the set
    def get_img_sum_num(self, img_url):
        fa = fake_useragent.UserAgent()
        headers = {'User-Agent': fa.random,
                   'Referer': 'http://www.mmjpg.com'}
        request = requests.get(img_url, headers=headers)
        soup = bs4.BeautifulSoup(request.content, 'lxml')
        # get the text inside the ninth <a href="/mm..."> tag, which holds the count
        img_sum_number = soup.find_all('a', href=re.compile('/mm'))[8].get_text().strip()
        return int(img_sum_number)

    # get the URLs of all the pictures in the set
    def get_img_urls(self, url):
        fa = fake_useragent.UserAgent()
        headers = {'User-Agent': fa.random,
                   'Referer': 'http://m.mmjpg.com'}
        request = requests.get(url, headers=headers)
        soup = bs4.BeautifulSoup(request.content, 'lxml')
        first_img_url = soup.find('img').get('src')   # src attribute of the first picture
        url_split = first_img_url.split('/')
        img_urls = []
        for i in range(1, self.get_img_sum_num(url) + 1):
            url_split[-1] = str(i) + '.jpg'
            img_urls.append('/'.join(url_split))
        return img_urls

Download the pictures based on their URLs:

    def down_pictures(self, img_urls):
        img_name = str(img_urls[0].split('/')[-2]) + '-' + str(img_urls[0].split('/')[-3])
        if os.path.exists(img_name):    # skip if the folder already exists, to prevent duplicate downloads
            time.sleep(1)
            print(img_name + ' already exists')
            return
        os.mkdir(img_name)
        for img_url in img_urls:
            fa = fake_useragent.UserAgent()
            headers = {'User-Agent': fa.random,
                       'Referer': 'http://m.mmjpg.com'}
            request = requests.get(img_url, headers=headers)
            with open(img_name + '/' + img_url.split('/')[-1], 'wb') as f:
                f.write(request.content)    # .content is the response body as bytes
            print('saved ' + img_name + '/' + img_url.split('/')[-1])
            time.sleep(random.random() * 2)

Problems encountered while crawling:

1. Getting banned: change the User-Agent header to disguise the crawler as a browser; if you still get banned, use proxies. For this site, spoofing the header plus time.sleep() is enough. (I imported fake_useragent; you could build your own wheel, but I was lazy and used someone else's.)
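If you would rather build the wheel yourself, a minimal substitute for fake_useragent is just a list of real browser User-Agent strings plus random.choice (the strings below are illustrative examples, not from the original code):

```python
import random

# a handful of real browser User-Agent strings; extend as needed
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/602.4.8 '
    '(KHTML, like Gecko) Version/10.0.3 Safari/602.4.8',
    'Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
]

def random_headers(referer='http://www.mmjpg.com'):
    """Build request headers with a randomly chosen User-Agent."""
    return {'User-Agent': random.choice(USER_AGENTS),
            'Referer': referer}
```

You can then pass `random_headers()` straight to `requests.get(url, headers=...)`.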

2. Every downloaded picture was the same image: they were all the hotlink-protection placeholder. I searched for this for a long time; it turns out that adding a 'Referer' field to the headers fixes it.

The HTTP Referer is part of the request headers. When the browser sends a request to the web server, it usually carries a Referer telling the server which page the request came from, and the server can use that information for processing. -- Baidu Encyclopedia
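In other words, the image server's hotlink guard roughly does something like this on every request (a sketch of the general server-side idea, not mmjpg's actual code):

```python
def allow_image(headers, allowed_host='mmjpg.com'):
    """Serve the real image only if the Referer points back to the site itself;
    otherwise the server would return a placeholder image instead."""
    referer = headers.get('Referer', '')
    return allowed_host in referer
```

That is why sending `'Referer': 'http://m.mmjpg.com'` along with the request makes the real pictures come back.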

All code: https://github.com/YoungChild/mmjpg_python
