Python Crawler: Crawling Beautiful Pictures

Source: Internet
Author: User
When you find a site full of beautiful pictures that you want to browse again and again, reloading the pages every time is a waste. Instead, you can save the pictures locally and look at them whenever you like. Enough talk; here is the code:
1, import the library file:
# -*- coding: utf-8 -*-

# download web content via the requests module
import requests
# import the regular-expression module and the time module
import re
import time
2, build the list of page URLs that hold the pictures:
def getpageurl():
    page_list = []
    # loop over the list pages
    for page in range(1600, 1999):
        url = "http://jandan.net/ooxx/page-" + str(page) + "#comments"
        # append the generated url to page_list
        page_list.append(url)
    return page_list
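As a quick sanity check, the loop above generates 399 page URLs. This self-contained sketch restates the same expression so it runs on its own:

```python
# Self-contained check of the URLs the loop above generates.
pages = ["http://jandan.net/ooxx/page-" + str(page) + "#comments"
         for page in range(1600, 1999)]
print(len(pages))
print(pages[0])
print(pages[-1])
```

The first entry is `http://jandan.net/ooxx/page-1600#comments` and the last is page 1998, since `range` excludes its upper bound.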

3, use a regular expression to match the image download addresses out of the page content:

def geturllist(url):
    url_list = []
    print(url)

    # head is a dictionary we build ourselves; it holds the User-Agent header
    head = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.111 Safari/537.36'}
    # html = requests.get('http://jp.tingroom.com/yuedu/yd300p/')
    html = requests.get(url, headers=head)
    text = html.text
    # regular match for the pictures; the original pattern was lost in
    # formatting, so this capture group is a plausible reconstruction
    pic_urls = re.findall(r'<img src="(.+?)\.jpg', text)
    for i in pic_urls:
        image = i + '.jpg'
        print('image url = ' + image)
        url_list.append(image)
    return url_list
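The exact regular expression in the article was mangled in formatting, so here is a plausible reconstruction of the idea, demonstrated on a hypothetical HTML snippet (the real markup on jandan.net may differ): the capture group grabs everything up to the `.jpg` extension, and the code then appends `.jpg` back on.

```python
import re

# Hypothetical HTML snippet shaped the way the pattern assumes;
# the real markup on the site may differ.
sample = ('<p><img src="http://ww1.example.com/large/abc123.jpg" /></a>'
          '<img src="http://ww2.example.com/large/def456.jpg" /></a></p>')

# Capture the URL up to (but not including) the .jpg extension
pic_urls = re.findall(r'<img src="(.+?)\.jpg', sample)
# Re-append the extension, as the function above does
images = [i + '.jpg' for i in pic_urls]
print(images)
```

If the site's markup changes, only this one pattern needs updating.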

You may have noticed this line:
html = requests.get(url, headers=head). Not only is the URL passed in here, but also a set of request headers. This matters because many sites restrict access if you do not set a request header: they can detect whether the request really comes from a browser. Knowing this, the fix is simple. We just need to disguise the script as a browser. So how do you get the contents of this header?
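The article uses requests for this; purely to illustrate the same idea with only the standard library (so you can see the header being attached without actually sending a request), a minimal sketch:

```python
from urllib.request import Request

# Build (but do not send) a request carrying the browser User-Agent string
# copied from the developer tools.
head = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/47.0.2526.111 Safari/537.36'}
req = Request('http://jandan.net/ooxx/page-1600#comments', headers=head)
# urllib normalizes header names to 'User-agent' capitalization
print(req.get_header('User-agent'))
```

With requests, the equivalent is simply `requests.get(url, headers=head)`, as in the code above.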
If you use the Chrome browser, open developer tools, select the Network tab, refresh the page, and you will see something like this:


With the browser's parameters in hand, you can pass your script off as a browser.
4, the two methods above are enough to crawl the pictures. Here is how to use them together:

if __name__ == '__main__':

    pageurl = getpageurl()[:-1]
    # download the pictures
    for url in pageurl:
        url_list = geturllist(url)

        i = 0
        for each in url_list:
            print('now downloading: ' + each)
            pic = requests.get(each)
            name = str(time.time())[:-3] + "_" + re.sub('.+?/', '', each)
            # the pic/ directory must already exist
            fp = open('pic/' + name, 'wb')
            fp.write(pic.content)
            fp.close()
            i += 1
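The file-name line deserves a closer look: `re.sub('.+?/', '', each)` repeatedly strips every `something/` prefix, leaving only the final path component, and `str(time.time())[:-3]` trims the last three characters of the timestamp to keep names short while still (mostly) unique. A self-contained sketch with a hypothetical image URL:

```python
import re
import time

# Hypothetical image URL, just to show how the file name is built.
each = 'http://ww1.example.com/large/abc123.jpg'

# Strip everything up to and including the last '/'
basename = re.sub('.+?/', '', each)
# Prefix with a shortened timestamp so repeated file names do not collide
name = str(time.time())[:-3] + "_" + basename
print(basename)
print(name)
```

This prints `abc123.jpg` followed by a name like `1714764800.12_abc123.jpg` (the timestamp varies per run).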

That's it. Run it and give it a try; if any of the libraries is missing, install it first (library installation is not covered here).

Debugging environment: macOS, PyCharm
Comments and exchanges of experience are welcome.
