Using Python to scrape the pretty-girl pictures from Jandan ("Fried Egg Net"): they're pretty racy, haha!

Source: Internet
Author: User
Tags: python, web crawler

Without further ado, here's the code:

import urllib.request
import re

# Get the page number shown on the current page (page_num)
def get_pagenum(url):
    req = urllib.request.Request(url)
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36')
    res = urllib.request.urlopen(req)
    html = res.read().decode('utf-8')
    p = r'<span class="current-comment-page">[^"]+</span>'
    temp = re.search(p, html)
    page_num = temp.group()[36:39]   # chars 36-38 are the three digits after '[', e.g. '141'
    return page_num

# Write the pictures on this page into our mm folder
def get_img(page_url):
    req = urllib.request.Request(page_url)
    req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36')
    res = urllib.request.urlopen(req)
    html = res.read().decode('utf-8')
    # The image-URL pattern was mangled in the original paste; this is a plausible
    # reconstruction that matches the .jpg <img> links (see the sinaimg.cn sample below).
    p = r'<img src="([^"]+\.jpg)"'
    url_list = re.findall(p, html)
    for each in url_list:
        file = open('c:/users/lenovo/desktop/mm/' + each[-8:] + '.jpg', 'wb')
        if each[0:5] == 'http:':
            res = urllib.request.urlopen(each)
        else:
            res = urllib.request.urlopen('http:' + each)   # protocol-relative links like //wx3.sinaimg.cn/...
        file.write(res.read())
        file.close()

# Only runs when the script is executed directly
if __name__ == '__main__':
    url = 'http://jandan.net/ooxx/'
    page_num = get_pagenum(url)
    for i in range(10):   # grab 10 pages of pictures
        page_url = url + 'page-' + str(page_num) + '#comments'
        get_img(page_url)
        page_num = int(page_num) - 1

# Snippets pasted here while writing the regexes, for easy reference:
# http://jandan.net/ooxx/page-143#comments
# <span class="current-comment-page">[141]</span>
# http://wx3.sinaimg.cn/mw600/661eb95cly1fgioxk7mk3j20xc1e01f1.jpg
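A quick aside on how get_pagenum pulls the number out: temp.group()[36:39] slices the three characters right after the '[' in markup like <span class="current-comment-page">[141]</span>, so it silently breaks once the page count isn't exactly three digits. A small variation of my own (not part of the original script) that captures whatever digits are there:

import re

# Sample markup, copied from the comments at the end of the script above.
html = '<span class="current-comment-page">[141]</span>'

# Capture the digits between the brackets instead of slicing fixed positions.
m = re.search(r'<span class="current-comment-page">\[(\d+)\]</span>', html)
if m:
    page_num = m.group(1)   # '141'; also works for 2- or 4-digit page numbers
    print(page_num)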

The results are as follows:

Because of limited time, I only grabbed 10 pages of pictures from Jandan; change the number of loop iterations and you can crawl a lot more (see the sketch below). Here I ended up with only about 250 pictures, 51.2 KB in total. Haha, more than enough pretty pictures to enjoy, just watch out for nosebleeds...
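To grab more pages, the only thing that has to change is the loop count. A minimal sketch, assuming the get_pagenum and get_img functions above are in the same file; the command-line handling and the example file name jandan_mm.py are my own additions, not part of the original:

import sys

if __name__ == '__main__':
    url = 'http://jandan.net/ooxx/'
    # Default to 10 pages, or pass a count on the command line, e.g.: python jandan_mm.py 30
    num_pages = int(sys.argv[1]) if len(sys.argv) > 1 else 10
    page_num = get_pagenum(url)
    for i in range(num_pages):
        page_url = url + 'page-' + str(page_num) + '#comments'
        get_img(page_url)
        page_num = int(page_num) - 1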

Of course, this program isn't perfect; it's just my first small project as a Python web-crawler beginner, and I'll improve it later (one idea is sketched below). For the next while I really need to focus on preparing for final exams and the postgraduate entrance exam. Time to buckle down!
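If I do polish it later, two easy hardening tweaks come to mind (my own sketch, not from the original or the referenced post): create the save folder automatically, and skip images that fail to download instead of crashing halfway through.

import os
import urllib.error
import urllib.request

SAVE_DIR = 'c:/users/lenovo/desktop/mm/'   # same folder the script writes to

def save_image(img_url):
    os.makedirs(SAVE_DIR, exist_ok=True)        # create the folder on first run
    if not img_url.startswith('http'):
        img_url = 'http:' + img_url             # handle protocol-relative links like //wx3.sinaimg.cn/...
    try:
        data = urllib.request.urlopen(img_url).read()
    except urllib.error.URLError:
        return                                  # skip broken links instead of aborting the whole run
    with open(os.path.join(SAVE_DIR, os.path.basename(img_url)), 'wb') as f:
        f.write(data)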

Reference Source: https://zhuanlan.zhihu.com/p/26442105

Note: Please do not repost this without my permission. Thank you.
