Python Crawler Learning Note 2


Following on from the previous note, this time we download all of the blog's articles.

The idea is to collect the URL of every post into a dict, then parse each page by its URL and download the article portion of each post.
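
For orientation, the dict the script builds simply maps post titles to full post URLs. A minimal sketch of its shape (the titles and article ids here are made-up examples):

    # Shape of the title-to-URL dict the script builds (values are made up):
    query_result = {
        'Some post title': 'http://blog.csdn.net/zhaoyl03/article/details/12345678',
        'Another post': 'http://blog.csdn.net/zhaoyl03/article/details/87654321',
    }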

    # coding=utf-8
    # Python 2 script: urllib2 and sys.setdefaultencoding do not exist in Python 3.
    import sys
    import urllib2
    from bs4 import BeautifulSoup

    reload(sys)
    sys.setdefaultencoding('utf-8')  # old Python 2 trick to default str handling to UTF-8


    def query_item(content, tag=None, cla=None):
        """Parse the page content and return the set of matching tags.

        Defaults to all <div> tags; tag and cla narrow the search to a
        specific tag name and CSS class.
        """
        soup = BeautifulSoup(content, "html.parser")
        if cla is None:
            if tag is None:
                return soup.find_all('div')
            else:
                return soup.find_all(tag)
        else:
            if tag is None:
                return soup.find_all('div', class_=cla)
            else:
                return soup.find_all(tag, class_=cla)


    req_header = {
        'Host': "blog.csdn.net",
        'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36",
        'Accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        'Accept-Language': "zh-CN,zh;q=0.8",
        'Connection': "keep-alive",
        'Cache-Control': "max-age=0",
        'Referer': "http://blog.csdn.net",
    }

    blog_art = []
    i = 1
    # Walk the list pages until the last one: a full list page holds 15 posts,
    # so a page with fewer than 15 items must be the final page.
    while True:
        url = "http://blog.csdn.net/zhaoyl03/article/list/"
        req = urllib2.Request(url + str(i), None, req_header)
        result = urllib2.urlopen(req, None)
        artcle_num = query_item(result.read(), 'div', 'list_item article_item')
        if len(artcle_num) < 15:
            for x in artcle_num:
                blog_art.append(x)
            break
        else:
            i += 1
            for x in artcle_num:
                blog_art.append(x)

    # Now i is the number of list pages and blog_art holds every post entry.
    host_url = 'http://blog.csdn.net'
    query_result = {}
    for x in blog_art:
        for y in x.find('span', 'link_title').find_all('a'):
            # Map the title of every post to its full URL.
            query_result[str(y.get_text())] = str(host_url + y.get('href'))

    # query_result is a {title: URL} dict; below, each post's content is
    # fetched according to this dict and saved to a local file.
    a = 1
    for x, y in query_result.items():
        temp_req = urllib2.Request(y, None, req_header)
        temp_result = urllib2.urlopen(temp_req, None)
        for i in query_item(temp_result.read(), 'div', 'article_content'):
            # f = open('d:\\csdn\\%s.html' % str(x.strip()), 'w')
            # problem: the post title cannot be used directly as a file name
            f = open('d:\\csdn\\%s.html' % a, 'w')
            f.write(str(x))  # write the post title
            for j in i:
                f.writelines(str(j))  # write the article body
            f.close()
            a += 1
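The commented-out line above flags a real snag: post titles can contain characters that Windows forbids in file names, so the script falls back to numbering the files instead. A minimal sketch of one workaround, assuming the same Windows paths as above; strip_title is a hypothetical helper, not part of the original script:

    import re

    def strip_title(title):
        # Replace the characters Windows forbids in file names (\ / : * ? " < > |)
        # with underscores, and trim surrounding whitespace.
        return re.sub(r'[\\/:*?"<>|]', '_', title).strip()

    # The posts could then be saved under their titles instead of a counter:
    # f = open('d:\\csdn\\%s.html' % strip_title(x), 'w')

On Python 2, a non-ASCII title may additionally need an explicit encode before it is used as a file name.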

Here is the result of the crawl: each post ends up as a numbered HTML file under d:\csdn.
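
One last note: the script above is Python 2 only (urllib2, reload(sys)). On Python 3 the same page fetch would look roughly like this; a minimal sketch, with the headers trimmed to the essentials:

    import urllib.request
    from bs4 import BeautifulSoup

    # Fetch one list page and select the post entries, as the Python 2
    # script does above.
    headers = {'User-Agent': "Mozilla/5.0"}
    req = urllib.request.Request(
        "http://blog.csdn.net/zhaoyl03/article/list/1", headers=headers)
    with urllib.request.urlopen(req) as resp:
        soup = BeautifulSoup(resp.read(), "html.parser")
    items = soup.find_all('div', class_='list_item article_item')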
