Python crawler development, dynamic web crawling: crawling blog comment data

Source: Internet
Author: User
Tags: python, web crawler

As an example, we crawl the comments on a personal blog post by the author of the book "Python Web Crawler: From Getting Started to Practice". Website: http://www.santostang.com/2017/03/02/hello-world/

1) "Grab bag": Find the real data address

Right-click the page and choose "Inspect", open the "Network" tab, and filter by "JS". Refresh the page and look for the request whose name begins with list?callback.... Select this JS request and open "Headers" on the right.

The Request URL shown there is the real data address.

Scroll down in the Headers panel to find the User-Agent, which we will need for the request headers.
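
The body returned by this request is not plain JSON but JSONP: the JSON payload is wrapped in the jQuery callback named in the URL and terminated by ");". The minimal sketch below (with an assumed, simplified payload) shows why the code in the next step slices the text from the first "{" up to the last two characters:

# A sketch with an assumed payload, only to illustrate stripping the JSONP wrapper
raw = 'jQuery112405600294326674093_1523687034324({"results": {"parents": []}});'
json_text = raw[raw.find('{'):-2]   # drop the callback name and the trailing ");"
print(json_text)                    # -> {"results": {"parents": []}}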

2) Related code:

import requests
import json

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
link = "https://api-zero.livere.com/v1/comments/list?callback=jQuery112405600294326674093_1523687034324&limit=10&offset=2&repSeq=3871836&requestPath=%2Fv1%2Fcomments%2Flist&consumerSeq=1020&livereSeq=28583&smartloginseq=5154&_=1523687034329"
r = requests.get(link, headers=headers)
# Get the JSON string and strip the jQuery callback wrapper
json_string = r.text
json_string = json_string[json_string.find('{'):-2]
json_data = json.loads(json_string)
comment_list = json_data['results']['parents']
for eachone in comment_list:
    message = eachone['content']
    print(message)
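
As a side note, the long query string does not have to be written by hand. The sketch below issues the same request with the parameters from the Request URL above split into a dictionary that requests encodes itself; the callback and "_" timestamp values are simply the ones captured in this session:

import requests

# Sketch: same request, parameters taken from the captured Request URL
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
params = {
    'callback': 'jQuery112405600294326674093_1523687034324',
    'limit': 10,
    'offset': 2,
    'repSeq': 3871836,
    'requestPath': '/v1/comments/list',
    'consumerSeq': 1020,
    'livereSeq': 28583,
    'smartloginseq': 5154,
    '_': 1523687034329,
}
r = requests.get('https://api-zero.livere.com/v1/comments/list', params=params, headers=headers)
print(r.url)  # requests percent-encodes requestPath, so the URL matches the captured one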

It is observed that the offset parameter in the real data address is the page number.

To crawl comments for all pages:

import requests
import json

def single_page_comment(link):
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
    r = requests.get(link, headers=headers)
    # Get the JSON string and strip the jQuery callback wrapper
    json_string = r.text
    json_string = json_string[json_string.find('{'):-2]
    json_data = json.loads(json_string)
    comment_list = json_data['results']['parents']
    for eachone in comment_list:
        message = eachone['content']
        print(message)

for page in range(1, 4):
    link1 = "https://api-zero.livere.com/v1/comments/list?callback=jQuery112405600294326674093_1523687034324&limit=10&offset="
    link2 = "&repSeq=3871836&requestPath=%2Fv1%2Fcomments%2Flist&consumerSeq=1020&livereSeq=28583&smartloginseq=5154&_=1523687034329"
    page_str = str(page)
    link = link1 + page_str + link2
    print(link)
    single_page_comment(link)
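
If the comments should be kept rather than only printed, one possible variation (a sketch, not from the book) returns each page's messages and pauses briefly between requests so the comment API is not hit too quickly:

import time
import requests
import json

def get_page_comments(link):
    # Same logic as single_page_comment(), but the messages are returned instead of printed
    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
    r = requests.get(link, headers=headers)
    json_string = r.text
    json_string = json_string[json_string.find('{'):-2]
    json_data = json.loads(json_string)
    return [each['content'] for each in json_data['results']['parents']]

link1 = "https://api-zero.livere.com/v1/comments/list?callback=jQuery112405600294326674093_1523687034324&limit=10&offset="
link2 = "&repSeq=3871836&requestPath=%2Fv1%2Fcomments%2Flist&consumerSeq=1020&livereSeq=28583&smartloginseq=5154&_=1523687034329"

all_comments = []
for page in range(1, 4):
    link = link1 + str(page) + link2
    all_comments.extend(get_page_comments(link))
    time.sleep(1)  # short pause between pages

print(len(all_comments), 'comments collected')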

Bibliography: Tang Song, Python Web Crawler: From Getting Started to Practice
