Data Capture Example: Sina News Detail Page

The previous article, "Python crawler: Crawling Sina News Data", explained in detail how to crawl the data on a Sina News detail page, but the way the code was written does not lend itself to extension: every new detail page requires rewriting the same steps. So we organize the code into functions that can be called directly.

We capture six pieces of data from the detail page: the news title, number of comments, publication time, source, body text, and editor.

First, let's wrap the comment-count logic in a function:

    import requests
    import json
    import re

    # NOTE: the host/path part of this URL was lost when the article was
    # extracted; only the query string survived. Restore the full comment-API
    # address from the previous article before running. The news ID goes into {}.
    comments_url = '{}&group=&compress=0&ie=utf-8&oe=utf-8&page=1&page_size=20'

    def getCommentsCount(newsurl):
        m = re.search(r'doc-i(.+)\.shtml', newsurl)   # match the news ID in the link
        newsid = m.group(1)
        commentsURL = requests.get(comments_url.format(newsid))
        # The response is JS of the form "var data={...}"; strip() removes those
        # surrounding characters so the remainder parses as JSON
        commentstotal = json.loads(commentsURL.text.strip('var data='))
        return commentstotal['result']['count']['total']

    news = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'
    print(getCommentsCount(news))

For the variable comments_url, we know from the previous article that the comment link contains a news ID, and the comment counts of different news items vary only with that ID. So we turn the link into a format string, replacing the news ID with the curly-brace placeholder {}.
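As a minimal illustration of how the placeholder works (the template here is a shortened, hypothetical one, not the real comment API):

    template = 'newsid={}&page=1&page_size=20'   # hypothetical shortened template
    print(template.format('fyfeius7904403'))     # newsid=fyfeius7904403&page=1&page_size=20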

We define the function getCommentsCount to obtain the comment count: a regular expression finds the news ID in the link, the formatted comment URL is requested and the response stored in the variable commentsURL, and the JS wrapper is stripped from the response so it can be decoded as JSON, giving the final comment count commentstotal.
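To see the two tricky steps in isolation, here is a standalone sketch using the sample link from the code above; the JSON payload is an illustrative stand-in, pared down to just the field we read:

    import re
    import json

    news = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'
    m = re.search(r'doc-i(.+)\.shtml', news)
    print(m.group(1))   # fyfeius7904403

    # The comment API answers with JS like "var data={...}"; stripping those
    # characters leaves plain JSON (the payload below is a pared-down stand-in)
    payload = 'var data={"result": {"count": {"total": 618}}}'
    print(json.loads(payload.strip('var data='))['result']['count']['total'])   # 618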

Then, for any new news link, we can call getCommentsCount directly to get its number of comments.

Finally, we organize the six pieces of data we need to crawl into one function, getNewsDetail, as follows:

    from bs4 import BeautifulSoup
    import requests
    from datetime import datetime
    import json
    import re

    # NOTE: as above, the host/path part of this URL was lost in extraction;
    # restore the full comment-API address from the previous article
    comments_url = '{}&group=&compress=0&ie=utf-8&oe=utf-8&page=1&page_size=20'

    def getCommentsCount(newsurl):
        m = re.search(r'doc-i(.+)\.shtml', newsurl)
        newsid = m.group(1)
        commentsURL = requests.get(comments_url.format(newsid))
        commentstotal = json.loads(commentsURL.text.strip('var data='))
        return commentstotal['result']['count']['total']

    # news = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'
    # print(getCommentsCount(news))

    def getNewsDetail(news_url):
        result = {}
        web_data = requests.get(news_url)
        web_data.encoding = 'utf-8'
        soup = BeautifulSoup(web_data.text, 'lxml')
        result['title'] = soup.select('#artibodyTitle')[0].text
        result['comments'] = getCommentsCount(news_url)
        time = soup.select('.time-source')[0].contents[0].strip()
        result['dt'] = datetime.strptime(time, '%Y年%m月%d日%H:%M')
        result['source'] = soup.select('.time-source span span a')[0].text
        # join all body paragraphs except the last one (the editor line)
        result['article'] = ' '.join([p.text.strip() for p in soup.select('#artibody p')[:-1]])
        # the editor line begins with the Chinese prefix 责任编辑：("Editor-in-charge:")
        result['editor'] = soup.select('.article-editor')[0].text.lstrip('责任编辑：')
        return result

    print(getNewsDetail('http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'))

In the function getNewsDetail, each of the six pieces of data is fetched and stored in result:

    • result['title'] gets the news title;

    • result['comments'] gets the number of comments, by calling the getCommentsCount function we defined at the beginning;

    • result['dt'] gets the publication time (see the strptime sketch after this list); result['source'] gets the source;

    • result['article'] gets the body text;

    • result['editor'] gets the editor.
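The dt field is the fiddly one: the page shows the time as Chinese text such as 2017年05月14日07:00, so strptime needs a format string containing those literal characters. A minimal sketch (the timestamp value is illustrative):

    from datetime import datetime

    time_text = '2017年05月14日07:00'   # illustrative value in the page's format
    print(datetime.strptime(time_text, '%Y年%m月%d日%H:%M'))   # 2017-05-14 07:00:00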

Then pass in the link of the news you want data for, and call the function.
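Because everything is now wrapped in functions, crawling several detail pages is just a loop. A sketch, assuming the definitions above (the URL list is hypothetical; any links of the same doc-i...shtml form should work):

    # Hypothetical list of detail-page links
    urls = [
        'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml',
        # ... more detail-page links ...
    ]
    for u in urls:
        detail = getNewsDetail(u)
        print(detail['title'], detail['comments'])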

Partial Run Result:

{'title': 'Teaching Wing Chun at Zhejiang University: the "coach" is a third-generation disciple of Ip Man', 'comments': 618, 'dt': datetime.datetime(2017, 5, 14, 7, 0), 'source': 'China News Network', 'article': 'Original title: The Wing Chun "coach" teaching at Zhejiang University, a disciple of Ip Man... Source: Qianjiang Evening News', 'editor': 'Zhang Di'}
