The previous article, "Python crawler: Crawling Sina News Data," explained in detail how to crawl the data on a Sina News detail page. However, the way that code was structured is not conducive to later expansion: every new detail page would require rewriting the code from scratch. So we need to organize it into functions that can be called directly.
From the detail page we capture six pieces of data: news title, comment count, time, source, body text, and editor.
First, let's turn the comment-count logic into a function:
```python
import requests
import json
import re

comments_url = 'http://comment5.news.sina.com.cn/page/info?version=1&format=js&channel=gn&newsid=comos-{}&group=&compress=0&ie=utf-8&oe=utf-8&page=1&page_size=20'

def getCommentsCount(newsurl):
    # Extract the news ID (the part between 'doc-i' and '.shtml')
    m = re.search('doc-i(.+).shtml', newsurl)
    newsid = m.group(1)
    # Request the comment API for this news ID
    commentsurl = requests.get(comments_url.format(newsid))
    # Strip the 'var data=' JS prefix and decode the remaining JSON
    commentstotal = json.loads(commentsurl.text.strip('var data='))
    return commentstotal['result']['count']['total']

news = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'
print(getCommentsCount(news))
```
comments_url is the comment link we found in the previous article. It contains a news ID, and the comment counts of different news articles differ only by that ID, so we turn the link into a format string, replacing the news ID with curly braces {};
We then define the function getCommentsCount to obtain the comment count: a regular expression finds the matching news ID, the resulting comment link's response is stored in the variable commentsurl, and decoding the returned JS gives the final comment count commentstotal;
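To make those two steps concrete, here is a standalone sketch that needs no network access. The URL is the sample link from this article; the `raw` payload is a made-up sample of the API's `var data={...}` shape, not a real response:

```python
import json
import re

# The sample detail-page URL from this article.
url = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'

# Step 1: the news ID is the part between 'doc-i' and '.shtml'.
news_id = re.search('doc-i(.+).shtml', url).group(1)
print(news_id)  # fyfeius7904403

# Step 2: the comment API returns JSONP-like text ('var data={...}');
# dropping the 'var data=' prefix by length leaves plain JSON.
# (This payload is a made-up sample for illustration.)
raw = 'var data={"result": {"count": {"total": 618}}}'
data = json.loads(raw[len('var data='):])
print(data['result']['count']['total'])  # 618
```

Note that slicing by the prefix length is used here instead of `.strip('var data=')`: `strip` removes a *set of characters*, not an exact prefix, so slicing is the safer idiom.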
After that, for any new news link, we can call getCommentsCount directly to get its comment count.
Finally, we wrap the six pieces of data we need to crawl into a function getNewsDetail, as follows:
```python
from bs4 import BeautifulSoup
import requests
from datetime import datetime
import json
import re

comments_url = 'http://comment5.news.sina.com.cn/page/info?version=1&format=js&channel=gn&newsid=comos-{}&group=&compress=0&ie=utf-8&oe=utf-8&page=1&page_size=20'

def getCommentsCount(newsurl):
    # Extract the news ID and query the comment API
    m = re.search('doc-i(.+).shtml', newsurl)
    newsid = m.group(1)
    commentsurl = requests.get(comments_url.format(newsid))
    commentstotal = json.loads(commentsurl.text.strip('var data='))
    return commentstotal['result']['count']['total']

# news = 'http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'
# print(getCommentsCount(news))

def getNewsDetail(news_url):
    result = {}
    web_data = requests.get(news_url)
    web_data.encoding = 'utf-8'
    soup = BeautifulSoup(web_data.text, 'lxml')
    # Title, comment count, time, source, body, editor
    result['title'] = soup.select('#artibodyTitle')[0].text
    result['comments'] = getCommentsCount(news_url)
    time = soup.select('.time-source')[0].contents[0].strip()
    result['dt'] = datetime.strptime(time, '%Y年%m月%d日%H:%M')
    result['source'] = soup.select('.time-source span span a')[0].text
    result['article'] = ' '.join([p.text.strip() for p in soup.select('#artibody p')[:-1]])
    result['editor'] = soup.select('.article-editor')[0].text.lstrip('责任编辑:')
    return result

print(getNewsDetail('http://news.sina.com.cn/c/nd/2017-05-14/doc-ifyfeius7904403.shtml'))
```
In the function getNewsDetail, the six pieces of data we need to crawl are collected into the dictionary result:
- result['title'] gets the news title;
- result['comments'] gets the comment count, by directly calling the getCommentsCount function we defined at the beginning;
- result['dt'] gets the time; result['source'] gets the source;
- result['article'] gets the body text;
- result['editor'] gets the editor.
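Two of these fields deserve a closer look, and can be tried without any network access. The sample strings below are sketches of what the page's `.time-source` and `.article-editor` text look like (the editor name is the one from this article's run result); they are illustrations, not live page content:

```python
from datetime import datetime

# Parsing the Chinese date string found in .time-source.
# (Sample string in the format the page uses.)
time_text = '2017年05月14日09:25'
dt = datetime.strptime(time_text, '%Y年%m月%d日%H:%M')
print(dt)  # 2017-05-14 09:25:00

# Caution with lstrip: it strips a *character set*, not an exact prefix,
# so it can over-strip if the editor's name happens to start with one of
# the label's characters. Removing the label with replace is safer:
editor_text = '责任编辑:张迪'
print(editor_text.replace('责任编辑:', '', 1))  # 张迪
```

The Chinese characters in the strptime format string are matched literally, just like the `:` between `%H` and `%M`.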
Then pass in the link of the news article whose data you want, and call the function.
Partial Run Result:
{'title': "Teaching Wing Chun at Zhejiang University, the coach is Ip Man's third-generation disciple", 'comments': 618, 'dt': datetime.datetime(2017, 5, 14, ...), 'source': 'China News Network', 'article': 'Original title: The "coach" teaching Wing Chun at Zhejiang University ... Source: Qianjiang Evening News', 'editor': 'Zhang Di'}
Environment: Mac, Python 3.6, PyCharm 2016.2
-----End-----
Du Wangdan, WeChat public account: Du Wangdan, Internet product manager.
Python crawler: Data fetching on Sina News Detail page (function version)