Python Case: Crawling College News Reports with Scrapy

Source: Internet
Author: User
Tags: xpath

Python case: a Scrapy task for crawling college news reports.

Goal: grab all the news reports from the School of Public Administration of Sichuan University (http://ggglxy.scu.edu.cn).

Experimental process

1. Determine the fetch target.
2. Develop crawl rules.
3. Write and debug the crawl rules.
4. Get the fetched data.

1. Determine the fetch target

The goal this time is to crawl all the news from the School of Public Administration of Sichuan University, so we first need to understand the layout of the school's official website.

We find that we cannot grab all the news directly from the homepage; we have to click "More" to enter the general news column.

Now we can see the news column itself, but this still does not satisfy our needs: from the column page we can only get each article's date, title, and URL, not its content. So we have to go into each news detail page to grab the full text.

2. Develop crawl rules

From the analysis in part 1, we know that to crawl the details of a piece of news we must click through from the news column into its detail page. Let's try it on a single article.

On the news detail page we can capture everything we need: title, date, content, and URL.

Good, now we know how to grab a single article. But how do we crawl all of the news?
Doing this article by article by hand is obviously not practical.


At the bottom of the news column there are page-navigation buttons, so we can reach all the news by following the "next page" button.

So, tidying up these ideas, an obvious crawl rule emerges:
Walk through all the news links listed under the news column, and follow each link into its detail page to grab the full news content. (A rough skeleton of this rule is sketched below; the working code is developed in section 3.)
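Expressed as a Scrapy spider skeleton (illustrative only; the real selectors and the paging logic are worked out step by step in section 3), the rule looks like this:

import scrapy


class NewsRuleSketch(scrapy.Spider):
    """Skeleton of the crawl rule above; the real selectors come in section 3."""
    name = "news_rule_sketch"
    start_urls = ["http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1"]

    def parse(self, response):
        # On a list page of the news column, collect every news link ...
        for news_url in []:  # placeholder; the XPath selector is developed in 3.1
            # ... and follow it into the news detail page.
            yield scrapy.Request(news_url, callback=self.parse_news)
        # Then follow the "next page" button (added later with a page counter).

    def parse_news(self, response):
        # On the detail page, grab the title, date, content and page URL.
        yield {}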

3. Write and debug the crawl rules

To keep the debugging granularity as small as possible, writing and debugging are done module by module.
The crawler implements the following functional points:

1. Crawl all the news links on one page of the news column.
2. Follow the crawled links into the news detail pages and grab the required data (mainly the news content).
3. Loop over the pages to crawl all the news.

The corresponding knowledge points are:

1. Crawling basic data from a single page.
2. Performing a second-level crawl on the data that was crawled (following links).
3. Crawling all data by looping over the pages.

Enough talk; let's get to work.

3.1 Crawl all the news links on one page of the news column

Analyzing the source code of the news column page, we find that each news entry sits in its own container (a div with class "newsinfo_box cf").

So we only need to point the crawler's selector at that container and iterate over the matches in a for loop.

Writing code
import scrapy


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())

Test: pass!
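A side note (not part of the original post): before running the whole spider, the XPath can be checked interactively in Scrapy's shell, which downloads the page and binds it to response:

# Started from the command line with:
#   scrapy shell "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1"
# Inside the shell, `response` already holds the downloaded list page.
boxes = response.xpath("//div[@class='newsinfo_box cf']")
print(len(boxes))  # number of news entries found on this list page
for box in boxes:
    print(response.urljoin(box.xpath("div[@class='news_c fr']/h3/a/@href").extract_first()))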

3.2 Follow the crawled news links into the detail pages and grab the required data (mainly the news content)

Now we have a set of URLs, and we need to enter each one to grab the title, date, and content. The code is also simple: whenever the original code catches a URL, it should enter that URL and fetch the corresponding data. So we only need to write a method that crawls the news detail page, and call it with scrapy.Request.

Writing code
# Method that crawls the news detail page
def parse_dir_contents(self, response):
    item = GgglxyItem()
    item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
    item['href'] = response.url
    item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
    data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
    item['content'] = data[0].xpath('string(.)').extract()[0]
    yield item
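The method above builds a GgglxyItem imported from ggglxy.items. The item class itself is not shown in the original post; a minimal definition in the project's items.py matching the fields used here would look roughly like this (an assumption, not the author's file):

# ggglxy/items.py (assumed; not shown in the original post)
import scrapy


class GgglxyItem(scrapy.Item):
    date = scrapy.Field()     # publication date of the news article
    href = scrapy.Field()     # URL of the news detail page
    title = scrapy.Field()    # article title
    content = scrapy.Field()  # full article text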

After integrating it into the original code, we have:

import scrapy
from ggglxy.items import GgglxyItem


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            # Call the news-detail crawling method
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    # Method that crawls the news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item

Test: pass!

Then we add a loop:

NEXT_PAGE_NUM = 1  # module-level page counter

# Appended at the end of parse():
NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
if NEXT_PAGE_NUM < 11:
    next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
    yield scrapy.Request(next_url, callback=self.parse)

Add it to the original code:

import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

        # Follow the "next page" button for the first ten list pages
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    # Method that crawls the news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item
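A side note before testing (not from the original post): the counter above assumes the column has exactly 10 list pages. An alternative would be to follow the column's actual "next page" link; the pager XPath in this sketch is only a guess and would have to be checked against the site's real markup:

import scrapy


class News2SpiderAlt(scrapy.Spider):
    """Sketch: same spider, but following the pager link instead of a counter."""
    name = "news_info_2_alt"
    start_urls = ["http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1"]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            url = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

        # Hypothetical pager selector: assumes the "next page" link's text is 下页.
        next_href = response.xpath("//a[contains(text(), '下页')]/@href").extract_first()
        if next_href:
            yield scrapy.Request(response.urljoin(next_href), callback=self.parse)

    def parse_dir_contents(self, response):
        ...  # identical to parse_dir_contents in the spider above

The original post sticks with the page counter, which is tested next.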

Test:



The number of items captured is 191, but browsing the site we count 193 news articles, so two are missing.
Why? Looking at the log we notice two errors.
Locating the problem: it turns out that the college's news column contains two hidden second-level (sub-)columns, shown in a screenshot in the original post.




Their URLs contain a 'type' parameter and point to column list pages rather than to news detail pages.


These URLs do not match the pattern of ordinary news links, so no wonder the spider could not catch them!
We therefore need a special rule for these two second-level columns: simply check whether a link points to a second-level column, and if it does, parse it as another list page:

if URL.find('type') != -1:
    yield scrapy.Request(URL, callback=self.parse)

Assembled into the original function:

import scrapy
from ggglxy.items import GgglxyItem

NEXT_PAGE_NUM = 1


class News2Spider(scrapy.Spider):
    name = "news_info_2"
    start_urls = [
        "http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=1",
    ]

    def parse(self, response):
        for href in response.xpath("//div[@class='newsinfo_box cf']"):
            URL = response.urljoin(href.xpath("div[@class='news_c fr']/h3/a/@href").extract_first())
            if URL.find('type') != -1:
                # A hidden second-level column: parse it as another list page
                yield scrapy.Request(URL, callback=self.parse)
            else:
                # An ordinary news link: go straight to the detail page
                yield scrapy.Request(URL, callback=self.parse_dir_contents)

        # Follow the "next page" button for the first ten list pages
        global NEXT_PAGE_NUM
        NEXT_PAGE_NUM = NEXT_PAGE_NUM + 1
        if NEXT_PAGE_NUM < 11:
            next_url = 'http://ggglxy.scu.edu.cn/index.php?c=special&sid=1&page=%s' % NEXT_PAGE_NUM
            yield scrapy.Request(next_url, callback=self.parse)

    # Method that crawls the news detail page
    def parse_dir_contents(self, response):
        item = GgglxyItem()
        item['date'] = response.xpath("//div[@class='detail_zy_title']/p/text()").extract_first()
        item['href'] = response.url
        item['title'] = response.xpath("//div[@class='detail_zy_title']/h1/text()").extract_first()
        data = response.xpath("//div[@class='detail_zy_c pb30 mb30']")
        item['content'] = data[0].xpath('string(.)').extract()[0]
        yield item

Test:

4. Get the fetched data

Run the spider from the project directory and export the captured items to a JSON file:

scrapy crawl news_info_2 -o 0016.json
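One practical note (not from the original post): the news content is Chinese, and Scrapy's JSON feed exporter escapes non-ASCII characters by default, which makes the exported file hard to read. Adding the following line to the project's settings.py keeps the Chinese text readable:

# ggglxy/settings.py (addition not in the original post)
# By default the JSON exporter escapes non-ASCII characters; export as UTF-8 instead.
FEED_EXPORT_ENCODING = 'utf-8'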



