Python requests: a simple implementation of data scraping

Tags: xpath

Required packages: requests, lxml
The requests package is used to fetch the data; lxml is used to parse it.
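Both can be installed with pip:

pip install requests lxml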
Unlike a database query, HTML is not structured in a what-you-see-is-what-you-get way, so the page content has to be analyzed before the data of interest can be extracted; lxml does this work.
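As a minimal sketch (the HTML fragment below is invented for illustration), lxml parses the markup into a tree, and XPath expressions pull out text and attribute values:

from lxml import etree

# An invented fragment standing in for a downloaded page
html = '<ul><li><a href="/news/1.shtml">First headline</a></li></ul>'
selector = etree.HTML(html)
titles = selector.xpath('//li/a/text()')  # text of every link
links = selector.xpath('//li/a/@href')    # href attribute of every link
print(titles)  # ['First headline']
print(links)   # ['/news/1.shtml']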
The most frequently used method in requests is get(); usually you can simply pass the URL in as a parameter. For more sophisticated sites with anti-scraping measures, you also need to set the headers parameter, which takes a dictionary.
You can look up the contents of the User-Agent field in your browser; once it is set, requests submits this header information with every fetch, simulating browser access.
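As a minimal sketch (the User-Agent string is the same one used in the full script below):

import requests

# A browser-style User-Agent so the site treats the request as a normal visit
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36'}
response = requests.get('http://www.chinanews.com', headers=headers)
print(response.status_code)  # 200 on success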
When fetching data, also pay attention to the character encoding the website uses; when it differs from what requests assumes, convert the bytes with the correct encoding.
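For example (a sketch: chinanews.com served GBK-encoded pages at the time of writing), you can decode the raw bytes yourself, or let requests sniff the encoding from the page body:

import requests

response = requests.get('http://www.chinanews.com')
# Decode the raw bytes explicitly when you know the site's encoding...
html = response.content.decode('gbk', errors='replace')
# ...or let requests guess the encoding from the page content instead
response.encoding = response.apparent_encoding
html = response.text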
See the comments in the following code for details.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import requests
from lxml import etree


url = 'http://www.chinanews.com/scroll-news/mil/2017/0110/news.shtml'

# Browser header information to simulate browser access
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36'}


def getnewurllist():
    # Fetch the scrolling-news list page
    response = requests.get(url, headers=header)
    # The site is GBK-encoded, so decode the raw bytes explicitly
    html = response.content.decode('gbk')
    selector = etree.HTML(html)
    # XPath: // searches from the document root, @ selects an HTML attribute
    contents = selector.xpath('//div[@id="content_right"]/div[@class="content_list"]/ul/li[div]')
    for eachlink in contents:
        link = eachlink.xpath('div/a/@href')[0]
        # Relative links need the site root prepended
        if 'http' not in link:
            link = 'http://www.chinanews.com' + link
        title = eachlink.xpath('div/a/text()')[0]
        ptime = eachlink.xpath('div[@class="dd_time"]/text()')[0]
        yield (title, link, ptime)


def getnewcontent(urllist):
    for title, link, ptime in urllist:
        response = requests.get(link, headers=header)
        html = response.content.decode('gbk')
        selector = etree.HTML(html)
        title = selector.xpath('//div[@id="cont_1_1_2"]/h1/text()')[0]
        source = selector.xpath('//div[@id="cont_1_1_2"]/div[@class="left-time"]/div[@class="left-t"]/text()')[0]
        paragraphs = selector.xpath('//div[@id="cont_1_1_2"]/div[@class="left_zw"]/p/text()')
        # Join the body paragraphs into a single string
        resultcontent = ''.join(paragraphs)
        yield (title, source, resultcontent)


if __name__ == '__main__':
    urllist = getnewurllist()
    result = getnewcontent(urllist)
    for title, source, content in result:
        print u'title: %s' % title
        print u'source: %s' % source
        print u'body: %s' % content
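Note that both getnewurllist() and getnewcontent() are generators: nothing is downloaded until the for loop in __main__ pulls items from them, so the list page and the individual article pages are fetched lazily, one at a time.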


