Preparation: install requests and BeautifulSoup4. Open cmd and enter the following commands:
pip install requests
pip install beautifulsoup4
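If you want to verify that both libraries installed correctly before going further, a quick check from Python works (this snippet is only an illustration and is not part of the crawler):

# sanity check: both libraries import and report a version
import requests
import bs4

print(requests.__version__)
print(bs4.__version__)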
Open the page we want to crawl. Taking Sina News as an example, the address is: http://news.sina.com.cn/china/
Press F12 to open the developer tools, click the element-picker icon in the upper left corner, and then click the element you want to inspect on the page:
Clicking on a news headline shows that its element has class=news-item:
Here we want to get each news item's time, title, and link, which can be found at the following locations:
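To see how these locations map to CSS selectors, here is a minimal sketch of the kind of markup each news item contains; the sample HTML below is a simplified assumption based on what the developer tools show, not a verbatim copy of the page:

# minimal sketch of the expected structure of one news item (markup is assumed/simplified)
from bs4 import BeautifulSoup

sample = '''
<div class="news-item">
  <h2><a href="http://news.sina.com.cn/c/example.shtml">Example headline</a></h2>
  <div class="time">3月31日 10:00</div>
</div>
'''
soup = BeautifulSoup(sample, 'html.parser')
item = soup.select('.news-item')[0]
print(item.select('.time')[0].text)      # publication time
print(item.select('h2')[0].text)         # headline text
print(item.select('h2 a')[0]['href'])    # link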
Now we can write the crawler code based on the structure of these elements:
import requests
from bs4 import BeautifulSoup

url = 'http://news.sina.com.cn/china/'
res = requests.get(url)
# use UTF-8 encoding
res.encoding = 'utf-8'
# use html.parser as the parser
soup = BeautifulSoup(res.text, 'html.parser')
# iterate over every class=news-item node
for news in soup.select('.news-item'):
    h2 = news.select('h2')
    # only keep results with length greater than 0
    if len(h2) > 0:
        # news time
        time = news.select('.time')[0].text
        # news title
        title = h2[0].text
        # news link
        href = h2[0].select('a')[0]['href']
        # print the result
        print(time, title, href)
Run the program, as shown in the following figure:
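If the request fails or the page structure changes, the script above will either raise an exception or print nothing. As a possible extension (the timeout value and the error handling here are my own additions, not part of the original code), the same logic can be hardened like this:

import requests
from bs4 import BeautifulSoup

url = 'http://news.sina.com.cn/china/'
try:
    # a timeout keeps the script from hanging on a slow connection
    res = requests.get(url, timeout=10)
    res.raise_for_status()
except requests.RequestException as e:
    print('request failed:', e)
else:
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    for news in soup.select('.news-item'):
        h2 = news.select('h2')
        time_tag = news.select('.time')
        # skip items missing a headline, a link, or a time
        if h2 and time_tag and h2[0].select('a'):
            print(time_tag[0].text, h2[0].text, h2[0].select('a')[0]['href'])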