Using BeautifulSoup to crawl the content of Sina web news

This was my first taste of how powerful Python is: a few dozen lines of code crawl the page content and store it in a .txt file.
Here is the code:
# coding=utf-8
import requests
from bs4 import BeautifulSoup
import sys

reload(sys)
sys.setdefaultencoding("utf-8")

# Crawl the web page
url = "http://news.sina.com.cn/china/"
res = requests.get(url)
res.encoding = 'utf-8'

# Put the response into a soup and parse the page content
soup = BeautifulSoup(res.text, "html.parser")
elements = soup.select('.news-item')

# Grab what we need and put it in a file:
# for each item, the time, the headline text, and the link
fname = "F:/asdf666.txt"
try:
    f = open(fname, 'w')
    for element in elements:
        if len(element.select('h2')) > 0:
            f.write(element.select('.time')[0].text)
            f.write(element.select('h2')[0].text)
            f.write(element.select('a')[0]['href'])
            f.write('\n')
    f.close()
except Exception, e:
    print e
else:
    pass
finally:
    pass
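Note that the code above is Python 2 only: the reload(sys) plus sys.setdefaultencoding() trick and the "except Exception, e" syntax were both removed in Python 3. A minimal sketch of the same crawler for Python 3, assuming the page still serves the same .news-item markup, could look like this:

# coding: utf-8
import requests
from bs4 import BeautifulSoup

url = "http://news.sina.com.cn/china/"
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, "html.parser")

# Opening the file with an explicit encoding replaces the
# sys.setdefaultencoding() hack, which no longer exists in Python 3.
with open("F:/asdf666.txt", "w", encoding="utf-8") as f:
    for element in soup.select('.news-item'):
        if element.select('h2'):  # skip items that have no headline
            f.write(element.select('.time')[0].text)
            f.write(element.select('h2')[0].text)
            f.write(element.select('a')[0]['href'])
            f.write('\n')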
Because this is my first small crawler, its function is very simple and single-purpose: it grabs the headlines from the news section of the page,
then crawls each headline's timestamp and hyperlink,
and finally writes them, in the order the news items appear, into a text file for storage.
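To see how the CSS selectors pull each of those pieces out of one news item, here is a tiny self-contained example (Python 3); the HTML is made up to mirror the shape of a .news-item, not copied from the real page:

from bs4 import BeautifulSoup

# Illustrative markup only -- shaped like a .news-item, not the site's actual HTML.
html = '''
<div class="news-item">
  <span class="time">3月21日 10:05</span>
  <h2><a href="http://news.sina.com.cn/c/example.shtml">Example headline</a></h2>
</div>
'''

item = BeautifulSoup(html, "html.parser").select('.news-item')[0]
print(item.select('.time')[0].text)   # the timestamp
print(item.select('h2')[0].text)      # the headline text
print(item.select('a')[0]['href'])    # the hyperlink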