Reply content:
context_re = r'(.*?)'    # (the rest of the pattern was truncated in the post)
The regular expression you posted came through truncated, so it's broken as shown. With nothing but a non-greedy (.*?) group, it can only ever crawl the first paragraph.
There are three things you need to do to crawl the Sina Military Forum:
First, study teacher Wanghai's column on CSDN (http://blog.csdn.net/column/details/why-bug.html) to learn the basics.
Second, press F12 and inspect the page's front-end structure in the browser's developer tools.
Third, write the crawler:

from bs4 import BeautifulSoup
import requests

response = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html?retcode=0")  # hard-coded URL
response.encoding = 'GB18030'  # Chinese encoding
soup = BeautifulSoup(response.text, 'html.parser')  # build the BeautifulSoup object
divs = soup('div', 'mainbox')  # one div per floor (post)
for div in divs:
    comments = div.find_all('div', 'cont f14')  # the body text of each floor
    with open('sina_military_club.txt', 'a') as f:
        f.write('\n' + str(comments) + '\n')
Just a few hours ago I was writing a small program to crawl a site's membership (company) information.
I won't answer the specific programming problem, since it's independent of which language you write the code in. The key is to analyze the HTML structure of the page and write an appropriate regular expression to match it. If you want to simplify things, you can match in stages (for example, first capture the original post's address from the inner content, then process that further).
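A minimal sketch of that staged-matching idea, using a made-up HTML snippet and made-up patterns (not the real Sina markup): stage one captures just the post's address, stage two processes the captured piece further.

```python
import re

# Hypothetical forum HTML -- not the actual Sina page structure.
html = '<a class="post" href="http://example.com/thread-1.html">First post</a>'

# Stage 1: capture only the post address.
link = re.search(r'href="([^"]+)"', html).group(1)

# Stage 2: run a second, simpler pattern over the captured piece
# (here: pull the thread id out of the address).
thread_id = re.search(r'thread-(\d+)', link).group(1)

print(link)       # http://example.com/thread-1.html
print(thread_id)  # 1
```

Each stage's pattern stays short and readable, instead of one giant expression that has to match everything at once.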
I don't know big-data analysis; please enlighten me.
import requests
from bs4 import BeautifulSoup

r = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html")
r.encoding = r.apparent_encoding
soup = BeautifulSoup(r.text, 'html.parser')
result = soup.find(attrs={"class": "cont f14"})
print(result.text)
Use BeautifulSoup. Regular expressions for this get so long they give you a headache just to read. Crawl the data with BeautifulSoup first.
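To illustrate why a parser beats a regex here, a Python 3 sketch using only the standard library's html.parser (the "cont f14" class name comes from the thread above; the HTML snippet is invented):

```python
from html.parser import HTMLParser

class FloorText(HTMLParser):
    """Collect the text inside <div class="cont f14"> elements."""
    def __init__(self):
        super().__init__()
        self.in_floor = False   # True while inside a target div
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == 'div' and dict(attrs).get('class') == 'cont f14':
            self.in_floor = True

    def handle_endtag(self, tag):
        if tag == 'div':
            self.in_floor = False

    def handle_data(self, data):
        if self.in_floor:
            self.texts.append(data.strip())

p = FloorText()
p.feed('<div class="mainbox"><div class="cont f14">floor one</div>'
       '<div class="cont f14">floor two</div></div>')
print(p.texts)  # ['floor one', 'floor two']
```

The parser tracks where one tag ends and the next begins for you; a regex has to re-encode that structure in the pattern itself, which is where the headaches come from. (Note this simple sketch assumes the target divs contain no nested divs.)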
# -*- coding: utf-8 -*-
import re
import requests
from bs4 import BeautifulSoup
import sys

reload(sys)
sys.setdefaultencoding('utf-8')  # Python 2 only

url = "http://club.mil.news.sina.com.cn/viewthread.php?tid=666013&extra=page%3D1&page=1"
req = requests.get(url)
req.encoding = req.apparent_encoding
html = req.text
soup = BeautifulSoup(html, 'html.parser')
file = open('sina_club.txt', 'w')
x = 1
for tag in soup.find_all('div', attrs={'class': "cont f14"}):
    word = tag.get_text()
    line1 = "---------------Reply " + str(x) + "---------------------" + "\n"
    line2 = word + "\n"
    line = line1 + line2
    x += 1
    file.write(line)
file.close()
Ah, go crawl it then. Once the paper is done, can you tell me the page count and let me have a look? We don't do big-data analysis here... I suggest using a regex testing tool. What you need is pyquery: it lets you use the same selector syntax as jQuery. You deserve it.
https://pythonhosted.org/pyquery/