The regular expression you posted, `context_re = r'(.*?)'`, got truncated when it was posted (the HTML tags inside it were stripped by the forum), so as written it can only crawl the first segment.
Three things are needed to crawl the Sina military forum:

I. Work through teacher Wang Hai's column on CSDN: http://blog.csdn.net/column/details/why-bug.html

II. Press F12 and inspect the page's front-end structure.

III. Roughly the following script (cleaned up; note the page is GB18030-encoded):

```python
from bs4 import BeautifulSoup
import requests

response = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html?retcode=0")  # fetch the thread page
response.encoding = 'gb18030'  # the page is GB18030-encoded
soup = BeautifulSoup(response.text, 'html.parser')  # build the BeautifulSoup object
ps = soup('p', 'mainbox')  # one <p class="mainbox"> block per floor
for p in ps:
    comments = p.find_all('p', 'cont f14')  # the text of each floor
    with open('Sina_Military_Club.txt', 'a') as f:
        f.write('\n' + str(comments) + '\n')
```
Just a few hours ago I was writing a small program to crawl a website's member (company) information.
I won't answer the specific programming question; it has nothing to do with the language the code is written in. The key is to analyze the HTML structure of the page and write appropriate regular expressions to match it. To simplify things you can match in several stages: for example, first grab the first enclosing element (its tag was stripped when this was posted); the content inside it is the address of the original post, which you then process further.
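The multi-stage matching idea can be sketched like this. The HTML snippet, class names, and URL below are made up for illustration, since the real tag was stripped from the post above:

```python
import re

# Hypothetical snippet standing in for one floor of the forum page.
html = ('<p class="mainbox"><a href="http://example.com/post/1">floor 1</a>'
        '<p class="cont f14">first comment</p></p>')

# Stage 1: grab the enclosing block for one floor.
block = re.search(r'<p class="mainbox">(.*?)</p>', html, re.S).group(1)

# Stage 2: pull the post address out of that smaller block.
url = re.search(r'href="(.*?)"', block).group(1)
print(url)  # http://example.com/post/1
```

Splitting the match into two stages keeps each expression short, so a small change in the page layout only breaks one of them.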
I can't manage the big data analysis part, though. Please kindly advise.
```python
import requests
from bs4 import BeautifulSoup

r = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html")
r.encoding = r.apparent_encoding  # let requests guess the page encoding
soup = BeautifulSoup(r.text, 'html.parser')
result = soup.find(attrs={"class": "cont f14"})  # first matching floor only
print(result.text)
```
Use BeautifulSoup. Writing lots of regular expressions is a headache; crawl the data with BeautifulSoup instead.
```python
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup

url = "http://club.mil.news.sina.com.cn/viewthread.php?tid=666013&extra=page%3D1&page=1"
req = requests.get(url)
req.encoding = req.apparent_encoding  # guess the page encoding
soup = BeautifulSoup(req.text, 'html.parser')

with open('sina_club.txt', 'w') as f:
    x = 1
    for tag in soup.find_all('p', attrs={'class': "cont f14"}):  # one tag per floor
        word = tag.get_text()
        f.write("--------------- comment " + str(x) + " ---------------------\n")
        f.write(word + "\n")
        x += 1
```
Hey, it's mostly a matter of digging the data out. As for big data analysis, we haven't done that ourselves. Rather than wrestling with regular expressions, we recommend pyquery, an HTML parsing library that lets you use the same selector syntax as jQuery. You'll like it.
https://pythonhosted.org/pyquery/