How to write a Python program to crawl the Sina military forum?

Source: Internet
Author: User
Reply content:
content_re = r'(.*?)'

I prepared this regular expression, but the match is cut off at the first line break, so it only captures the first segment of each post.
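A likely cause of the truncation is that `.` does not match newlines by default, so the pattern stops at the first line break. Passing `re.S` (DOTALL) lets `.` span newlines. A minimal sketch, with an illustrative HTML snippet and tag names (not the asker's actual pattern):

```python
import re

html = '<p class="cont f14">first line\nsecond line</p>'

# Without re.S, '.' stops at the newline, so this pattern finds nothing.
truncated = re.findall(r'<p class="cont f14">(.*?)</p>', html)

# With re.S (DOTALL), '.' also matches '\n', so the full body is captured.
full = re.findall(r'<p class="cont f14">(.*?)</p>', html, re.S)

print(truncated)  # []
print(full)       # ['first line\nsecond line']
```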


Three things are required to crawl the Sina military forum:

I.

Work through Wang Hai's crawler column on CSDN ( http://blog.csdn.net/column/details/why-bug.html ) to learn the basics.


II.

Press F12 and inspect the page's HTML structure in the browser's developer tools.


III.

Write the crawler itself, for example:

from bs4 import BeautifulSoup
import requests

response = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html?retcode=0")  # fetch the thread URL
response.encoding = 'gb18030'  # the page uses a GBK-family encoding
soup = BeautifulSoup(response.text, 'html.parser')  # build the BeautifulSoup object
ps = soup('p', 'mainbox')  # one block per floor
for p in ps:
    comments = p.find_all('p', 'cont f14')  # the text of each floor
    with open('Sina_Military_Club.txt', 'a') as f:
        f.write('\n' + str(comments) + '\n')
Just a few hours ago, I was writing a small program to crawl website member (company) information.
I won't answer the specific programming question, and it has little to do with which language you use. The key is to analyze the HTML structure of the page and write appropriate regular expressions to match it. To simplify things, you can match in several passes: for example, first extract the block whose content contains the address of the original post, then process that block further. I can't help with big-data analysis, though.
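The multi-pass matching idea above can be sketched as follows; the HTML snippet, class name, and URL here are purely illustrative:

```python
import re

html = '''
<div class="post">
  <a href="http://example.com/thread-1.html">original post</a>
  <p>body text</p>
</div>
'''

# Pass 1: grab the whole block for one post (re.S lets '.' cross line breaks).
block = re.search(r'<div class="post">(.*?)</div>', html, re.S).group(1)

# Pass 2: within that smaller block, pull out the address of the original post.
link = re.search(r'<a href="(.*?)">', block).group(1)

print(link)  # http://example.com/thread-1.html
```

Splitting the match into two passes keeps each regular expression short and much easier to debug than one pattern that tries to capture everything at once.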

import requests
from bs4 import BeautifulSoup

r = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html")
r.encoding = r.apparent_encoding  # let requests guess the real encoding
soup = BeautifulSoup(r.text, 'html.parser')
result = soup.find(attrs={"class": "cont f14"})
print(result.text)
Use BeautifulSoup: piling up regular expressions quickly becomes a headache, while BeautifulSoup makes extracting the data straightforward.

# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup

url = "http://club.mil.news.sina.com.cn/viewthread.php?tid=666013&extra=page%3D1&page=1"
req = requests.get(url)
req.encoding = req.apparent_encoding
soup = BeautifulSoup(req.text, 'html.parser')

with open('sina_club.txt', 'w', encoding='utf-8') as f:
    x = 1
    for tag in soup.find_all('p', attrs={'class': "cont f14"}):
        word = tag.get_text()
        f.write("--------------- comment " + str(x) + " ---------------------\n")
        f.write(word + "\n")
        x += 1
Hey, you just need to dig the data out of the page. We didn't do the big-data analysis ourselves. I recommend pyquery, an HTML parsing library that supports the same selector syntax as jQuery:

https://pythonhosted.org/pyquery/
