How do I write a Python program to crawl the Sina military forum?

Source: Internet
Author: User

Reply content:

context_re = r'(.*?)'

The regular expression you prepared got truncated when you posted it, so it's broken. That's why only the first paragraph can be crawled.
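A common reason a non-greedy `(.*?)` capture stops at the first paragraph is that `.` does not match newlines by default; passing `re.DOTALL` lets the group span multiple paragraphs. A minimal sketch (the tag and class names here are illustrative placeholders, not Sina's actual markup):

```python
import re

html = '<div class="cont f14">para one\npara two</div>'

# Without re.DOTALL, '.' cannot cross the newline, so the closing </div>
# is never reached and the match fails entirely.
first_only = re.search(r'<div class="cont f14">(.*?)</div>', html)

# With re.DOTALL, the non-greedy group spans newlines and captures
# the full multi-paragraph body.
full = re.search(r'<div class="cont f14">(.*?)</div>', html, re.DOTALL)
```

So when the pattern "only gets the first paragraph" (or nothing at all), check whether the text you want to capture contains line breaks.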


There are three things you need to do to crawl the Sina Military Forum:

First,

Study Wang Hai's column on CSDN: http://blog.csdn.net/column/details/why-bug.html


Second,

Press F12 in the browser to inspect the page's front-end HTML.


Third,

from bs4 import BeautifulSoup
import requests

response = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html?retcode=0")  # hard-coded URL
response.encoding = 'GB18030'  # Chinese encoding
soup = BeautifulSoup(response.text, 'html.parser')  # build the BeautifulSoup object
divs = soup('div', 'mainbox')  # each floor (post)
for div in divs:
    comments = div.find_all('div', 'cont f14')  # body text of each floor
    with open('sina_military_club.txt', 'a') as f:
        f.write('\n' + str(comments) + '\n')
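The snippet above hard-codes page 1 of the thread. The forum looks like a Discuz-style board, where thread URLs seem to follow the pattern `thread-<tid>-<page>-1.html`; assuming that pattern holds (it is inferred from the single example link, so verify it against the site), you can generate per-page URLs and loop over them:

```python
def build_page_urls(tid, num_pages):
    """Build Discuz-style thread URLs, one per page.

    The 'thread-<tid>-<page>-1.html' pattern is an assumption inferred
    from the example link in this thread; check it before relying on it.
    """
    base = "http://club.mil.news.sina.com.cn/thread-{tid}-{page}-1.html"
    return [base.format(tid=tid, page=p) for p in range(1, num_pages + 1)]

urls = build_page_urls(666013, 3)
# Each URL can then be fetched with requests.get() as in the snippet above.
```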
Just a few hours ago I wrote a small program to crawl membership (company) information from a website.
I won't answer the specific programming problem, and the language you write the code in doesn't matter. The key is to analyze the HTML structure of the page and write an appropriate regular expression to match it. To simplify things, you can do a sub-match: first grab the block that contains what you want (such as the original post's address), then process that block further.
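The "sub-match" idea above, matching a larger block first and then running a second expression over it, can be sketched like this. The HTML structure and class names are hypothetical placeholders, not Sina's real markup:

```python
import re

html = '''
<li class="post"><a href="http://example.com/thread-1.html">Post one</a></li>
<li class="post"><a href="http://example.com/thread-2.html">Post two</a></li>
'''

# Stage 1: grab each list item as a block.
blocks = re.findall(r'<li class="post">(.*?)</li>', html, re.DOTALL)

# Stage 2: within each block, pull out the thread URL for further processing.
links = [re.search(r'href="(.*?)"', b).group(1) for b in blocks]
```

Splitting the match into two stages keeps each pattern short and makes it easier to debug which stage fails when the page layout changes.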
I don't know big data analysis, so please enlighten me on that part.
import requests
from bs4 import BeautifulSoup

r = requests.get("http://club.mil.news.sina.com.cn/thread-666013-1-1.html")
r.encoding = r.apparent_encoding
soup = BeautifulSoup(r.text, 'html.parser')
result = soup.find(attrs={"class": "cont f14"})
print(result.text)
Use BeautifulSoup; with this many regular expressions it's a headache just to look at them. Crawl the data with BeautifulSoup first.
# -*- coding: utf-8 -*-
# (the original's Python 2 reload(sys)/sys.setdefaultencoding('utf-8') hack
#  is unnecessary in Python 3 and has been dropped)
import requests
from bs4 import BeautifulSoup

url = "http://club.mil.news.sina.com.cn/viewthread.php?tid=666013&extra=page%3D1&page=1"
req = requests.get(url)
req.encoding = req.apparent_encoding
html = req.text
soup = BeautifulSoup(html, 'html.parser')

with open('sina_club.txt', 'w') as f:
    x = 1
    for tag in soup.find_all('div', attrs={'class': "cont f14"}):
        word = tag.get_text()
        line1 = "---------------Reply " + str(x) + "---------------------\n"
        line2 = word + "\n"
        f.write(line1 + line2)
        x += 1
Ah, so it's been crawled. Can you tell me the number of pages so I can take a look? I don't do big data analysis... I'd suggest a regex testing tool. What you need is pyquery, which lets you use the same selector syntax as jQuery. You deserve it.
https://pythonhosted.org/pyquery/