Sharing a crawler written in Python that scrapes questions and replies

Source: Internet
Author: User
Tags: python

When I came across a Zhihu collection called "How to Tucao Properly", some of the witty replies in it were really funny, but reading it one page at a time was a bit troublesome, and I had to open the webpage every time I wanted a look. Wouldn't it be nicer to crawl all the pages into one file that I could read at any time? So I got to work.

Tools

1. Python 2.7

2. BeautifulSoup (the 3.x series; on PyPI the package is simply named BeautifulSoup, and the code below imports it as from BeautifulSoup import BeautifulSoup)

Analyze web pages

Let's take a look at the situation on this webpage.

URL: http://www.zhihu.com/collection/27109279?page=1

It is easy to see that the URL is regular: only the page parameter changes, increasing by one per page, so all the pages can be crawled simply by iterating over that number.
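Before writing the full crawler, it helps to fetch one page and eyeball the markup. A minimal sketch (urllib2 ships with Python 2.7; the URL is the same one used in the full script below):

import urllib2

# fetch page 1 of the collection and print the start of the HTML
html = urllib2.urlopen("http://www.zhihu.com/collection/27109279?page=1").read()
print html[:1000]  # enough to spot the tags that hold questions and answers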

Let's take a look at the content we want to crawl:

(screenshot: a question and its answers on the collection page)

We need to crawl two things: the question and its answers, and only answers whose full text is shown on the page. An answer collapsed like the one below cannot be crawled, since it apparently cannot be expanded from this page (I wasn't going to bother with that anyway); if an answer is not shown in full, crawling it is useless.

(screenshot: a collapsed answer)

Well, we need to find their location in the source code of the webpage:

(screenshot: the question in the page source)

That is, the question text is contained in an h2 tag whose class attribute is zm-item-title.

Then the reply:

(screenshot: the reply in the page source)

The reply content appears in two places in the source; we want the tag whose class attribute is zh-summary clearfix, because the other occurrence also includes extra content we don't need.
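To confirm that these two class names really select the right tags, a quick check along the lines of the full script below (same assumptions: Python 2.7, BeautifulSoup 3, and the collection URL used later) that fetches page 1 and prints what each match is:

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("http://www.zhihu.com/collection/27109279?page=1")
soup = BeautifulSoup(page)

# a list value matches tags whose class equals either string exactly
for tag in soup.findAll(attrs={'class': ['zm-item-title', 'zh-summary clearfix']}):
    print tag.name, tag.get('class')  # h2 for questions, the other tag for answers

Note that BeautifulSoup 3 compares the whole class string, so 'zh-summary clearfix' must match the attribute exactly as it appears in the source.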

Code

Well, now let's try to write the Python code:

# -*- coding: cp936 -*-
import urllib2
from BeautifulSoup import BeautifulSoup

f = open('howtoTucao.txt', 'w')  # open the output file

for pagenum in range(1, 21):  # crawl from page 1 to page 20

    strpagenum = str(pagenum)  # the page number as a string
    print "Getting data for Page " + strpagenum  # show progress in the shell
    url = "http://www.zhihu.com/collection/27109279?page=" + strpagenum  # page URL
    page = urllib2.urlopen(url)  # open the web page
    soup = BeautifulSoup(page)  # parse it with BeautifulSoup

    # find all tags whose class attribute is one of these two
    ALL = soup.findAll(attrs={'class': ['zm-item-title', 'zh-summary clearfix']})

    for each in ALL:  # enumerate all questions and answers
        # print type(each.string)
        # print each.name
        if each.name == 'h2':  # an h2 tag is a question
            # the question text sits in a nested <a ...> tag,
            # so each.a.string is needed to retrieve it
            print each.a.string
            if each.a.string:  # write only if it is not empty
                # BeautifulSoup returns unicode; encode it before writing
                f.write(each.a.string.encode('utf-8'))
            else:  # otherwise, write "No Answer"
                f.write("No Answer")
        else:  # it is an answer; write it the same way
            print each.string
            if each.string:
                f.write(each.string.encode('utf-8'))
            else:
                f.write("No Answer")

f.close()  # close the file
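A side note on the No Answer branches: in BeautifulSoup 3, tag.string is only set when a tag has exactly one text child; for collapsed or richly formatted answers it is None, which is why the fallback is needed. A minimal self-contained sketch of that behavior (the sample HTML is made up for illustration):

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup('<div>plain text</div><div><b>rich</b> text</div>')
divs = soup.findAll('div')
print divs[0].string  # u'plain text' -- exactly one text child, so .string works
print divs[1].string  # None -- more than one child, hence the No Answer fallback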

Although the code is not long, writing it took me half a day, and I ran into all sorts of problems along the way.

Run

Then we can run it to crawl:

(screenshot: the script printing its progress in the shell)

Result

After the run completes, open howtoTucao.txt and you can see that the crawl succeeded. The formatting is still a bit off, though: it turns out "No Answer" was written without any line break, so it gets mixed into the surrounding text. The fix is to append two line breaks after each entry.
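A minimal sketch of that fix (my own adjustment to the writing steps above; write_entry is a helper name I made up, not part of the original listing):

def write_entry(f, text):
    # write one question or answer followed by a blank line,
    # falling back to "No Answer" when the text is missing
    if text:
        f.write(text.encode('utf-8') + "\n\n")
    else:
        f.write("No Answer\n\n")

In the loop above, the four f.write(...) calls then become write_entry(f, each.a.string) for questions and write_entry(f, each.string) for answers.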

