Simple Python crawler code sharing

This article shares a simple Python crawler that scrapes the questions and answers in a Zhihu favorites collection. It is a personal-interest project: on Zhihu I came across a collection called "How to Tucao Correctly" (tucao: to roast or complain, hence the output file name howtoTucao.txt below), and some of the top replies in it are really funny. Browsing it is a little troublesome, though, because you have to open the web page and click through it one page at a time. So I wanted to see whether I could crawl all the pages into a single file that I can read at any time. Let's get started.

Tools

1. Python 2.7
2. BeautifulSoup

Analyzing the web page

Let's first take a look at the web page itself.

The URL: it is easy to see that the URLs follow a regular pattern, with the page number increasing by one for each page, so we can construct the address of every page and crawl them all.
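For instance, a minimal sketch of how all the page addresses can be built up front (the collection URL is the one used in the crawler code below):

# build the address of every page; only the page number changes
base = "http://www.zhihu.com/collection/27109279?page="
urls = [base + str(n) for n in range(1, 21)]  # pages 1 to 20
print urls[0]  # http://www.zhihu.com/collection/27109279?page=1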

Let's take a look at the content we want to crawl:

We need to crawl two pieces of content: the question and the answer. We only keep answers that display their full text; an answer like the one below, which is collapsed behind an "expand" link, cannot be scraped directly (and I'm not going to bother with it anyway), and a truncated answer is useless to crawl.

Well, we need to find their location in the source code of the webpage:

Looking at the source, the question is contained in an <h2> tag with class zm-item-title, so as long as we find this tag, we can extract the question from it.

Then the answer:

The answer content appears in two places in the source. The block above also includes some other content mixed in, which is not convenient to handle, so we crawl the block below (the tag with class zh-summary clearfix), because the content in it is clean and pollution-free.
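As a rough illustration of how BeautifulSoup picks out both kinds of tag, here is a sketch; the HTML snippet is a simplified stand-in I made up, but the two class names are the ones the real page uses and that the crawler below searches for:

from BeautifulSoup import BeautifulSoup

# a simplified stand-in for the real Zhihu markup (assumed structure)
html = '''
<h2 class="zm-item-title"><a href="#">A question title</a></h2>
<div class="zh-summary clearfix">An answer body</div>
'''
soup = BeautifulSoup(html)
# findAll matches any tag whose class is one of the two strings
for tag in soup.findAll(attrs={'class': ['zm-item-title', 'zh-summary clearfix']}):
    if tag.name == 'h2':
        print 'question:', tag.a.string  # the question text sits inside an <a> tag
    else:
        print 'answer:', tag.string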

Code

Well, now let's try to write the Python code.

The code is as follows:


# -*- coding: cp936 -*-
import urllib2
from BeautifulSoup import BeautifulSoup

f = open('howtoTucao.txt', 'w')  # open the output file

for pagenum in range(1, 21):  # crawl pages 1 through 20

    strpagenum = str(pagenum)  # the page number as a string
    print "Getting data for Page " + strpagenum  # show progress in the shell
    url = "http://www.zhihu.com/collection/27109279?page=" + strpagenum  # the page URL
    page = urllib2.urlopen(url)  # open the web page
    soup = BeautifulSoup(page)  # parse it with BeautifulSoup

    # find all tags whose class attribute is one of the following two
    ALL = soup.findAll(attrs={'class': ['zm-item-title', 'zh-summary clearfix']})

    for each in ALL:  # enumerate all questions and answers
        if each.name == 'h2':  # an h2 tag is a question
            print each.a.string  # the question text sits inside an <a> tag, so read each.a.string
            if each.a.string:  # write it only if it is not empty
                f.write(each.a.string.encode('utf-8'))  # encode the unicode text before writing
            else:  # otherwise write "No Answer"
                f.write("No Answer")
        else:  # an answer: write it the same way
            print each.string
            if each.string:  # .string is None when the tag has nested children
                f.write(each.string.encode('utf-8'))
            else:
                f.write("No Answer")

f.close()  # close the file

Although I don't write code like this very often, it still took me half a day, and I ran into various problems along the way.

Run

Then we can run it and start crawling:

Result

After the run completes, open the file howtoTucao.txt and you can see that the crawl succeeded. The formatting is still a bit off, though: it turns out "No Answer" is written without any line breaks, so it gets mixed into the surrounding text. Adding two line breaks fixes this.
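One way to apply the fix is a small helper that replaces the four write branches in the loop above (write_entry is a hypothetical name, and here the blank line is added after every entry, not just after "No Answer"):

def write_entry(f, text):
    # write one question or answer; fall back to the "No Answer"
    # placeholder, then add two line breaks so entries do not run
    # together in howtoTucao.txt
    if text:
        f.write(text.encode('utf-8'))
    else:
        f.write("No Answer")
    f.write("\n\n")

The calls in the loop then become write_entry(f, each.a.string) for questions and write_entry(f, each.string) for answers.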
