Python crawler from getting started to giving up (18): Scrapy crawls all Zhihu user information (Part 1)

Source: Internet
Author: User

The crawling approach

First we find an account that follows many people and is itself followed by many people, one at the top of the pyramid. We crawl that account's information, then crawl the information of the accounts it follows and the accounts that follow it, then do the same for each of those accounts' followee and follower lists. By recursing in this way we can crawl the information of all Zhihu accounts. The entire process is represented by the following two graphs:

Crawler analysis process
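Conceptually, this recursive crawl is a breadth-first traversal over accounts. A minimal sketch (fetch_user_info and fetch_related_users are hypothetical placeholders, not functions from this article):

from collections import deque

def crawl_all(start_user, fetch_user_info, fetch_related_users):
    # Breadth-first walk starting from one well-connected account.
    seen = {start_user}
    queue = deque([start_user])
    while queue:
        user = queue.popleft()
        fetch_user_info(user)                    # crawl this account's information
        for other in fetch_related_users(user):  # its followees and followers
            if other not in seen:                # avoid crawling the same account twice
                seen.add(other)
                queue.append(other)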

The account address we use here is: https://www.zhihu.com/people/excited-vczh/answers
The main information we crawl for this big-V account is:

Next we want to get this account's followee list and follower list.

Here we use packet-capture analysis to find the requests behind these lists and behind the user's personal information.
When we view his follower list we can see the address that is requested, and the result returned is JSON data containing one page of user information.
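Each page has a structure roughly like the following (an illustrative sketch written as a Python dict; the field names paging, is_end, next, data and url_token are assumptions based on typical Zhihu v4 API responses, not copied from this article):

page = {
    'paging': {
        'is_end': False,   # whether this is the last page
        'next': 'https://www.zhihu.com/api/v4/members/excited-vczh/followees?offset=20&limit=20',
    },
    'data': [
        # one entry per user on this page
        {'url_token': 'some-user', 'name': '...', 'follower_count': 123, 'answer_count': 45},
    ],
}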

Although the above lets us obtain an individual user's information, it is not particularly complete. The address that returns a person's full information appears when we hover the mouse over the user name; at that point we can see a request being sent:

Looking at the result returned by this address, we know that this is the request that obtains the user's detailed information:

Through the above analysis we know the following two addresses:
Address for getting the user's followee list: https://www.zhihu.com/api/v4/members/excited-vczh/followees?include=data%5B*%5D.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset=0&limit=20

Address for getting an individual user's details: https://www.zhihu.com/api/v4/members/cheng-cheng-78-35?include=allow_message%2Cis_followed%2Cis_following%2Cis_org%2Cis_blocking%2Cemployments%2Canswer_count%2Cfollower_count%2Carticles_count%2Cgender%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics

From these two request addresses we can notice something: the url_token in the user information is the credential for obtaining a single user's details and an important parameter of the request, and when we open a follower's link we find that the unique identifier in the requested address is also this url_token.
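As a quick check of how the two addresses fit together, here is a small sketch using the requests library (the header value is a placeholder; note that the API may also require the extra request-header parameter discussed further below):

import requests

# The two addresses from the analysis above; {url_token} is filled in per user.
FOLLOWEES_URL = ('https://www.zhihu.com/api/v4/members/excited-vczh/followees'
                 '?include=data%5B*%5D.answer_count%2Carticles_count%2Cgender'
                 '%2Cfollower_count%2Cis_followed%2Cis_following'
                 '%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset=0&limit=20')
USER_URL = ('https://www.zhihu.com/api/v4/members/{url_token}'
            '?include=allow_message%2Cis_followed%2Cis_following%2Cis_org%2Cis_blocking'
            '%2Cemployments%2Canswer_count%2Cfollower_count%2Carticles_count%2Cgender'
            '%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics')

headers = {'User-Agent': 'Mozilla/5.0'}  # placeholder; use your own User-Agent

page = requests.get(FOLLOWEES_URL, headers=headers).json()
for user in page.get('data', []):
    # The url_token from the list is the unique identifier in the detail address.
    detail = requests.get(USER_URL.format(url_token=user['url_token']),
                          headers=headers).json()
    print(detail.get('name'), detail.get('follower_count'))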

Creating the project for further analysis

Create the project with the following commands:
scrapy startproject zhihu_user
cd zhihu_user
scrapy genspider zhihu www.zhihu.com
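After these commands the project has the standard layout generated by Scrapy (exact files vary slightly with the Scrapy version):

zhihu_user/
    scrapy.cfg
    zhihu_user/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            zhihu.py    # created by the genspider command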

If we start the crawler directly with scrapy crawl zhihu, we will see the following error:

This is a problem frequently encountered when crawling websites; after seeing it a few times you will recognize it as a request-header problem: we should add a User-Agent to the request headers. In the settings configuration file, the request-header setting is commented out by default; we can uncomment it and add a User-Agent, as follows:
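For example, in settings.py the uncommented block could look like this (the User-Agent string below is only a sample; copy your own from the browser):

# settings.py
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    # Sample User-Agent; replace it with the one from your own browser.
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36'),
}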

As for how to get a User-Agent, you can find it in the request headers of a captured request, or type chrome://version/ into the Chrome address bar to view it.
This way we can access the site normally.
Then we can override the first request. The earlier article in this Scrapy series about spiders already explained how to override start_requests; here we make the first requests fetch the followee list and the user information, along the lines of the sketch below.
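A minimal sketch of what this override could look like (the attribute names start_user, user_url, user_query, follows_url and follows_query, and the callbacks parse_user and parse_follows, are illustrative placeholders assembled from the addresses analysed above, not the article's final code):

import scrapy


class ZhihuSpider(scrapy.Spider):
    name = 'zhihu'
    allowed_domains = ['www.zhihu.com']

    # Starting account at the "top of the pyramid".
    start_user = 'excited-vczh'
    # URL templates built from the two addresses found during packet analysis.
    user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
    user_query = ('allow_message,is_followed,is_following,is_org,is_blocking,'
                  'employments,answer_count,follower_count,articles_count,gender,'
                  'badge[?(type=best_answerer)].topics')
    follows_url = ('https://www.zhihu.com/api/v4/members/{user}/followees'
                   '?include={include}&offset={offset}&limit={limit}')
    follows_query = ('data[*].answer_count,articles_count,gender,follower_count,'
                     'is_followed,is_following,badge[?(type=best_answerer)].topics')

    def start_requests(self):
        # First request: the starting user's detailed information.
        yield scrapy.Request(
            self.user_url.format(user=self.start_user, include=self.user_query),
            callback=self.parse_user)
        # Second request: the first page of the starting user's followee list.
        yield scrapy.Request(
            self.follows_url.format(user=self.start_user, include=self.follows_query,
                                    offset=0, limit=20),
            callback=self.parse_follows)

    def parse_user(self, response):
        pass  # parsing is left for the next article

    def parse_follows(self, response):
        pass  # parsing is left for the next article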

This time we start the crawler again.

We will see a 401 error, and the solution is again a request-header problem. From this we can also see that much of the information contained in the request headers affects whether we can crawl this site, so whenever a direct request to a site fails, look at the request headers and check whether they are the cause of the failed request. Here the parameter is as shown:

So we need to add this parameter to the request header as well:
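Assuming the parameter shown in the screenshot is an authorization header (the value below is a placeholder; copy the real value from your own captured request), it can be added next to the User-Agent in settings.py:

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 ...',  # same sample value as above
    # Placeholder: copy the real value from the request captured in the browser.
    'authorization': '<value-from-packet-capture>',
}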

Then restart the crawler; this time we can get the content normally.

At this point the basic analysis is essentially complete; what remains is the concrete code. The next article will cover the specific implementation!
