Python-01 Spider principle


What can you do with Python? You can automate everyday tasks, such as backing up your MP3 files; you can build websites (several well-known sites, including parts of YouTube, are written in Python); and you can write the back ends of online games, as many game back ends are developed in Python. In short, it can do a great many things.

Of course, there are things Python cannot do well: operating systems can only be written in C; native mobile apps call for Swift/Objective-C (for the iPhone) and Java (for Android); and 3D games are best written in C or C++.

This is for you if you are a complete beginner who meets the following conditions:

    • you can use a computer, but have never written a program;
    • you still remember your junior high school equations and a little algebra;
    • you want to go from programming novice to professional software architect;
    • you can spare half an hour a day to study.
I. What is a crawler?

In short, the Internet is a large network made up of websites and network devices. We visit websites through a browser; the site returns HTML, JS, and CSS code to the browser; the browser parses and renders that code and presents the resulting web pages before our eyes.

If we compare the Internet to a large spider web, then data is stored at the web's various nodes, and a crawler is a little spider that travels along the web collecting its prey (data). A crawler is a program that sends requests to a website and then analyzes and extracts useful data from the resources it obtains.

In technical terms, the program simulates a browser's requests to the site, fetches whatever the site returns (HTML code, JSON data, or binary data such as images and video), and then extracts the data it needs and stores it for use.

II. The basic flow of a crawler

How a user obtains network data:

Mode 1: the browser submits a request ---> the page code is downloaded ---> it is parsed into a page.

Mode 2: a program simulates the browser sending a request (and obtaining the page code), then extracts the useful data and stores it in a database or a file.

What a crawler does is mode 2.

1. Initiate a request

Use an HTTP library to send a request to the target site.

The request includes request headers, a request body, and so on.

A limitation of the requests module: it cannot execute JS or CSS code, so content rendered by scripts will not appear in the fetched page.
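A minimal sketch of this step using the third-party requests library (pip install requests); the URL and the User-Agent string are illustrative values, not requirements:

```python
import requests

# Simulate a browser by sending a GET request with a browser-like User-Agent.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get("https://www.baidu.com", headers=headers, timeout=10)
```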

2. Get the response content

If the server responds normally, a Response is returned.

The response can contain HTML, JSON, pictures, videos, and so on.
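A sketch of inspecting the response object that requests returns (same illustrative URL as above):

```python
import requests

response = requests.get("https://www.baidu.com", timeout=10)
print(response.status_code)                  # 200 means the server responded normally
print(response.headers.get("Content-Type")) # tells you what kind of content came back
html = response.text     # decoded text, e.g. HTML or JSON
raw = response.content   # raw bytes, e.g. an image or a video
```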

3. Parse the content

Parsing HTML data: regular expressions (the re module) or third-party parsing libraries such as BeautifulSoup and PyQuery.

Parsing JSON data: the json module.

Parsing binary data: write the bytes to a file opened in "wb" mode.
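A sketch of each parsing route; BeautifulSoup is a third-party package (pip install beautifulsoup4), and the image URL is a placeholder:

```python
import json
import re
import requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = requests.get("https://www.baidu.com", timeout=10).text

# HTML: a regular expression and a BeautifulSoup query doing the same job.
title_by_re = re.search(r"<title>(.*?)</title>", html)
title_by_bs = BeautifulSoup(html, "html.parser").title

# JSON: the standard-library json module turns text into Python objects.
data = json.loads('{"name": "spider", "pages": 10}')

# Binary data: write the raw bytes to a file opened in "wb" mode.
img_bytes = requests.get("https://example.com/picture.png", timeout=10).content  # placeholder URL
with open("picture.png", "wb") as f:
    f.write(img_bytes)
```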

4. Save the data

To a database (MySQL, MongoDB, Redis) or to a file.
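A minimal storage sketch; to stay self-contained it uses SQLite from the standard library rather than the MySQL/MongoDB/Redis servers named above, but the idea is the same:

```python
import sqlite3

conn = sqlite3.connect("spider.db")
conn.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, title TEXT)")
conn.execute("INSERT INTO pages VALUES (?, ?)", ("https://www.baidu.com", "Baidu"))
conn.commit()   # persist the scraped record
conn.close()
```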

III. HTTP protocol: request and response

Request: the user sends their information to the server (the socket server) via the browser (the socket client).

Response: the server receives the request, parses the information the user sent, and returns data (the returned data may contain links to other resources, such as pictures, JS, and CSS files).

PS: after receiving the response, the browser parses its content and displays it to the user; similarly, a crawler sends a request, receives the response, and then extracts the useful data from it.
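To make the socket-client/socket-server picture concrete, here is a raw HTTP exchange sketched with only the standard library (example.com is a placeholder host):

```python
import socket

# The browser is a socket client, the website a socket server: send a raw
# HTTP request over the connection and read back the raw response.
client = socket.create_connection(("example.com", 80))
client.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

response = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    response += chunk
client.close()

print(response.decode("utf-8", errors="replace")[:300])  # status line + headers
```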

IV. Request

1. Request method

Common request methods: GET and POST.

2. Request URL

URL stands for Uniform Resource Locator. A URL uniquely identifies a resource on the Internet: a picture, a file, or a video can each be located by its URL.

URL encoding: non-ASCII characters in a URL must be percent-encoded before the request is sent, for example:

https://www.baidu.com/s?wd=图片

The search keyword (the Chinese word for "pictures") is encoded before transmission (see the sample code below).
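A sketch of the encoding with the standard library; 图片 is the Chinese word for "pictures" used in the example URL above:

```python
from urllib.parse import quote, urlencode

# Percent-encode a non-ASCII query value, then build the full search URL.
print(quote("图片"))  # -> %E5%9B%BE%E7%89%87
print("https://www.baidu.com/s?" + urlencode({"wd": "图片"}))
# -> https://www.baidu.com/s?wd=%E5%9B%BE%E7%89%87
```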

The loading process of a web page: the browser usually loads the HTML document first; while the document is being parsed, whenever a link such as an image hyperlink is encountered, another request is sent to download that resource.

3. Request headers

User-Agent: if the request headers carry no User-Agent configuration, the server may treat the client as an illegitimate user;

Cookie: cookies are used to carry saved login information.

Request headers worth paying attention to:

(1) Referer: where the visit came from (some large sites use the Referer header for anti-hotlinking, so a crawler should take care to simulate it);

(2) User-Agent: identifies the visiting browser (add it, or the server will treat you as a crawler);

(3) Cookie: take care to carry it in the request headers.
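A sketch of attaching these three headers with requests; every header value here is an illustrative placeholder:

```python
import requests

# All values below are placeholders, not real credentials.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://www.baidu.com/",
    "Cookie": "sessionid=xxxx",  # usually copied from a logged-in browser session
}
r = requests.get("https://www.baidu.com/s", params={"wd": "python"}, headers=headers)
print(r.status_code)
```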

4. Request body

If the method is GET, the request body has no content (a GET request's parameters are placed after the "?" in the URL, where they are directly visible). If the method is POST, the request body contains form data.

PS:
1. Login forms, file uploads, and so on place their information in the request body.
2. If you log in with a wrong username and password and then submit, you can capture the POST; after a correct login the page usually jumps (redirects), so the POST cannot be captured.
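A sketch of the GET/POST difference; httpbin.org is a public echo service used purely for illustration, and the credentials are placeholders:

```python
import requests

# GET: parameters travel in the URL query string, visible in the address bar.
requests.get("https://httpbin.org/get", params={"q": "python"}, timeout=10)

# POST: parameters travel in the request body as form data.
r = requests.post("https://httpbin.org/post",
                  data={"username": "alice", "password": "secret"},
                  timeout=10)
print(r.json()["form"])  # httpbin echoes the form back: {'username': ..., 'password': ...}
```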

V. Response

1. Response status codes

200: success

301: permanent redirect

404: file not found

403: access forbidden

502: server error (bad gateway)
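A sketch of checking the status code (again using httpbin.org as a test endpoint):

```python
import requests

r = requests.get("https://httpbin.org/status/404", timeout=10)
print(r.status_code)   # 404
if r.status_code == 200:
    print("success")
r.raise_for_status()   # raises requests.exceptions.HTTPError for 4xx/5xx codes
```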

2. Response headers

Headers worth paying attention to in a response:

(1) Set-Cookie: BDSVRTM=0; path=/ : there may be several of these; they tell the browser to save the cookie;

(2) Location: when the server puts a Location header in its response, the browser goes on to visit that other page (a redirect).
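A sketch of looking at these response headers; httpbin.org/redirect/1 reliably answers with a 302 whose Location header points to /get:

```python
import requests

r = requests.get("https://httpbin.org/redirect/1", timeout=10)
for hop in r.history:  # requests followed the redirect automatically
    print(hop.status_code, hop.headers.get("Location"))  # 302 /get
print(r.status_code, r.url)          # 200 https://httpbin.org/get
print(r.headers.get("Set-Cookie"))   # any cookie the final response sets (may be None)
```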

3. The Preview pane (in the browser's developer tools) shows the response body

For example: JSON data, a web page's HTML, a picture, or other binary data.

VI. Summary

1. The crawler process in summary:

Crawl ---> Parse ---> Store

2. Tools the crawler needs:

Request libraries: requests, Selenium (Selenium can drive a browser to render CSS and JS, but at a performance cost: every page is fully loaded whether its content is useful or not).
Parsing libraries: regular expressions, BeautifulSoup, PyQuery.
Storage: files, MySQL, MongoDB, Redis.

Knowledge involved: multithreading and multiprocessing.

Compute-intensive tasks: use multiple processes; because Python has the GIL, only multiple processes can take advantage of a multicore CPU.

IO-intensive tasks: use multiple threads, e.g. a thread pool, to switch between IO waits and save task execution time (concurrency).
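As an illustration of the IO-bound case, a minimal thread-pool sketch with placeholder URLs:

```python
import requests
from concurrent.futures import ThreadPoolExecutor

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholders

def fetch(url):
    """Download one page; threads overlap while each waits on network IO."""
    return url, requests.get(url, timeout=10).status_code

with ThreadPoolExecutor(max_workers=5) as pool:
    for url, status in pool.map(fetch, urls):
        print(url, status)
```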

Reference: https://www.cnblogs.com/sss4/p/7809821.html
