Overview
First, what is a crawler?
If we compare the Internet to a large spider web, then the data is stored at the web's many nodes, and the crawler is a small spider that crawls along the web to capture its prey (the data). In other words, a crawler is a program that sends requests to a website, obtains the resources, and then analyzes them to extract useful data.
Technically, a crawler simulates a browser's behavior when requesting a site: whatever the site returns (HTML code, JSON data, or binary data such as images and videos) is fetched to the local machine, and the data we need is then extracted and stored for use.
Second, the basic workflow of a crawler:
How a user obtains network data:
Mode 1: the browser submits a request ---> the page code is downloaded ---> the code is parsed and rendered into a page
Mode 2: a program simulates the browser sending the request (to obtain the page code), extracts the useful data, and stores it in a database or a file
What a crawler does is Mode 2.
1. Initiate a request
Use an HTTP library to send a request to the target site.
A request includes: request headers, a request body, etc.
Limitation of the requests module: it cannot execute JS and CSS code, so content rendered by JavaScript is not fetched this way.
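A minimal sketch of this step, assuming the third-party requests library is installed; the URL and the User-Agent string below are placeholder values:

import requests

url = "https://example.com"  # placeholder target site
headers = {
    # Many sites reject bare requests, so a browser-like User-Agent
    # header is sent to simulate a real browser (placeholder value).
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
}

# Send an HTTP GET request to the target site.
response = requests.get(url, headers=headers, timeout=10)
print(response.status_code)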
2. Get the response content
If the server responds properly, a Response is returned.
The Response contains HTML, JSON, images, videos, etc.
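Continuing the sketch above (same assumed requests library and placeholder URL), the response object exposes its body in several forms:

import requests

response = requests.get("https://example.com", timeout=10)  # placeholder URL

if response.status_code == 200:  # the server responded properly
    content_type = response.headers.get("Content-Type", "")
    if "json" in content_type:
        data = response.json()    # body parsed as JSON
    elif "text" in content_type:
        page = response.text      # body decoded as text, e.g. HTML
    else:
        blob = response.content   # body as raw bytes (image, video, ...)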
3. Parse the content
Parsing HTML data: regular expressions (the re module), or third-party parsing libraries such as BeautifulSoup, pyquery, etc.
Parsing JSON data: the json module
Parsing binary data: write the file in "wb" mode
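A short sketch of each case, assuming the third-party beautifulsoup4 package is installed (the sample HTML, JSON, and bytes below are made up for illustration):

import json
import re

from bs4 import BeautifulSoup

html = "<html><body><a href='/page1'>Page 1</a></body></html>"  # sample HTML

# Parse HTML with a regular expression (re module)...
links = re.findall(r"href='([^']*)'", html)

# ...or with a third-party parser such as BeautifulSoup.
soup = BeautifulSoup(html, "html.parser")
texts = [a.get_text() for a in soup.find_all("a")]

# Parse JSON data with the json module.
data = json.loads('{"name": "crawler", "pages": 3}')

# Save binary data (e.g. an image) by writing the file in "wb" mode.
image_bytes = b"\x89PNG..."  # stand-in for response.content
with open("picture.png", "wb") as f:
    f.write(image_bytes)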
4. Save the data
Database (MySQL, MongoDB, Redis)
File
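A sketch of this last step, using the standard-library sqlite3 module as a stand-in for a real database such as MySQL, MongoDB, or Redis (the rows are hypothetical parsed results):

import sqlite3

rows = [("Page 1", "/page1"), ("Page 2", "/page2")]  # hypothetical parsed data

# Save to a file.
with open("results.txt", "w", encoding="utf-8") as f:
    for title, link in rows:
        f.write(title + "\t" + link + "\n")

# Save to a database (SQLite here as a stand-in for MySQL/MongoDB/Redis).
conn = sqlite3.connect("crawler.db")
conn.execute("CREATE TABLE IF NOT EXISTS pages (title TEXT, link TEXT)")
conn.executemany("INSERT INTO pages VALUES (?, ?)", rows)
conn.commit()
conn.close()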