A crawler does two things (sketched in code below):
① impersonate a browser and send requests to the server
② receive the server's response and parse it to extract the required information
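A minimal sketch of these two steps, assuming the third-party requests library is installed (pip install requests); the URL and the regex are placeholders for illustration, not part of the original text:

```python
import re

import requests  # third-party: pip install requests

# ① impersonate a browser and send a request to the server
headers = {"User-Agent": "Mozilla/5.0"}  # a browser-like identity
response = requests.get("https://example.com", headers=headers)

# ② receive the response content and parse out the information we need
match = re.search(r"<title>(.*?)</title>", response.text)
print(match.group(1) if match else "no title found")
```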
Internet pages are complex, and a single request cannot retrieve all the information, so you need to design a crawler workflow.
This book mainly introduces two kinds of workflow: ① the multi-page crawler workflow ② the cross-page crawler workflow
The multi-page crawler workflow (a sketch follows this list):
(1) Page through the site manually, observe how each page's URL is composed, and construct a list of URLs for all pages
(2) Loop over the URL list, taking each URL in turn
(3) Define a crawler function
(4) Call the crawler function in the loop and store the data
(5) When the loop ends, the crawler program ends.
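A minimal sketch of this workflow, again assuming the requests library; the page URL pattern, the page count, and the <h2> parsing rule are hypothetical examples, not from the original:

```python
import re

import requests  # third-party: pip install requests

# (1) After paging manually and observing the URL pattern,
#     construct the list of URLs for all pages (pattern is hypothetical).
url_list = [f"https://example.com/list?page={n}" for n in range(1, 6)]

# (3) Define the crawler function: fetch one page and extract item titles.
def crawl(url):
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers)
    # the <h2> rule is a placeholder for whatever the target site uses
    return re.findall(r"<h2>(.*?)</h2>", response.text)

# (2)(4) Loop over the URL list, call the crawler function, store the data.
results = []
for url in url_list:
    results.extend(crawl(url))

# (5) The loop ends, and so does the crawler.
print(len(results), "items collected")
```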
The cross-page crawler workflow (a sketch follows this list):
(1) Identify the topic URL of the page to crawl (the list page)
(2) Save the topic URL in a list as the seed URL
(3) Define a crawler function
(4) Call the crawler function starting from the seed URL and store the data
(5) When the loop ends, the crawler program ends.
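A minimal sketch of this workflow under the same assumptions; the seed URL, the href pattern for detail pages, and the title rule are placeholders chosen for illustration:

```python
import re

import requests  # third-party: pip install requests

HEADERS = {"User-Agent": "Mozilla/5.0"}

# (1)(2) The list page's URL is the seed URL; keep it in a list.
seed_urls = ["https://example.com/list"]

# (3) Crawler functions: one extracts detail-page URLs from the list page,
#     the other extracts data from a detail page.
def crawl_list(url):
    response = requests.get(url, headers=HEADERS)
    # the href rule is a placeholder for the target site's real markup
    return re.findall(r'href="(https://example\.com/article/\d+)"', response.text)

def crawl_detail(url):
    response = requests.get(url, headers=HEADERS)
    match = re.search(r"<title>(.*?)</title>", response.text)
    return match.group(1) if match else ""

# (4) Start from the seed URL, follow the extracted URLs, store the data.
results = []
for seed in seed_urls:
    for detail_url in crawl_list(seed):
        results.append(crawl_detail(detail_url))

# (5) The loop ends, and so does the crawler.
print(results)
```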
The difference between the two workflows: in the multi-page workflow you construct the URL list yourself; in the cross-page workflow the crawler extracts the page URL list as it crawls.