Python Crawler & Problem Solving & Thinking (1)
Recently I got started with Python and found some small tasks to practice on, hoping to keep improving my problem-solving skills through hands-on work. This small crawler comes from a MOOC course. Here I record the problems and solutions I encountered while learning, along with some thinking that goes beyond the crawler itself.
This small task is to write a small crawler. The most important reason is that big data is as hot as the weather in Wuhan. Data is to "Big Data" what weapons are to soldiers and bricks are to tall buildings. Without data, "Big Data" is a castle in the air that can never be implemented or applied to reality. So how do we get data? There are two ways: get it yourself, or get it from others. There is not much to say about getting it yourself; "others" here refers to the Internet.
First, you must understand what a crawler is: a program or script that automatically crawls World Wide Web information according to certain rules (from Baidu Encyclopedia). As the name suggests, it accesses a page, saves the page's content, filters out the content you are interested in, and stores that content separately. In real life we often do the same thing by hand: on a boring afternoon, we type an address into the browser, open the page, run into an article or paragraph that interests us, select it, and copy and paste it into a Word document. Scale those single-page operations up to millions or tens of millions of pages and your data keeps growing; we call this process "data collection".
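To make that access-save-filter-store cycle concrete, here is a minimal sketch using only Python's standard library. The target URL and the choice to extract the text of `<p>` tags are illustrative assumptions, not the course's actual code:

```python
# A minimal sketch of the fetch -> filter -> store cycle described above,
# using only the standard library. The URL and the decision to collect
# <p> text are assumptions for illustration.
from html.parser import HTMLParser
from urllib.request import urlopen


class ParagraphExtractor(HTMLParser):
    """Collects the text inside <p> tags -- the 'interesting content'."""

    def __init__(self):
        super().__init__()
        self.in_paragraph = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_paragraph = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_paragraph = False

    def handle_data(self, data):
        if self.in_paragraph and data.strip():
            self.paragraphs.append(data.strip())


# 1. Access the page and 2. save its content in memory.
html = urlopen("https://example.com").read().decode("utf-8")

# 3. Filter out the content we are interested in.
extractor = ParagraphExtractor()
extractor.feed(html)

# 4. Store it separately, like pasting into that Word document.
with open("collected.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(extractor.paragraphs))
```

A real crawler would repeat this loop over many URLs discovered from each page; that is exactly the scaling-up from one page to millions described above.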
Crawlers have two main advantages: automation and batch scale. There is a common misunderstanding here. Before I came into contact with crawlers, I thought they could crawl things I couldn't see; later I realized that crawlers crawl exactly the things I can already see.
The architecture and crawling process of this crawler are as follows: