For convenience, under Windows I used PyCharm; personally I feel it is an excellent tool for learning Python. A crawler, that is, a web crawler, can be understood as a spider crawling on the Internet: if the Internet is likened to a large web, then the crawler is the spider crawling on that web, and whenever it encounters a resource, it grabs it.
Before learning Python crawlers, learn some other background knowledge:
(a) URL
A URL, the Uniform Resource Locator, is what we commonly call a web address. It is a concise representation of the location of a resource available on the Internet and the method of accessing it, and it is the standard address of a resource on the Internet. Each file on the Internet has a unique URL that contains information indicating the location of the file and how the browser should handle it.
The URL format consists of three parts:
① The first part is the protocol (or service scheme).
② The second part is the host name or IP address (and sometimes the port number) where the resource is stored.
③ The third part is the specific address of the resource on the host, such as the directory and file name.
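The three parts above can be inspected with Python's standard `urllib.parse` module (in Python 3; the example URL below is made up for illustration):

```python
from urllib.parse import urlparse

# Split a hypothetical URL into the three parts described above.
parts = urlparse("http://www.example.com:8080/docs/index.html")

print(parts.scheme)  # → http                   (first part: protocol)
print(parts.netloc)  # → www.example.com:8080   (second part: host and port)
print(parts.path)    # → /docs/index.html       (third part: resource address)
```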
(b) The urllib and urllib2 libraries
The urllib and urllib2 libraries are the most basic libraries for learning Python crawlers. Using them we can fetch the content of a web page, then extract and analyze that content with regular expressions to get the results we want.
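Note that urllib2 belongs to Python 2; in Python 3 its functionality lives in `urllib.request`. A minimal sketch of fetching a page (the URL and User-Agent value here are just placeholders):

```python
from urllib.request import Request, urlopen

# Build a request for a hypothetical page; a User-Agent header is optional,
# but many sites reject requests that do not send one.
req = Request("http://www.example.com/",
              headers={"User-Agent": "Mozilla/5.0"})

print(req.full_url)                   # → http://www.example.com/
print(req.get_header("User-agent"))   # → Mozilla/5.0

# Actually downloading the page would then be (requires network access):
# html = urlopen(req).read().decode("utf-8")
```

(`get_header` expects the stored, capitalized form of the header name, hence `"User-agent"`.)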
(c) Regular expressions
A regular expression is a powerful weapon for matching strings. Its design idea is to use a descriptive language to define a rule for strings; any string that conforms to the rule is considered a "match", and any other string is rejected.
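A small sketch with Python's `re` module shows both ideas: defining a rule that strings either match or fail, and the crawler use case of extracting data from page content (the HTML snippet is invented for the example):

```python
import re

# Rule: the whole string must be one or more digits.
pattern = re.compile(r"^\d+$")

print(bool(pattern.match("12345")))  # → True  (conforms to the rule)
print(bool(pattern.match("12a45")))  # → False (does not conform)

# A crawler-flavored rule: pull link targets out of a snippet of HTML.
html = '<a href="http://www.example.com/page1.html">page 1</a>'
links = re.findall(r'href="(.*?)"', html)
print(links)  # → ['http://www.example.com/page1.html']
```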
Python web crawler Learning notes (i)