A web crawler can be pictured as a spider crawling over a web: the internet is the web, and the crawler is the spider moving across it, grabbing whatever resources it can reach along the way.
When you enter a URL in a browser, you open a web page full of text, images, and so on. What actually happens is this: the user's input sends a request to a server; the server parses it and sends back HTML, JS, CSS and other files; the browser then parses those files and renders the text, images, and so on. The web page we see is therefore essentially HTML code, with the code itself hidden behind the browser's interpretation. What a crawler fetches is exactly this HTML, and by analyzing and filtering it we can extract the text, images, and other resources we want.
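To make that concrete, here is a minimal sketch of the analyze-and-filter idea: fetch a page's HTML and pick out the image addresses in it. The regular expression is deliberately naive and just an illustrative assumption, not a robust HTML parser:

import re
import urllib2

# fetch the raw HTML of the page
html = urllib2.urlopen("http://www.cnblogs.com/mix88/").read()

# filter the HTML: a simple pattern that picks out the src attribute of <img> tags
for src in re.findall(r'<img[^>]+src="([^"]+)"', html):
    print src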
To crawl data, a crawler must have a definite URL to fetch from. URL stands for Uniform Resource Locator, which is what we usually call a web address. Crawling a web page really just means retrieving the page's content based on its URL. For static web pages, there are two simple ways to fetch the page content.
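As a quick illustration of what a URL actually contains, the standard library's urlparse module can break one into its parts (a minimal sketch):

from urlparse import urlparse

parts = urlparse("http://www.cnblogs.com/mix88/")
print parts.scheme   # http
print parts.netloc   # www.cnblogs.com
print parts.path     # /mix88/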
1. Call the urlopen method in the urllib2 library, passing in a URL. After urlopen executes, it returns a response object in which the returned data is stored; calling the response object's read method then returns the page content. The code is as follows:
import urllib2

response = urllib2.urlopen("http://www.cnblogs.com/mix88/")
print response.read()
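In practice the request can fail (the page may not exist, or the network may be down), so it is safer to wrap urlopen in exception handling. A minimal sketch (the timeout value is just an example):

import urllib2

try:
    response = urllib2.urlopen("http://www.cnblogs.com/mix88/", timeout=10)
    print response.read()
except urllib2.HTTPError as e:
    # the server answered with an error status such as 404 or 500
    print "HTTP error:", e.code
except urllib2.URLError as e:
    # the request never reached the server (DNS failure, refused connection, ...)
    print "URL error:", e.reason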
2. Construct a Request object and pass it to the urlopen method to fetch the web page. The code is as follows:
import urllib2

request = urllib2.Request("http://www.cnblogs.com/mix88/")
response = urllib2.urlopen(request)
print response.read()
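The advantage of building a Request object is that the request can be customized before it is sent, for example by adding headers. Some sites reject requests that lack a browser-like User-Agent, so a common pattern is the following sketch (the User-Agent string here is only an illustrative value):

import urllib2

# pretend to be a browser; the header value is an example, not a requirement
headers = {"User-Agent": "Mozilla/5.0"}
request = urllib2.Request("http://www.cnblogs.com/mix88/", headers=headers)
response = urllib2.urlopen(request)
print response.read()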