Let's talk about Python and web crawlers.
1. The definition of a crawler
Crawler: A program that automatically crawls Internet data.
2. The crawler's main framework
The main framework of the crawler is shown in the figure. The crawler scheduler asks the URL manager for a URL to crawl; as long as the URL manager still has URLs waiting to be crawled, the scheduler calls the web page downloader to download the corresponding page, then calls the web page parser to parse it. New URLs found in the page are added back to the URL manager, and the valuable data found in the page is output.
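As a rough illustration (this is not code from the original article), the loop can be sketched like this, with the caller supplying the download and parse functions and the URL manager collapsed into a queue plus a set for brevity:

from collections import deque

def crawl(root_url, download, parse, max_pages=50):
    """Minimal crawler scheduler loop as described above.

    download(url) -> page content, and parse(url, page) -> (new_urls, data),
    are assumed callables supplied by the caller."""
    new_urls = deque([root_url])   # URLs waiting to be crawled
    old_urls = set()               # URLs already crawled
    while new_urls and len(old_urls) < max_pages:
        url = new_urls.popleft()
        if url in old_urls:            # skip duplicates
            continue
        old_urls.add(url)
        page = download(url)                  # web page downloader
        found_urls, data = parse(url, page)   # web page parser
        for u in found_urls:
            if u not in old_urls:
                new_urls.append(u)            # feed new URLs back in
        print(data)                           # output valuable data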
3. The sequence diagram of the crawler
4. URL Manager
The URL manager maintains the collection of URLs to be crawled and the collection of URLs that have already been crawled, preventing duplicate and circular fetching. Its main functions are to accept new URLs, to report whether any URLs are still waiting to be crawled, and to hand out the next URL to crawl.
A URL manager is mainly implemented either in memory (with a set) or in a relational database (such as MySQL). Small programs typically keep everything in memory, since Python's built-in set type automatically eliminates duplicate elements; larger programs generally use a database.
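For instance, a minimal in-memory URL manager might look like the following sketch; the class and method names are illustrative, not a fixed API:

class UrlManager:
    """In-memory URL manager built on two set() objects."""

    def __init__(self):
        self.new_urls = set()  # URLs waiting to be crawled
        self.old_urls = set()  # URLs already crawled

    def add_new_url(self, url):
        # set membership tests make the duplicate check automatic
        if url and url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        for url in urls or []:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) > 0

    def get_new_url(self):
        url = self.new_urls.pop()
        self.old_urls.add(url)  # mark as crawled on the way out
        return url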
5. Web Downloader
The web page downloader in Python mainly uses the urllib library, a module that ships with Python. The urllib2 library of the Python 2.x releases was merged into urllib in Python 3.x, under urllib.request and other submodules. The urlopen function in urllib.request opens a URL and fetches its data. Its parameter can be either a URL string or a Request object: for a simple web page, a URL string is sufficient, but for complex pages with anti-crawler measures you need to use a Request object and add HTTP headers, and for pages with a login mechanism you need to set a cookie.
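For example, the three cases can be handled roughly as follows (http://www.example.com is a placeholder URL, and the User-Agent value is just a common choice, not required by the article):

from urllib.request import urlopen, Request, build_opener, HTTPCookieProcessor
import http.cookiejar

# Simple page: a plain URL string is enough.
html = urlopen("http://www.example.com").read()

# Page with anti-crawler checks: wrap the URL in a Request and add HTTP headers.
req = Request("http://www.example.com",
              headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req).read()

# Page with a login mechanism: attach a cookie jar to the opener so that
# cookies set by the server are stored and sent back on later requests.
cj = http.cookiejar.CookieJar()
opener = build_opener(HTTPCookieProcessor(cj))
html = opener.open("http://www.example.com").read()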
6. Web Parser
The web page parser extracts valuable data and new URLs from the data fetched by the web page downloader. For data extraction you can use methods such as regular expressions and BeautifulSoup. Regular expressions do string-based fuzzy matching, which works well for target data with distinctive features but is not very general. BeautifulSoup is a third-party module used for structured parsing of URL content: it parses the downloaded page into a DOM tree. The figure shows part of a Baidu Encyclopedia web page captured and printed with BeautifulSoup.
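As a small illustration of this (the HTML snippet here is made up for the example), BeautifulSoup turns the downloaded content into a tree you can search and pretty-print:

from bs4 import BeautifulSoup

html = '<div class="para"><a href="/view/123.htm">sample entry</a></div>'
soup = BeautifulSoup(html, "html.parser")

# Navigate the DOM tree: find the first <a> tag and read its attributes.
link = soup.find("a")
print(link.attrs["href"], link.get_text())  # /view/123.htm sample entry

# prettify() prints the parsed DOM tree with indentation.
print(soup.prettify())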
I will cover the specific use of BeautifulSoup in a later article. The following code uses Python to crawl the entries related to other leagues from the League of Legends entry in the Baidu Encyclopedia, and to save those entries to a new Excel file. On to the code:
import re
import xlwt
from urllib.request import urlopen
from bs4 import BeautifulSoup

excelFile = xlwt.Workbook()
sheet = excelFile.add_sheet('League of Legend')

# Baidu Encyclopedia: League of Legends
html = urlopen("http://baike.baidu.com/subview/3049782/11262116.htm")
bsObj = BeautifulSoup(html.read(), "html.parser")
# print(bsObj.prettify())

row = 0
for node in bsObj.find("div", {"class": "main-content"}).findAll("div", {"class": "para"}):
    links = node.findAll("a", href=re.compile(r"^(/view/)[0-9]+\.htm$"))
    for link in links:
        if 'href' in link.attrs:
            print(link.attrs['href'], link.get_text())
            sheet.write(row, 0, link.attrs['href'])
            sheet.write(row, 1, link.get_text())
            row = row + 1

excelFile.save(r'E:\Project\Python\lol.xls')
Part of the output is as follows:
Part of the resulting Excel file is as follows:
That is the whole content of this article. I hope it helps you learn about Python web crawlers.