Scraper -- BeautifulSoup and LXML
Besides regular expressions, a crawler can also parse pages with the BeautifulSoup package or the lxml module. We introduce each of these two methods in turn.
1. BeautifulSoup package
BeautifulSoup code is much more concise than the equivalent regular expressions. However, because the package is written in pure Python, parsing is slower.
The example below first repairs a fragment of broken HTML and then extracts the national area from the example country page:

# Data Capture - BeautifulSoup package
'''
Handling malformed HTML with the BeautifulSoup package (see the official documentation).
'''
from bs4 import BeautifulSoup

broken_html = '<ul class=country><li>Area<li>Population</ul>'
soup = BeautifulSoup(broken_html, "html.parser")
fixed_html = soup.prettify()   # repair the HTML format
# print fixed_html
ul = soup.find('ul', attrs={'class': 'country'})   # retrieve the element
# print ul.find('li')       # first matching element
# print ul.find_all('li')   # all matching elements

# Use the same method to extract the national area data
import urllib2


def download(url, user_agent="wswp", num_retries=2):
    print "Download:", url
    headers = {"User-agent": user_agent}
    request = urllib2.Request(url, headers=headers)
    try:
        html = urllib2.urlopen(request).read()
    except urllib2.URLError as e:
        print "Download Error:", e.reason
        html = None
        if num_retries > 0:
            if hasattr(e, "code") and 500 <= e.code < 600:
                return download(url, user_agent, num_retries - 1)
    return html


if __name__ == "__main__":
    url = "http://example.webscraping.com/view/United-Kingdom-239"
    html = download(url)
    soup = BeautifulSoup(html, "html.parser", from_encoding="UTF-8")
    # first find the parent <tr> element for the area row
    tr = soup.find(attrs={'id': 'places_area__row'})
    # then find the child <td> element that holds the value
    td = tr.find(attrs={'class': 'w2p_fw'})
    area = td.text
    print area

Conclusion: although the BeautifulSoup code is more verbose than the equivalent regular expression, it is easier to construct and to understand, and it is also robust against small layout changes such as extra whitespace or additional tag attributes.
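For comparison, here is a minimal sketch of the same extraction on Python 3, where urllib2 is replaced by urllib.request; the URL, element id, and class are taken from the example above, and everything else about the environment (an installed bs4, default encoding handling) is assumed.

# Python 3 sketch of the same extraction (assumes bs4 is installed)
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

url = "http://example.webscraping.com/view/United-Kingdom-239"
request = Request(url, headers={"User-agent": "wswp"})
html = urlopen(request).read()

soup = BeautifulSoup(html, "html.parser")
tr = soup.find(attrs={"id": "places_area__row"})   # parent row
td = tr.find(attrs={"class": "w2p_fw"})            # cell holding the area
print(td.text)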
2. LXML Module
lxml supports CSS selectors, but you must install the separate cssselect package before using them; otherwise the selector call will raise an error. A quick availability check is sketched right after this paragraph.
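The snippet below is a minimal sketch (not part of the original example) for checking whether cssselect is installed before relying on the CSS selector API.

# Check whether the cssselect package is available; recent lxml releases
# need it for Element.cssselect(). This check is only an illustrative sketch.
try:
    import cssselect   # the import itself is the check
    print "cssselect is available"
except ImportError:
    print "cssselect is missing -- run `pip install cssselect` or fall back to XPath"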
# Data Capturing - lxml module
'''
lxml is a Python wrapper around the C XML parsing library libxml2.
It parses faster than the BeautifulSoup package because it is written in C.
'''
# Step 1: parse malformed HTML into a consistent format
import lxml.html
import urllib2

'''
broken_html = '<ul class=country><li>Area<li>Population</ul>'
# parse the HTML
tree = lxml.html.fromstring(broken_html)
fixed_html = lxml.html.tostring(tree, pretty_print=True)
# print fixed_html
'''


def download(url, user_agent="wswp", num_retries=2):
    print "Download:", url
    headers = {"User-agent": user_agent}
    request = urllib2.Request(url, headers=headers)
    try:
        html = urllib2.urlopen(request).read()
    except urllib2.URLError as e:
        print "Download Error:", e.reason
        html = None
        if num_retries > 0:
            if hasattr(e, "code") and 500 <= e.code < 600:
                return download(url, user_agent, num_retries - 1)
    return html


if __name__ == "__main__":
    url = "http://example.webscraping.com/view/United-Kingdom-239"
    html = download(url)
    tree = lxml.html.fromstring(html)
    # Note: recent lxml releases do not bundle CSS selector support;
    # install it separately with `pip install cssselect`.
    td = tree.cssselect('tr#places_area__row > td.w2p_fw')[0]
    area = td.text_content()
    print area
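If installing cssselect is not an option, the same element can be selected with lxml's built-in XPath support. The sketch below assumes the tree object built by lxml.html.fromstring(html) in the example above; the XPath expression mirrors the CSS selector used there.

# Alternative selection via XPath, which needs no cssselect dependency;
# `tree` is assumed to come from lxml.html.fromstring(html) as above.
td = tree.xpath('//tr[@id="places_area__row"]/td[@class="w2p_fw"]')[0]
area = td.text_content()
print area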