1. Understanding the data on the Web page
- The main data on a Web page is its HTML document.
- We need a mechanism to receive that data and parse it.
- We need a mechanism to generate data and send it.
2. Parsing HTML
- Hierarchical data
- Several Python libraries can parse HTML, such as lxml, BeautifulSoup, and the standard library's html.parser.
- Problems with parsing HTML:
- There is no uniform standard.
- Many Web pages do not follow the HTML standard.
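This leniency problem is exactly what BeautifulSoup (introduced below) is built for. A minimal sketch, using a made-up malformed fragment, of how it repairs a document that a strict parser would choke on:

```python
from bs4 import BeautifulSoup

# A fragment that violates the HTML standard:
# unclosed <p> tags and a <b> that is never closed.
broken = "<html><body><p>first<p>second<b>bold</body>"

# BeautifulSoup builds a usable tree instead of failing.
soup = BeautifulSoup(broken, "html.parser")
print(len(soup.find_all("p")))     # both <p> tags are recovered: 2
print(soup.find("b").get_text())   # bold
```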
2.1 BeautifulSoup
The BeautifulSoup third-party library has the following features:
- Easy to use.
- Version 4 can use lxml and html5lib as back-end parsers, which handle nonstandard HTML better.
- It also handles character encodings well, detecting a document's encoding automatically.
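The encoding point deserves a quick illustration. A sketch (Python 3, with a made-up GB2312-encoded snippet): BeautifulSoup reads the declared charset from the raw bytes and decodes them itself, so no manual decode() call is needed.

```python
from bs4 import BeautifulSoup

# A GB2312-encoded page, a common encoding on Chinese Web sites.
raw = "<html><head><meta charset='gb2312'></head><body>温度</body></html>".encode("gb2312")

# BeautifulSoup detects the declared charset and decodes the bytes itself.
soup = BeautifulSoup(raw, "html.parser")
print(soup.body.get_text())    # 温度
print(soup.original_encoding)  # the encoding BeautifulSoup detected
```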
Here is a comparison of the parsers BeautifulSoup can use, with their pros and cons (summarized from the BeautifulSoup documentation):

| Parser | Pros | Cons |
| --- | --- | --- |
| html.parser | In the standard library, decent speed | Less lenient than html5lib |
| lxml | Very fast, lenient | External C dependency |
| html5lib | Extremely lenient, parses pages the way a browser does | Very slow, external dependency |
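The differences only show up on malformed input. A small sketch using the classic `"<a></p>"` fragment (an `<a>` tag followed by a stray `</p>`), which each parser repairs differently; lxml and html5lib are optional extras and may not be installed:

```python
from bs4 import BeautifulSoup

# A stray </p> after an unclosed <a>: every parser must guess a repair.
fragment = "<a></p>"

for parser in ("html.parser", "lxml", "html5lib"):
    try:
        print(parser, "->", BeautifulSoup(fragment, parser))
    except Exception as exc:
        # lxml and html5lib are optional and may not be installed
        print(parser, "is not available:", exc)
```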
3 Code Examples
Enter the Python interpreter from the terminal and experiment as follows. If the bs4 library is not installed, install it with the following command (under Ubuntu):
sudo pip install beautifulsoup4
>>> from bs4 import BeautifulSoup
>>> import urllib
>>> html = urllib.urlopen("http://192.168.1.33/temwet/index.html")
>>> html
<addinfourl at 164618764 whose fp = <socket._fileobject object at 0x9cd19ac>>
>>> html.code
200
Here's a look at the source code of the Web page:
Use BeautifulSoup to parse:
Use the statement bt = BeautifulSoup(html.read(), "lxml") to parse the received HTML. Attributes such as bt.title, bt.meta, and bt.title.string access individual elements, and bt.find_all('meta') finds every matching element. When multiple results are found, they are returned in a list and can be accessed like an array.
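A self-contained sketch of those lookups, using a small stand-in document since the page served by 192.168.1.33 is not reproduced here:

```python
from bs4 import BeautifulSoup

# Stand-in for html.read(); the tutorial parses the fetched page instead.
page = """<html><head>
<meta charset="utf-8">
<meta name="author" content="demo">
<title>Temperature</title>
</head><body></body></html>"""

bt = BeautifulSoup(page, "html.parser")
print(bt.title)              # <title>Temperature</title>
print(bt.title.string)       # Temperature
print(bt.meta)               # the first <meta> tag only
metas = bt.find_all("meta")  # every <meta> tag, as a list
print(len(metas))            # 2
print(metas[1]["name"])      # author
```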
What if you want to extract the hyperlinks contained in a Web page? Just find all "a" tags: links = bt.find_all('a') saves every hyperlink of the page in links. If len(links) equals 0, there are no hyperlinks on the page; otherwise the results can be accessed directly as an array.
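A sketch of that link extraction, again on a made-up page standing in for the fetched HTML:

```python
from bs4 import BeautifulSoup

# Hypothetical page content standing in for the fetched HTML.
page = """<html><body>
<a href="/temp.html">Temperature</a>
<a href="http://example.com/">External</a>
</body></html>"""

bt = BeautifulSoup(page, "html.parser")
links = bt.find_all("a")
if len(links) == 0:
    print("the page contains no hyperlinks")
else:
    for link in links:
        # each result is a Tag; attributes are read like dict keys
        print(link["href"], "->", link.get_text())
```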