Crawler Learning -- Web Parser Beautiful Soup

Source: Internet
Author: User
Tags: html, form

I. Installation and Testing of Beautiful Soup

Official website: https://www.crummy.com/software/BeautifulSoup/

Beautiful soup installation and use documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

1. First, test whether the bs4 module is already present; if it is not, install it. When I tested on Kali, the bs4 module was already there. The steps for testing and installing are described below.

Create a new Python file and enter the following code:

import bs4
print(bs4)

If the module information is printed, the bs4 module already exists; otherwise it needs to be installed.

The installation code is as follows

sudo apt-get install python-pip
pip install beautifulsoup4

Then run the test again: when printing the module shows Beautiful Soup, the installation is complete.

II. Syntax of Beautiful Soup

1. Two search methods:

find_all: searches for all nodes that meet the criteria.

find: searches for the first node that meets the criteria.

Both take the same parameters.
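The difference between the two methods can be sketched as follows; the HTML snippet and variable names here are made up purely for illustration.

```python
# A minimal, made-up snippet to contrast find and find_all.
from bs4 import BeautifulSoup

html_doc = "<div><a href='/view/1.html'>first</a><a href='/view/2.html'>second</a></div>"
soup = BeautifulSoup(html_doc, 'html.parser')

# find returns only the first matching node (or None if nothing matches)
first = soup.find('a')
print(first.get_text())  # first

# find_all returns a list of every matching node
links = soup.find_all('a')
print(len(links))  # 2
```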

2. Search by node name, attribute value, or text.

3. Creating a BeautifulSoup object:

from bs4 import BeautifulSoup

# Create a BeautifulSoup object from an HTML page string
soup = BeautifulSoup(
    html_doc,               # HTML document string
    'html.parser',          # HTML parser
    from_encoding='utf-8'   # encoding of the HTML document
)

4. Searching for nodes (find_all, find)

find_all(name, attrs, string) -- name: the node's tag name; attrs: the node's attributes; string: the node's text.

# method: find_all(name, attrs, string)

# Find all nodes with tag a
soup.find_all('a')

# Find all a nodes whose link is /view/123.html
soup.find_all('a', href='/view/123.html')
# In the find methods, names and attributes can also be matched with regular expressions
soup.find_all('a', href=re.compile(r'/view/\d+\.htm'))

# Find all div nodes with class abc and text python
soup.find_all('div', class_='abc', string='python')
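A runnable sketch of these searches, using a made-up html_doc snippet and hypothetical variable names:

```python
# Runnable sketch of the find_all searches described above.
import re
from bs4 import BeautifulSoup

html_doc = (
    "<a href='/view/123.html'>view page</a>"
    "<a href='/edit/9.html'>edit page</a>"
    "<div class='abc'>python</div>"
)
soup = BeautifulSoup(html_doc, 'html.parser')

# exact attribute match
exact = soup.find_all('a', href='/view/123.html')
print(len(exact))  # 1

# regular expression on the href attribute
by_regex = soup.find_all('a', href=re.compile(r'/view/\d+\.html'))
print(len(by_regex))  # 1

# tag name, class, and text combined
divs = soup.find_all('div', class_='abc', string='python')
print(len(divs))  # 1
```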

5. Accessing the information of a found node

# Suppose the node <a href='1.html'>Python</a> has been found

# Get the tag name of the node
node.name

# Get the href attribute of the a node
node['href']

# Access all attributes of the a node as a dictionary
node.attrs

# Get the link text of the a node
node.get_text()
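Putting this together, here is a runnable sketch of node access; the <a> tag below is a made-up example:

```python
# Sketch of accessing a found node; the <a> tag is invented for illustration.
from bs4 import BeautifulSoup

soup = BeautifulSoup("<a href='1.html'>Python</a>", 'html.parser')
node = soup.find('a')

print(node.name)        # a
print(node['href'])     # 1.html
print(node.attrs)       # {'href': '1.html'}
print(node.get_text())  # Python
```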

By creating a bs4 object, searching the DOM tree, and accessing node contents as shown above, you can parse and access every node of a downloaded page. The next post will give complete sample code.

