Beautiful Soup: A Third-Party Crawler Plugin

Source: Internet
Author: User
Tags: xml, parser

What is Beautiful Soup?

Beautiful Soup is an HTML/XML parser written in Python that tolerates non-canonical markup and builds a parse tree from it. It provides simple, common operations for navigating, searching, and modifying the parse tree, which can save you a great deal of programming time.

Installing Beautiful Soup

Beautiful Soup: https://www.crummy.com/software/BeautifulSoup/bs4/download/4.4/

Unzip the downloaded beautifulsoup4-4.4.1.tar.gz into the beautifulsoup4-4.4.1 directory and run: python setup.py install

[email protected]:~/soft/python-source/beautifulsoup4-4.4.1$ python setup.py install
Traceback (most recent call last):
  File "setup.py", in <module>
    from setuptools import setup
ImportError: No module named setuptools

The install fails because setuptools is missing. Download the setuptools installation package from the bottom of the page at https://pypi.python.org/pypi/setuptools#downloads, install it, then re-run the Beautiful Soup build:

$ tar -zxvf beautifulsoup4-4.4.1.tar.gz
$ cd beautifulsoup4-4.4.1
$ python setup.py install

Performing the Beautiful Soup installation:

[email protected]:~/soft/python-source/beautifulsoup4-4.4.1$ sudo python setup.py install
running install
running bdist_egg
running egg_info
writing requirements to beautifulsoup4.egg-info/requires.txt
...

Testing the installation:

[email protected]:~/soft/python-source/beautifulsoup4-4.4.1$ python
Python 2.7.8 (default, Oct 20 2014, 15:05:19)
[GCC 4.9.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from bs4 import BeautifulSoup
>>>

If the BeautifulSoup package imports into the Python environment without error, as shown above, the installation succeeded.

See examples directly:

#!/usr/bin/python
# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

soup = BeautifulSoup(html_doc)

print soup.title

print soup.title.name

print soup.title.string

print soup.p

print soup.a

print soup.find_all('a')

print soup.find(id='link3')

print soup.get_text()

The result is:

<title>The Dormouse's story</title>
title
The Dormouse's story
<p class="title"><b>The Dormouse's story</b></p>
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie,
Lacie and
Tillie;
and they lived at the bottom of a well.
...

As you can see, soup is the parse tree Beautiful Soup builds from the formatted string. soup.title returns the title tag, and soup.p returns the first p tag in the document. To get all matching tags, you have to use the find_all function, which returns a sequence; loop through it to retrieve each tag in turn.

get_text() returns the text of a tag and is available on every object Beautiful Soup produces. You can try print soup.p.get_text().

You can also read a tag's other attributes. For example, to get the value of the href attribute of the a tag, use print soup.a['href']; other attributes work the same way, e.g. the class attribute is available as soup.a['class'].
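As a minimal sketch of attribute access (the one-tag markup here is a trimmed-down stand-in for the larger html_doc used earlier, and html.parser is named explicitly so no third-party parser is needed):

```python
from bs4 import BeautifulSoup

html_doc = '<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>'
soup = BeautifulSoup(html_doc, "html.parser")

# Tag attributes behave like a dictionary lookup.
print(soup.a['href'])   # http://example.com/elsie
print(soup.a['class'])  # ['sister'] -- class is multi-valued, so it comes back as a list
print(soup.a['id'])     # link1
```

Note that class is returned as a list because HTML allows several class names on one tag.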

In particular, special tags such as head can be reached directly, e.g. soup.head, as already shown above.

How do you get a tag's contents as an array? Use the .contents property: print soup.head.contents returns all direct children of head as a list, so you can index into it with [num] and read each child's name with .name.
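A small sketch of .contents, again on a cut-down document rather than the full html_doc:

```python
from bs4 import BeautifulSoup

html_doc = "<html><head><title>The Dormouse's story</title></head><body></body></html>"
soup = BeautifulSoup(html_doc, "html.parser")

# .contents returns the direct children as a plain list.
children = soup.head.contents
print(children)          # [<title>The Dormouse's story</title>]
print(children[0].name)  # title
```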

You can also get the children with .children, but you cannot print soup.head.children as a list directly; it returns an iterator such as <listiterator object at 0x108e6d150>. Wrapping it in list() converts it to a list, and of course you can also traverse the children with a for loop.
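Sketching the difference (a throwaway two-child paragraph, chosen so the child order is obvious):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p><b>one</b><i>two</i></p>", "html.parser")

# .children is an iterator, not a list: it has no len() and prints as an
# iterator object, so loop over it or wrap it in list().
names = [child.name for child in soup.p.children]
print(names)  # ['b', 'i']
```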

Regarding the .string property: if a tag contains more than one child, .string returns None; otherwise it returns the single contained string, e.g. print soup.title.string prints The Dormouse's story.

If a tag contains more than one string, you can try .strings instead.
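A quick sketch of .string versus .strings (the mixed-content paragraph here is a made-up fragment in the spirit of the story document):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    "<p class='story'>Once upon a time <a>Elsie</a></p>", "html.parser")

# The <p> has two children (a text node and an <a> tag), so .string is None;
# the <a> has exactly one string child, so .string returns it.
print(soup.p.string)         # None
print(soup.a.string)         # Elsie
print(list(soup.p.strings))  # ['Once upon a time ', 'Elsie']
```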

To look upward, use the .parent property; to walk through all ancestors, use .parents.

To find the next sibling use .next_sibling, and the previous sibling use .previous_sibling; to get all of them, add an s to the corresponding name (.next_siblings, .previous_siblings).
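The parent and sibling properties can be sketched on a simple list (a made-up fragment with no whitespace between tags, so the siblings are the tags themselves rather than text nodes):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<ul><li>a</li><li>b</li><li>c</li></ul>", "html.parser")

first = soup.li
print(first.parent.name)    # ul -- one step up the tree
print(first.next_sibling)   # <li>b</li>

# Adding an s gives a generator over all following siblings.
siblings = [tag.string for tag in first.next_siblings]
print(siblings)             # ['b', 'c']
```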

How do you search the tree?

Use the find_all function:

find_all(name, attrs, recursive, text, limit, **kwargs)

To illustrate:

print soup.find_all('title')
print soup.find_all('p', 'title')
print soup.find_all('a')
print soup.find_all(id="link2")
print soup.find_all(id=True)

The return value is:

[<title>The Dormouse's story</title>]
[<p class="title"><b>The Dormouse's story</b></p>]
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

Searching with CSS, straight to the examples:

print soup.find_all("a", class_="sister")
print soup.select("p.title")

Searching by attribute:

print soup.find_all("a", attrs={"class": "sister"})

Searching by text:

print soup.find_all(text="Elsie")
print soup.find_all(text=["Tillie", "Elsie", "Lacie"])

Limiting the number of results:

print soup.find_all("a", limit=2)

The result is:

[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
[<p class="title"><b>The Dormouse's story</b></p>]
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
[u'Elsie']
[u'Elsie', u'Lacie', u'Tillie']
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
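The select() call used above deserves a self-contained sketch: it takes a CSS selector string and always returns a list (the fragment below is a trimmed-down stand-in for the story document):

```python
from bs4 import BeautifulSoup

html_doc = """
<p class="title"><b>The Dormouse's story</b></p>
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# select() matches tags against a CSS selector and returns a list.
print(soup.select("p.title"))    # tag name plus class
print(soup.select("a#link2"))    # tag name plus id
print(soup.select("a[href]"))    # attribute selector: every <a> with an href
```

This mirrors the find_all examples above but with CSS syntax, which is often shorter when you are matching on class or id.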

In short, these functions let you find whatever you are looking for in the tree.

---end---
