Parsing web pages with Python's BeautifulSoup


BeautifulSoup is an HTML/XML parser written in Python. It copes well with malformed, non-standard markup and builds a parse tree from it, and it provides simple, commonly used operations for navigating, searching, and modifying that tree. It can save you a great deal of programming time.

When writing web page analysis code in Python, using this library's parsing functions is much easier than writing regular expressions yourself. You need to import the module before using it.

Import the BeautifulSoup library in your program:

from BeautifulSoup import BeautifulSoup          # For processing HTML
from BeautifulSoup import BeautifulStoneSoup     # For processing XML
import BeautifulSoup                             # To get everything
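Once imported, a parse tree can be built from a string and then navigated, searched, and modified directly. Below is a minimal sketch against BeautifulSoup 3 (the version this article uses); the HTML string, tag names, and expected output in the comments are only illustrative:

from BeautifulSoup import BeautifulSoup

# a tiny, made-up HTML document for illustration
html = ('<html><head><title>Demo page</title></head>'
        '<body><p class="intro">Hello <b>world</b></p><p>Second paragraph</p></body></html>')
soup = BeautifulSoup(html)

print soup.find('title').string      # searching: prints "Demo page"
print soup.find('p')['class']        # attribute access: prints "intro"
print len(soup.findAll('p'))         # all <p> tags: prints 2

# modifying the tree: replace the text node inside <b>
soup.find('b').contents[0].replaceWith('there')
print soup.find('p')                 # <p class="intro">Hello <b>there</b></p>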

 

BeautifulSoup handles HTML better than it handles XML; its XML support is not perfect. A basic HTML example is shown below:

 

#!/usr/bin/python
#coding:utf-8
from BeautifulSoup import BeautifulSoup
import re

# a small sample document (the BeautifulSoup documentation's quick-start example)
doc = ['<html><head><title>Page title</title></head>',
       '<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
       '<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
       '</html>']
soup = BeautifulSoup(''.join(doc))
print soup.prettify()

The output is as follows:

<html>
 <head>
  <title>
   Page title
  </title>
 </head>
 <body>
  <p id="firstpara" align="center">
   This is paragraph
   <b>
    one
   </b>
   .
  </p>
  <p id="secondpara" align="blah">
   This is paragraph
   <b>
    two
   </b>
   .
  </p>
 </body>
</html>
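For XML, the BeautifulStoneSoup class imported earlier is used instead. Because it applies none of the HTML-specific heuristics, it needs explicit hints that the HTML parser has built in; for example, self-closing tags must be declared by hand. A minimal sketch (the tag names here are made up for illustration):

from BeautifulSoup import BeautifulStoneSoup

xml = "<doc><tag>Text 1<selfclosing>Text 2</tag></doc>"

# by default BeautifulStoneSoup knows no self-closing tags,
# so <selfclosing> is treated as a container around "Text 2"
print BeautifulStoneSoup(xml).prettify()

# declaring the tag as self-closing changes how the tree is built
print BeautifulStoneSoup(xml, selfClosingTags=['selfclosing']).prettify()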

Of course, BeautifulSoup offers much more powerful functionality. The following example extracts the <title> from several web pages:

 

#!/usr/bin/env python
#coding:utf-8
import Queue
import threading
import urllib2
import time
from BeautifulSoup import BeautifulSoup

hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
         "http://ibm.com"]

queue = Queue.Queue()
out_queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded URL grab"""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def run(self):
        while True:
            # grab a host from the queue
            host = self.queue.get()

            # fetch the host's page and read its content
            url = urllib2.urlopen(host)
            chunk = url.read()

            # place the chunk into the out queue
            self.out_queue.put(chunk)

            # signal to the queue that the job is done
            self.queue.task_done()

class DatamineThread(threading.Thread):
    """Threaded HTML parsing"""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            # grab a chunk from the out queue
            chunk = self.out_queue.get()

            # parse the chunk
            soup = BeautifulSoup(chunk)
            print soup.findAll(['title'])

            # signal to the queue that the job is done
            self.out_queue.task_done()

start = time.time()

def main():
    # spawn a pool of threads and pass them the queue instances
    for i in range(5):
        t = ThreadUrl(queue, out_queue)
        t.setDaemon(True)
        t.start()

    # populate the queue with data
    for host in hosts:
        queue.put(host)

    for i in range(5):
        dt = DatamineThread(out_queue)
        dt.setDaemon(True)
        dt.start()

    # wait on the queues until everything has been processed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)

This example uses multiple threads and queues. Queues simplify multi-threaded development by applying a divide-and-conquer principle: each thread performs a single, independent task, and data is shared between threads through the queues, which keeps the program logic simple. (A bare-bones sketch of this queue pattern follows the output below.) The output is as follows:

 

[<title>IBM - United States</title>]
[<title>Google</title>]
[<title>Yahoo!</title>]
[<title>Amazon.com: Online Shopping for Electronics, Apparel, Computers, Books, DVDs & more</title>]
Elapsed Time: 12.5929999352
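Stripped of the downloading and parsing details, the underlying producer/consumer queue pattern reduces to roughly the following sketch (Python 2; the thread count and the work items are illustrative only):

import Queue
import threading

work_queue = Queue.Queue()

def worker():
    while True:
        item = work_queue.get()          # block until an item is available
        print "processing", item         # each thread does exactly one kind of job
        work_queue.task_done()           # mark this item as finished

# daemon threads exit automatically when the main thread ends
for i in range(3):
    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()

for item in range(10):
    work_queue.put(item)

work_queue.join()                        # wait until every item is task_done()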

Documentation (Chinese translation): http://www.crummy.com/software/BeautifulSoup/documentation.zh.html

Official website: http://www.crummy.com/software/BeautifulSoup/#Download/
 
