Web crawler (2)--Exception handling

Source: Internet
Author: User

In the previous section, we briefly introduced how to get started with web crawlers and used a simple page crawl as an example. However, the network is complex and a request to a site will not necessarily succeed, so the exceptions that occur during crawling need to be handled; otherwise the crawler will stop with an error whenever it runs into an abnormal condition.

Let's look at the exceptions that may occur when calling urlopen:

html = urlopen("http://www.heibanke.com/lesson/crawler_ex00/")

This line of code can mainly raise two kinds of exceptions:

1. The web page does not exist on the server (or an error occurred while fetching the page)
2. The server does not exist

When the first situation occurs, the server returns an HTTP error and the urlopen function throws an HTTPError exception. In the second situation, urlopen returns a None object.
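
As a side note, the HTTPError object caught in the first case carries the status information itself. Below is a minimal sketch of inspecting it, assuming Python 3's urllib; e.code holds the numeric HTTP status and e.reason a short description:

from urllib.request import urlopen
from urllib.error import HTTPError

try:
    html = urlopen("http://www.heibanke.com/lesson/crawler_ex00/")
except HTTPError as e:
    # e.code is the numeric status (e.g. 404), e.reason its text description
    print(e.code, e.reason)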
After handling these two exceptions, the code from the previous section becomes:
__author__ = 'f403'
# coding=utf-8
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

try:
    html = urlopen("http://www.heibanke.com/lesson/crawler_ex00/")
    if html is None:
        print("url is not found")
    else:
        bsObj = BeautifulSoup(html, "html.parser")
        print(bsObj.h1)
except HTTPError as e:
    print(e)

After adding exception handling, the program can deal with errors that occur while accessing the page and can confirm that the page was successfully retrieved from the server. However, this does not guarantee that the page content matches our expectations: in the program above we cannot guarantee that the h1 tag actually exists, so we also need to account for that kind of exception.
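
As a quick self-contained illustration of the two failure modes described next, the snippet below parses a literal HTML string instead of the crawled page; the h2 and span tags are made up for the example:

from bs4 import BeautifulSoup

bsObj = BeautifulSoup("<html><body><h1>Hello</h1></body></html>", "html.parser")

# Accessing a tag that does not exist returns None, no exception is raised
print(bsObj.h2)

# Accessing a child of a non-existent tag raises AttributeError,
# because we are calling .span on a None object
try:
    print(bsObj.h2.span)
except AttributeError as e:
    print(e)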

This type of exception can also be divided into 2 categories:

1. Accessing a tag that does not exist
2. Accessing a child tag of a tag that does not exist

In the first case, BeautifulSoup returns a None object; in the second case it throws an AttributeError. After adding this part of the exception handling, the code is:
__author__ = 'f403'
# coding=utf-8
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

try:
    html = urlopen("http://www.heibanke.com/lesson/crawler_ex00/")
    if html is None:
        print("url is not found")
    else:
        bsObj = BeautifulSoup(html, "html.parser")
        try:
            t = bsObj.h1
            if t is None:
                print("tag does not exist")
            else:
                print(t)
        except AttributeError as e:
            print(e)
except HTTPError as e:
    print(e)
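
To avoid repeating these checks for every page, the whole pattern can be packaged into a small helper function. The name getTitle below is just an illustrative sketch, not part of the original program; it returns the h1 tag, or None if the page or the tag cannot be retrieved:

from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

def getTitle(url):
    # Return the page's h1 tag, or None if anything goes wrong along the way.
    try:
        html = urlopen(url)
    except HTTPError:
        return None
    if html is None:
        return None
    try:
        bsObj = BeautifulSoup(html, "html.parser")
        title = bsObj.h1
    except AttributeError:
        return None
    return title

title = getTitle("http://www.heibanke.com/lesson/crawler_ex00/")
if title is None:
    print("Title could not be found")
else:
    print(title)

The caller then only needs to check for None instead of handling each exception separately.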
