twisted.internet.error.DNSLookupError in a Scrapy project


Environment: Windows 10 Home (Chinese edition), Python 3.6.4, Scrapy 1.5.0.

Yesterday I wrote a crawler to collect news data, but requests to one site kept failing: timeout, retry ... At first they timed out after the default download timeout of 180 seconds; later I lowered it to 20 seconds in the spider, which is why the log shows 20-second timeouts.
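For reference, a minimal sketch of lowering the timeout per spider via custom_settings (the spider name here is hypothetical; the post doesn't say exactly where the setting was changed):

    import scrapy

    class News163Spider(scrapy.Spider):
        # Hypothetical spider name, for illustration only.
        name = 'news163'

        # Lower the download timeout from Scrapy's default of 180 seconds
        # to 20 seconds, matching the change described above.
        custom_settings = {'DOWNLOAD_TIMEOUT': 20}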

I had no idea what was going on! The output above came from a script that runs the Scrapy project through CrawlerRunner, and it revealed nothing more.
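For context, a minimal sketch of a CrawlerRunner-based runner script, assuming the hypothetical News163Spider above (the original post does not show its actual script):

    from twisted.internet import reactor
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.log import configure_logging
    from scrapy.utils.project import get_project_settings

    # News163Spider is the hypothetical spider sketched above;
    # the real module path in the project is unknown.
    configure_logging()
    runner = CrawlerRunner(get_project_settings())

    d = runner.crawl(News163Spider)
    d.addBoth(lambda _: reactor.stop())  # stop the reactor once the crawl finishes
    reactor.run()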

I tried changing allowed_domains in the spider to the following two forms (settling on the second), to rule out a subdomain problem: it still failed.

    # allowed_domains = ['www.163.com', 'money.163.com', 'mobile.163.com',
    #                    'news.163.com', 'tech.163.com']
    allowed_domains = ['163.com']

Then I disabled robots.txt compliance in settings.py and enabled cookies: it still failed.

    # Obey robots.txt rules
    ROBOTSTXT_OBEY = False

    # Disable cookies (enabled by default)
    COOKIES_ENABLED = True

At this point my existing reserve of knowledge was exhausted; I could not solve the problem!

Testing the timed-out page in the Scrapy shell produced a twisted.internet.error.DNSLookupError:

    scrapy shell "http://money.163.com/18/0714/03/DML7R3EO002580S6.html"

However, the ping command could resolve the IP address of the very subdomain that failed above:
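For example:

    ping money.163.com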

Twisted is such a widely used Python library; how could this problem happen? I had no idea at all!

Time to ask the internet for help! I finally found the following question:

How do I catch errors with scrapy so I can do something when I get User Timeout error?

The top answer, in essence: define an errback on the Request instance (please read that three times).
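In code, the answer amounts to something like this (a minimal sketch; the spider and method names are hypothetical):

    import scrapy

    class ExampleSpider(scrapy.Spider):
        # Hypothetical spider, for illustration only.
        name = 'example'

        def start_requests(self):
            # errback is called when the request fails, e.g. with
            # twisted.internet.error.DNSLookupError or a timeout.
            yield scrapy.Request(
                'http://money.163.com/',
                callback=self.parse,
                errback=self.handle_error,
            )

        def parse(self, response):
            self.logger.info('Got %s', response.url)

        def handle_error(self, failure):
            # failure is a twisted.python.failure.Failure
            self.logger.error(repr(failure))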

That simple? What does it have to do with DNSLookupError? And why would defining an error callback make any difference?

I couldn't make sense of it, so I didn't act on it yet ...

I kept searching, but found nothing more ...

Well, let's try this method anyway. I changed the spider for that site as follows:

I added errback=self.errback_163, where the callback errback_163 follows the same pattern as in the answer above (which, I later discovered, comes from the Scrapy documentation on Requests and Responses).

    yield response.follow(item, callback=self.parse_a_new,
                          errback=self.errback_163)
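The post doesn't show errback_163 itself, but the pattern it borrows from the Scrapy documentation (Requests and Responses) looks roughly like this, as a method on the spider:

    from scrapy.spidermiddlewares.httperror import HttpError
    from twisted.internet.error import DNSLookupError
    from twisted.internet.error import TimeoutError, TCPTimedOutError

    def errback_163(self, failure):
        # Log the failure itself first.
        self.logger.error(repr(failure))

        if failure.check(HttpError):
            # A non-2xx HTTP response; the response is attached to the failure.
            response = failure.value.response
            self.logger.error('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            # The error from this post: the hostname failed to resolve.
            request = failure.request
            self.logger.error('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error('TimeoutError on %s', request.url)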

I then tested the updated program with scrapy crawl (after restoring the earlier configuration: obeying robots.txt, disabling cookies, allowed_domains set to 163.com): it successfully crawled the desired data!
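For example, with the hypothetical spider name used in the sketches above:

    scrapy crawl news163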

Well, the problem is solved. The original question of why the DNS lookup failed in the first place still has no answer, though; I'll keep digging into it! "Magical" errback!

