Use C# to Create Web Crawlers and Check Site Accessibility

A few days ago my website went down. The monitoring system sent an alarm to the administrator, who contacted me and told me the site could not be accessed. The fault was eventually traced to the server load balancer. While checking the site afterwards, I also noticed that some images were not displayed because their links were broken.

Reviewing the incident later, I realized that the monitoring program can only check the handful of links listed in its configuration; covering every link on the site would require an enormous configuration file. If a link outside the configuration becomes unreachable, how would the administrator ever find out? The monitoring program also cannot verify that the images on a page display correctly, and checking them by hand every time is unrealistic. Could I write a small program that performs the check automatically and emails the results to the people responsible? That is when it occurred to me that a web crawler could do the job. Of course, this crawler is custom-built and only crawls the current site.

Create a console application (other project types work too) and name it WebResourceInspector.

Add three new files to the project: Inspector.cs, EmailHelper.cs, and Config.xml.

Inspector.cs is the monitoring class; it uses HttpWebRequest to crawl and analyze the data. EmailHelper.cs is a mail helper class responsible for sending the notification emails. Config.xml is the configuration file describing the website to crawl, and a good deal of additional configuration lives in App.config.
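
As a rough illustration of the mail helper, here is a minimal sketch that assumes the SMTP settings sit in App.config's appSettings; the key names (SmtpHost, SmtpPort, and so on) are my own placeholders, not necessarily what the original project uses:

using System.Configuration;   // requires a reference to System.Configuration
using System.Net;
using System.Net.Mail;

// Minimal sketch of a mail helper. The appSettings key names below
// are assumptions for illustration; the real project's may differ.
public static class EmailHelper
{
    public static void Send(string subject, string body)
    {
        var settings = ConfigurationManager.AppSettings;
        using (var client = new SmtpClient(settings["SmtpHost"],
                                           int.Parse(settings["SmtpPort"])))
        {
            client.Credentials = new NetworkCredential(
                settings["SmtpUser"], settings["SmtpPassword"]);

            var message = new MailMessage(
                settings["MailFrom"],   // sender address
                settings["MailTo"],     // administrator to notify
                subject,
                body);

            client.Send(message);
        }
    }
}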

The project references two third-party DLLs. One is log4net, which handles logging; there are plenty of articles about it online. The other is HtmlAgilityPack, which parses the crawled HTML. It is quite powerful: you can work with the entire HTML document as a DOM and use XPath to extract the a and img information.
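
For example, pulling every link and image URL out of a page with HtmlAgilityPack takes only a few lines. This is a minimal sketch rather than the project's actual code:

using System;
using System.Collections.Generic;
using HtmlAgilityPack;

class HtmlParseDemo
{
    // Extract every link (a/@href) and image (img/@src) URL from raw HTML.
    static List<string> ExtractUrls(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        var urls = new List<string>();

        // SelectNodes returns null when the XPath expression matches nothing.
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors != null)
            foreach (var a in anchors)
                urls.Add(a.GetAttributeValue("href", ""));

        var images = doc.DocumentNode.SelectNodes("//img[@src]");
        if (images != null)
            foreach (var img in images)
                urls.Add(img.GetAttributeValue("src", ""));

        return urls;
    }

    static void Main()
    {
        var html = "<html><body><a href=\"/about\">About</a>" +
                   "<img src=\"/logo.png\" /></body></html>";
        foreach (var url in ExtractUrls(html))
            Console.WriteLine(url);   // prints /about and /logo.png
    }
}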

How the program works:

1. Use HttpWebRequest and HttpWebResponse to fetch the HTML of the home page, then use HtmlAgilityPack with XPath to extract all the a and img tags.
2. Put every link and image address into the unvisitedPageUrlList queue of URLs waiting to be checked, first making sure the URL has not been checked already.
3. Only links starting with the current site's domain are crawled further: their pages are fetched and parsed, and the links found in them are added to the queue. Links starting with any other domain are only checked for accessibility.
4. Continue until every link on the site has been checked.
5. Multiple threads are used to improve throughput; the thread count is read from the configuration file.
6. All errors are logged, and an email is sent to notify the administrator. A simplified sketch of this loop appears below.
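
Here is a simplified, single-file sketch of that loop, assuming a hard-coded start URL and thread count in place of the configuration file. The unvisitedPageUrlList name follows the description above; everything else (the Domain constant, the Check method, and so on) is illustrative rather than the original source:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;
using HtmlAgilityPack;

class Inspector
{
    // Assumption: hard-coded here; the real program reads these from Config.xml.
    static readonly string Domain = "http://www.example.com";
    static readonly int ThreadCount = 4;

    static readonly Queue<string> unvisitedPageUrlList = new Queue<string>();
    static readonly HashSet<string> visited = new HashSet<string>();
    static readonly List<string> errors = new List<string>();
    static readonly object sync = new object();
    static int busy = 0;   // number of threads currently processing a URL

    static void Main()
    {
        visited.Add(Domain + "/");
        unvisitedPageUrlList.Enqueue(Domain + "/");

        var workers = new Thread[ThreadCount];
        for (int i = 0; i < ThreadCount; i++)
        {
            workers[i] = new Thread(Work);
            workers[i].Start();
        }
        foreach (var worker in workers) worker.Join();

        if (errors.Count > 0)   // EmailHelper from the sketch above
            EmailHelper.Send("Site check: " + errors.Count + " broken URLs",
                             string.Join(Environment.NewLine, errors));
    }

    static void Work()
    {
        while (true)
        {
            string url = null;
            lock (sync)
            {
                if (unvisitedPageUrlList.Count > 0)
                {
                    url = unvisitedPageUrlList.Dequeue();
                    busy++;
                }
                else if (busy == 0) return;   // nothing queued, nobody working: done
            }

            if (url == null) { Thread.Sleep(100); continue; }   // wait for new links

            try
            {
                Check(url);
            }
            catch (WebException ex)
            {
                lock (sync) errors.Add(url + " -> " + ex.Message);   // log4net in the real program
            }
            finally
            {
                lock (sync) busy--;
            }
        }
    }

    static void Check(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 10000;
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // External URLs and non-HTML resources such as images are only
            // checked for accessibility; only this site's pages are parsed.
            if (!url.StartsWith(Domain) || response.ContentType.IndexOf("html",
                StringComparison.OrdinalIgnoreCase) < 0)
                return;

            string html;
            using (var reader = new StreamReader(response.GetResponseStream()))
                html = reader.ReadToEnd();

            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            var nodes = doc.DocumentNode.SelectNodes("//a[@href] | //img[@src]");
            if (nodes == null) return;

            lock (sync)
                foreach (var node in nodes)
                {
                    // For an anchor this reads href; for an img it falls back to src.
                    var link = node.GetAttributeValue("href",
                               node.GetAttributeValue("src", ""));
                    // Only absolute URLs; resolving relative paths is omitted here.
                    if (link.StartsWith("http") && visited.Add(link))
                        unvisitedPageUrlList.Enqueue(link);
                }
        }
    }
}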

  

I have posted the code; if you are interested, you can download it and examine the source: WebResourceInspector.zip
