C# Web Crawler: Multi-Thread Processing, Enhanced Edition

Source: Internet
Author: User


Last time I put together a web crawler for a colleague at my company, and it was fairly rough. I have since used it in a company project, so I made some changes and added image collection and downloading: the images found on each page are downloaded through multi-threaded processing.

Here is the train of thought: first fetch the full content of the initial website and collect the images on it, then put the collected links into a queue, collect the images on each of those pages in turn, and keep following newly found links without limit.
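Before the real code, here is a minimal sketch of that queue-driven loop, just to pin the idea down. Global.WebUrl matches the global start URL used later in the article, but the visited set and the QueueDownload helper are illustrative assumptions, not the author's code; the HttpHelper methods appear in full below.

// Sketch only: breadth-first crawl driven by a queue of pending URLs.
// Requires: System.Collections.Generic
Queue<string> pending = new Queue<string>();
HashSet<string> visited = new HashSet<string>();   // assumed: avoids re-crawling pages
pending.Enqueue(Global.WebUrl);                    // the initial website

while (pending.Count > 0)
{
    string current = pending.Dequeue();
    if (!visited.Add(current)) continue;           // skip pages already crawled

    // Collect and download the images on this page
    foreach (string img in HttpHelper.GetHtmlImageUrlList(current))
        QueueDownload(img);                        // hypothetical throttled download, sketched later

    // Put newly found links into the queue and keep going
    foreach (string link in HttpHelper.GetHttpLinks(current))
        pending.Enqueue(link);
}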

With that sketch in mind, let's get to the actual code!

 

The handling of both page-content crawling and page-URL crawling has been improved. Take a look at the code below; it still has some shortcomings, so comments and suggestions are welcome!

HtmlCodeRequest fetches the HTML content of a page.

GetHttpLinks crawls the page for URLs, using a regular expression to filter the links out of the HTML.

GetHtmlImageUrlList captures images, using a regular expression to filter the img tags out of the HTML.

All three are written into an encapsulating class, HttpHelper.

// Requires: System, System.Net, System.IO, System.Text, System.Text.RegularExpressions, System.Collections.Generic

/// <summary>
/// Fetch the HTML content of a page.
/// </summary>
/// <param name="Url">Page URL</param>
/// <returns>The HTML source, or an empty string on failure</returns>
public static string HtmlCodeRequest(string Url)
{
    if (string.IsNullOrEmpty(Url))
    {
        return "";
    }
    try
    {
        // Create the request
        HttpWebRequest httprequst = (HttpWebRequest)WebRequest.Create(Url);
        // Keep the connection alive
        httprequst.KeepAlive = true;
        // Set the request method
        httprequst.Method = "GET";
        // Set the header values
        httprequst.UserAgent = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)";
        httprequst.Accept = "*/*";
        httprequst.Headers.Add("Accept-Language", "zh-cn,en-us;q=0.5");
        httprequst.ServicePoint.Expect100Continue = false;
        httprequst.Timeout = 5000;
        // Whether 302 redirects are allowed
        httprequst.AllowAutoRedirect = true;
        ServicePointManager.DefaultConnectionLimit = 30;
        // Get the response
        HttpWebResponse webRes = (HttpWebResponse)httprequst.GetResponse();
        // Read the response text stream
        string content = string.Empty;
        using (System.IO.Stream stream = webRes.GetResponseStream())
        {
            using (System.IO.StreamReader reader = new System.IO.StreamReader(stream, System.Text.Encoding.GetEncoding("UTF-8")))
            {
                content = reader.ReadToEnd();
            }
        }
        // Cancel the request
        httprequst.Abort();
        // Return the page content
        return content;
    }
    catch (Exception)
    {
        return "";
    }
}

/// <summary>
/// Extract the URLs of all images in the page.
/// </summary>
/// <param name="url">Page URL</param>
/// <returns>List of image URLs</returns>
public static List<string> GetHtmlImageUrlList(string url)
{
    string html = HttpHelper.HtmlCodeRequest(url);
    if (string.IsNullOrEmpty(html))
    {
        return new List<string>();
    }
    // Regular expression matching the src attribute of <img> tags
    Regex regImg = new Regex(@"<img\b[^<>]*?\bsrc[\s\t\r\n]*=[\s\t\r\n]*[""']?[\s\t\r\n]*(?<imgUrl>[^\s\t\r\n""'<>]*)[^<>]*?/?[\s\t\r\n]*>", RegexOptions.IgnoreCase);
    // Search for matches
    MatchCollection matches = regImg.Matches(html);
    List<string> sUrlList = new List<string>();
    // Collect the match list
    foreach (Match match in matches)
        sUrlList.Add(match.Groups["imgUrl"].Value);
    return sUrlList;
}

/// <summary>
/// Extract the page links.
/// </summary>
/// <param name="url">Page URL</param>
/// <returns>List of links</returns>
public static List<string> GetHttpLinks(string url)
{
    // Get the page content
    string html = HttpHelper.HtmlCodeRequest(url);
    if (string.IsNullOrEmpty(html))
    {
        return new List<string>();
    }
    // Match absolute http(s) links
    const string pattern2 = @"http(s)?://([\w-]+\.)+[\w-]+(/[\w- ./?%&=]*)?";
    Regex r2 = new Regex(pattern2, RegexOptions.IgnoreCase);
    // Get the matching results
    MatchCollection m2 = r2.Matches(html);
    List<string> links = new List<string>();
    foreach (Match url2 in m2)
    {
        if (StringHelper.CheckUrlIsLegal(url2.ToString()) || !StringHelper.IsPureUrl(url2.ToString()) || links.Contains(url2.ToString()))
            continue;
        links.Add(url2.ToString());
    }
    // Match the href attribute of <a> tags (catches relative links)
    const string pattern = @"(?i)<a\s[^>]*?href=(['""]?)(?!javascript|__doPostBack)(?<url>[^'""\s*#<>]+)[^>]*>";
    Regex r = new Regex(pattern, RegexOptions.IgnoreCase);
    // Get the matching results
    MatchCollection m = r.Matches(html);
    foreach (Match url1 in m)
    {
        string href1 = url1.Groups["url"].Value;
        if (!href1.Contains("http"))
        {
            href1 = Global.WebUrl + href1;
        }
        if (!StringHelper.IsPureUrl(href1) || links.Contains(href1))
            continue;
        links.Add(href1);
    }
    return links;
}
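For reference, a typical call into HttpHelper might look like this. This is just a small sketch; Global.WebUrl is assumed to be the configured start page:

// Fetch one page, then pull its images and links out of the HTML
List<string> imageUrls = HttpHelper.GetHtmlImageUrlList(Global.WebUrl);
List<string> pageLinks = HttpHelper.GetHttpLinks(Global.WebUrl);
Console.WriteLine("Found {0} images and {1} links.", imageUrls.Count, pageLinks.Count);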
The number of concurrent image-download tasks is capped at 200; when the cap is reached, the thread waits 5 seconds before trying again. The image download itself is invoked asynchronously through a delegate; a sketch of that throttling logic follows the download method below.
public string DownLoadimg(string url)
{
    if (!string.IsNullOrEmpty(url))
    {
        try
        {
            if (!url.Contains("http"))
            {
                url = Global.WebUrl + url;
            }
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Timeout = 2000;
            request.UserAgent = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)";
            // Whether 302 redirects are allowed
            request.AllowAutoRedirect = true;
            WebResponse response = request.GetResponse();
            Stream reader = response.GetResponseStream();
            // File name
            string aFirstName = Guid.NewGuid().ToString();
            // Extension
            string aLastName = url.Substring(url.LastIndexOf(".") + 1, (url.Length - url.LastIndexOf(".") - 1));
            FileStream writer = new FileStream(Global.FloderUrl + aFirstName + "." + aLastName, FileMode.OpenOrCreate, FileAccess.Write);
            byte[] buff = new byte[512];
            // Actual number of bytes read
            int c = 0;
            while ((c = reader.Read(buff, 0, buff.Length)) > 0)
            {
                writer.Write(buff, 0, c);
            }
            writer.Close();
            writer.Dispose();
            reader.Close();
            reader.Dispose();
            response.Close();
            return (aFirstName + "." + aLastName);
        }
        catch (Exception)
        {
            return "error: Address " + url;
        }
    }
    return "error: the address is blank";
}
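The throttling itself is not shown above, so here is a minimal sketch of how a 200-task cap with a 5-second wait and an asynchronous delegate call could look. QueueDownload, DownLoadHandler, and _downloadCount are assumptions for illustration, not the author's exact code, and asynchronous delegates (BeginInvoke/EndInvoke) require the .NET Framework:

// Minimal sketch (assumed, not the author's exact implementation).
// Requires: System.Threading
private delegate string DownLoadHandler(string url);   // hypothetical delegate type

private static int _downloadCount = 0;                 // in-flight download tasks
private const int MaxDownloadTasks = 200;              // cap mentioned above

public void QueueDownload(string url)
{
    // If the cap is reached, wait 5 seconds and check again
    while (Thread.VolatileRead(ref _downloadCount) >= MaxDownloadTasks)
    {
        Thread.Sleep(5000);
    }
    Interlocked.Increment(ref _downloadCount);

    DownLoadHandler handler = DownLoadimg;
    // Asynchronous delegate call; the callback completes the call and frees a slot
    handler.BeginInvoke(url, ar =>
    {
        handler.EndInvoke(ar);
        Interlocked.Decrement(ref _downloadCount);
    }, null);
}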

 

That's all for now; there is plenty of room to improve on your own! You are welcome to get in touch with the author, and if this article was of value to you, please recommend it at the bottom of the page. Thank you!

The full source code is available at the link below (download points required):

http://download.csdn.net/detail/nightmareyan/9627215

 
