C# Crawl Data from Websites That Require Login


Background: Yesterday a finance student asked me to help her crawl data from a website and export it to Excel. A quick look showed 1000+ records, so compiling them by hand was out of the question. Web crawling isn't really my specialty, but as a computer science student I agreed anyway.

My first thought: can't we just send a GET request and parse the returned HTML for the information we need? Sure, that works for a site that doesn't require login, but it won't work for this one. So the first step is packet capture: analyzing the POST request the browser sends to the server when the user logs in. Many browsers have built-in capture tools, but I prefer [HttpWatch].

Packet capture process:

1. Installing HttpWatch

2. Enter the login page of the website using IE browser

3. Open HttpWatch record to start tracking

4. Enter the account password, confirm login, get the following data:


Focus on the URL, the POST data, and the cookies the server returns.


The cookies contain the login state; to be safe, we can pass all 4 cookie values back to the server on subsequent requests.
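To show what "passing the 4 cookie values back" looks like in code, here is a minimal sketch using a `CookieContainer`. The cookie names and values below are placeholders, not the real ones from the captured login response:

```csharp
using System;
using System.Net;

class CookieDemo
{
    // Hypothetical sketch: the capture showed four cookie values; the names
    // here (SESSIONID, token, uid, sign) are illustrative placeholders.
    public static CookieContainer BuildLoginCookies(string domain)
    {
        var jar = new CookieContainer();
        jar.Add(new Cookie("SESSIONID", "abc123", "/", domain));
        jar.Add(new Cookie("token", "t-456", "/", domain));
        jar.Add(new Cookie("uid", "1001", "/", domain));
        jar.Add(new Cookie("sign", "s-789", "/", domain));
        return jar;
    }

    static void Main()
    {
        CookieContainer jar = BuildLoginCookies("example.com");
        Console.WriteLine(jar.Count); // number of cookies stored
    }
}
```

A `CookieContainer` built this way can be assigned to `HttpWebRequest.CookieContainer`, so the cookies ride along on every request automatically.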


First, the C# code that sends the POST request (and captures the cookies the server returns):

        string url = "url";                // the login URL from the capture
        string postDataStr = "POST Data";  // the captured form fields are discrete key-value pairs,
                                           // so postDataStr is easy to assemble
        CookieContainer cookie = new CookieContainer();

        // Log in and get the cookies returned by the server
        HttpPost(url, postDataStr, ref cookie);

        private string HttpPost(string Url, string postDataStr, ref CookieContainer cookie)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            byte[] postData = Encoding.UTF8.GetBytes(postDataStr);
            request.ContentLength = postData.Length;
            request.CookieContainer = cookie;
            Stream myRequestStream = request.GetRequestStream();
            myRequestStream.Write(postData, 0, postData.Length);
            myRequestStream.Close();
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            response.Cookies = cookie.GetCookies(response.ResponseUri);
            Stream myResponseStream = response.GetResponseStream();
            StreamReader myStreamReader = new StreamReader(myResponseStream, Encoding.GetEncoding("utf-8"));
            string retString = myStreamReader.ReadToEnd();
            myStreamReader.Close();
            myResponseStream.Close();
            return retString;
        }
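Since the POST body is just discrete key-value pairs, it can be assembled (and URL-encoded, which matters if values contain spaces or symbols) with a small helper. This is a sketch; the field names are placeholders, not the site's real form fields:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PostDataDemo
{
    // Sketch of assembling an application/x-www-form-urlencoded body from
    // the key-value pairs captured by HttpWatch. Field names are placeholders.
    public static string BuildPostData(Dictionary<string, string> fields)
    {
        return string.Join("&", fields.Select(kv =>
            Uri.EscapeDataString(kv.Key) + "=" + Uri.EscapeDataString(kv.Value)));
    }

    static void Main()
    {
        var fields = new Dictionary<string, string>
        {
            { "username", "alice" },
            { "password", "p@ss word" }
        };
        Console.WriteLine(BuildPostData(fields));
        // username=alice&password=p%40ss%20word
    }
}
```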

With the cookies in hand, you can grab the data you need from the site; the next step is to send GET requests:

        private string HttpGet(string Url, string postDataStr, CookieContainer cookie)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Url + (postDataStr == "" ? "" : "?") + postDataStr);
            request.Method = "GET";
            request.ContentType = "text/html;charset=utf-8";
            request.CookieContainer = cookie;
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            Stream myResponseStream = response.GetResponseStream();
            StreamReader myStreamReader = new StreamReader(myResponseStream, Encoding.GetEncoding("utf-8"));
            string retString = myStreamReader.ReadToEnd();
            myStreamReader.Close();
            myResponseStream.Close();
            return retString;
        }
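As an aside (not from the original article): on newer .NET versions the same cookie-sharing pattern can be expressed with `HttpClient`, where a single client configured with a shared `CookieContainer` carries the login cookies across the POST (login) and subsequent GET (data) requests automatically. A minimal sketch:

```csharp
using System;
using System.Net;
using System.Net.Http;

class ClientDemo
{
    // Alternative sketch: one HttpClient with a shared CookieContainer
    // replaces the manual HttpPost/HttpGet pair above.
    public static HttpClient CreateLoginClient(CookieContainer jar)
    {
        var handler = new HttpClientHandler
        {
            CookieContainer = jar,
            UseCookies = true
        };
        return new HttpClient(handler);
    }

    static void Main()
    {
        var jar = new CookieContainer();
        using (HttpClient client = CreateLoginClient(jar))
        {
            // Typical usage (URLs and fields are placeholders):
            // await client.PostAsync(loginUrl, new FormUrlEncodedContent(fields));
            // string html = await client.GetStringAsync(dataUrl);
        }
    }
}
```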

Since the server returns HTML, how do you quickly extract the information you need from so much markup? Here we can use an efficient and powerful third-party library, NSoup. (HtmlParser is often recommended online as well, but in my own comparison I found that HtmlParser falls well short of NSoup in both efficiency and simplicity.)

Because tutorials for NSoup are relatively scarce online, you can also refer to the Jsoup tutorial (NSoup is a .NET port of Jsoup): http://www.open-open.com/jsoup/
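A minimal parsing sketch using NSoup's Jsoup-style API. The HTML, the `tr` selector, and the column layout are illustrative assumptions, since the article does not show the real page structure:

```csharp
using System;
using NSoup;
using NSoup.Nodes;

class ParseDemo
{
    static void Main()
    {
        // Placeholder HTML standing in for the page returned by HttpGet;
        // a real page would need the selector adjusted to its structure.
        string html = "<table><tr><td>Alice</td><td>90</td></tr>" +
                      "<tr><td>Bob</td><td>85</td></tr></table>";

        Document doc = NSoupClient.Parse(html);

        // Select every table row and print its combined cell text.
        foreach (Element row in doc.Select("tr"))
        {
            Console.WriteLine(row.Text());
        }
    }
}
```

Each `row.Text()` call concatenates the text of the cells in that row, which maps naturally onto one Excel row per record.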

Finally, here is some of the data I grabbed from the site:


