Basic knowledge of Python crawlers

Source: Internet
Author: User
Tags: urlencode

With the explosive growth of data, we often need to pick out the data we want from the Internet for our own analysis and experiments. This is where crawler technology comes in. What follows is a first encounter with the Python crawler!

1. Request-response

When implementing a crawler in Python (Python 2 here), the two main libraries used are urllib and urllib2. First, a short piece of code to illustrate:

import urllib
import urllib2

url = "http://www.baidu.com"
request = urllib2.Request(url)
response = urllib2.urlopen(request)
print response.read()

We know that HTML is the skeleton of a web page, JavaScript its muscle, and CSS the clothes it wears. The code above fetches the source of the Baidu home page down to the local machine.

Here url is the address of the page to crawl, request is the request object we build for that URL, and response is the reply returned once the request is accepted. Finally, read() outputs the source code of the Baidu page.
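For readers on Python 3, note that urllib and urllib2 were merged there: the request machinery lives in urllib.request. A minimal sketch of the same fetch, assuming the page is UTF-8 encoded:

# Python 3 equivalent of the fetch above (urllib2 -> urllib.request)
import urllib.request

url = "http://www.baidu.com"
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
# read() returns bytes in Python 3; decoding assumes the page is UTF-8
print(response.read().decode('utf-8'))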

2. GET and POST

Both are ways to pass data to a web page. The most important difference is that the GET method appends all the parameters directly to the link, so you can see intuitively what was submitted; of course, if a password is among the parameters, this is an unsafe choice.

POST does not show the parameters in the URL, which is safer, but also less convenient if you want to see directly what is being submitted. Choose whichever suits the situation.
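To make "the link contains all the parameters" concrete, here is a small sketch of what urllib.urlencode produces; the host and field names are placeholders:

import urllib

# urlencode turns a dict into a query string
# (key order is not guaranteed for Python 2 dicts)
params = urllib.urlencode({'username': 'user', 'page': '1'})
print params  # e.g. username=user&page=1

# GET: the parameters ride on the URL itself, visible in the link
print "http://example.com/search" + "?" + params
# POST: the same string would instead travel in the request body,
# so it never appears in the URL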

POST mode:

import urllib
import urllib2

values = {'username': '[email protected]', 'Password': 'XXXX'}
data = urllib.urlencode(values)
url = 'https://passport.csdn.net/account/login?from=http://my.csdn.net/my/mycsdn'
request = urllib2.Request(url, data)
response = urllib2.urlopen(request)
print response.read()

GET mode:

import urllib
import urllib2

values = {'username': '[email protected]', 'Password': 'XXXX'}
data = urllib.urlencode(values)
url = "http://passport.csdn.net/account/login"
geturl = url + "?" + data
request = urllib2.Request(geturl)
response = urllib2.urlopen(request)
print response.read()
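Note how urllib2 decides the method: a Request built with a data argument goes out as POST, one built without goes out as GET. A quick way to check, reusing the same url and an illustrative data string:

import urllib
import urllib2

data = urllib.urlencode({'username': 'user', 'Password': 'XXXX'})
url = "http://passport.csdn.net/account/login"

print urllib2.Request(url).get_method()        # GET  (no data attached)
print urllib2.Request(url, data).get_method()  # POST (data becomes the body)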

3. Exception handling

Exceptions are handled with a try-except statement:

import urllib2

try:
    response = urllib2.urlopen("http://www.xxx.com")
except urllib2.URLError as e:
    print e.reason
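URLError has a subclass, HTTPError, which is raised when the server does answer but with an error status code; catching the subclass first lets you tell the two cases apart. A minimal sketch (the URL is just a placeholder):

import urllib2

try:
    response = urllib2.urlopen("http://www.xxx.com")
except urllib2.HTTPError as e:
    # the server replied, but with an error status (404, 500, ...)
    print e.code
except urllib2.URLError as e:
    # no usable reply at all: DNS failure, connection refused, etc.
    print e.reason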

Through the introduction and code shown above, we now have a preliminary understanding of the crawling process. I hope it is helpful to everyone.
