Python Crawler Primer (6): Use of cookies


Why use cookies?

A cookie is a small piece of data (often encrypted) that a website stores on the user's machine in order to identify the user and track the session.

For example, some sites require you to log in before certain pages can be accessed; without logging in, crawling those pages is not allowed. We can use the urllib2 library to save the cookies obtained when we log in, and then send them along when crawling other pages.

Before we do, we must first introduce the concept of an opener.

1. Opener

When you fetch a URL, you use an opener (an instance of urllib2.OpenerDirector). So far we have always used the default opener via urlopen, which can be thought of as a special opener instance whose parameters are just a url, data and timeout.

If we need to handle cookies, that default opener is not enough; we need to create a more general opener in order to configure cookie handling.
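The difference between urlopen and a custom opener can be seen without any cookie machinery at all. A minimal sketch (the try/except import is only there so it also runs under Python 3, where urllib2 became urllib.request; the User-Agent string is made up):

```python
# urllib2 is the Python 2 name; under Python 3 the same code lives in urllib.request.
try:
    import urllib2
except ImportError:
    import urllib.request as urllib2

# urlopen() is just a module-level convenience wrapper around a default
# OpenerDirector. build_opener() gives you your own OpenerDirector that
# you can customize, e.g. with extra default headers or extra handlers.
opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'my-crawler/0.1')]  # hypothetical UA string
print(type(opener).__name__)   # OpenerDirector
```

Calling opener.open(url) then behaves like urlopen(url), but every request made through this opener carries the customizations you attached to it.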

2. cookielib

The primary role of the cookielib module is to provide objects that store cookies, so that it can be used together with the urllib2 module to access Internet resources. The module is quite powerful: with an instance of its CookieJar class we can capture cookies and resend them on subsequent requests, which makes it possible, for example, to simulate a login. The main classes of the module are CookieJar, FileCookieJar, MozillaCookieJar and LWPCookieJar.

Their relationship: CookieJar --derives--> FileCookieJar --derives--> MozillaCookieJar and LWPCookieJar
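This hierarchy can be verified directly with issubclass. A quick check (the try/except import lets it run under Python 3 as well, where cookielib was renamed http.cookiejar):

```python
# cookielib was renamed http.cookiejar in Python 3; this import works under either.
try:
    import cookielib
except ImportError:
    import http.cookiejar as cookielib

# CookieJar is the base class; FileCookieJar adds load()/save(); MozillaCookieJar
# and LWPCookieJar implement two concrete on-disk file formats.
print(issubclass(cookielib.FileCookieJar, cookielib.CookieJar))         # True
print(issubclass(cookielib.MozillaCookieJar, cookielib.FileCookieJar))  # True
print(issubclass(cookielib.LWPCookieJar, cookielib.FileCookieJar))      # True
```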

1) Get cookies and save them to a variable

First, let's use a CookieJar object to capture cookies into a variable and get a feel for it:

```python
import urllib2
import cookielib

# Declare a CookieJar instance to hold the cookies
cookie = cookielib.CookieJar()
# Use urllib2's HTTPCookieProcessor to create a cookie handler
handler = urllib2.HTTPCookieProcessor(cookie)
# Build an opener from the handler
opener = urllib2.build_opener(handler)
# opener.open() works just like urllib2.urlopen(); it also accepts a Request object
response = opener.open('http://www.baidu.com')
for item in cookie:
    print 'Name = ' + item.name
    print 'Value = ' + item.value
```

Using this method we save the cookies into the variable and then print out their values; the result looks like this:

```
Name = BAIDUID
Value = B07B663B645729F11F659C02AAE65B4C:FG=1
Name = BAIDUPSID
Value = B07B663B645729F11F659C02AAE65B4C
Name = H_PS_PSSID
Value = 12527_11076_1438_10633
Name = BDSVRTM
Value = 0
Name = BD_HOME
Value = 0
```

2) Save cookies to a file

In the method above we saved the cookies into the cookie variable. What if we want to save them to a file instead? This is where the FileCookieJar object comes in; here we use its subclass MozillaCookieJar to implement the saving of cookies.

```python
import cookielib
import urllib2

# File to save the cookies to; cookie.txt will be created in the current directory
filename = 'cookie.txt'
# Declare a MozillaCookieJar instance to capture the cookies and later write them to the file
cookie = cookielib.MozillaCookieJar(filename)
# Use urllib2's HTTPCookieProcessor to create a cookie handler
handler = urllib2.HTTPCookieProcessor(cookie)
# Build an opener from the handler
opener = urllib2.build_opener(handler)
# Make a request; opener.open() works just like urllib2.urlopen()
response = opener.open("http://www.baidu.com")
# Save the cookies to the file
cookie.save(ignore_discard=True, ignore_expires=True)
```

The two parameters of the final save call deserve an explanation.

The official explanations are as follows:

ignore_discard: save even cookies set to be discarded.

ignore_expires: save even cookies that have expired. The file is overwritten if it already exists.

Thus, ignore_discard means the cookie is saved even if it is marked to be discarded at the end of the session, and ignore_expires means the cookie is saved even if it has already expired; if the file already exists, it is overwritten. Here we set both to True. After running the program, the cookies are saved to the file cookie.txt; open it and you will see the saved entries.
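To see exactly what save() writes, and why ignore_discard matters, we can build a session cookie by hand instead of fetching one from the network. This is only an illustrative sketch: the cookie name, value and domain below are made up, and the try/except import makes it run under Python 3 too:

```python
# cookielib was renamed http.cookiejar in Python 3; this import works under either.
try:
    import cookielib
except ImportError:
    import http.cookiejar as cookielib

# Build a cookie by hand (no network needed); name/value/domain are made up.
# discard=True marks it as a session cookie.
c = cookielib.Cookie(
    version=0, name='DEMOID', value='deadbeef', port=None, port_specified=False,
    domain='.example.com', domain_specified=True, domain_initial_dot=True,
    path='/', path_specified=True, secure=False, expires=None,
    discard=True, comment=None, comment_url=None, rest={})

jar = cookielib.MozillaCookieJar('cookie.txt')
jar.set_cookie(c)
# Without ignore_discard=True, this session cookie would be silently skipped.
jar.save(ignore_discard=True, ignore_expires=True)
print(open('cookie.txt').read())
```

The saved file starts with a "# Netscape HTTP Cookie File" header, followed by one tab-separated line per cookie: domain, initial-dot flag, path, secure flag, expiry, name, value.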

3) Obtain cookies from the file and access a site

Now that we have saved the cookies to a file, if we want to use them later we can read them back and visit the website with them, like this:

```python
import cookielib
import urllib2

# Create a MozillaCookieJar instance
cookie = cookielib.MozillaCookieJar()
# Read the cookies from the file into the jar
cookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
# Create the request
req = urllib2.Request("http://www.baidu.com")
# Build an opener with urllib2's build_opener method
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
response = opener.open(req)
print response.read()
```

Imagine: if our cookie.txt file held the cookies of someone logged in to Baidu, we could load that file and use the method above to simulate that person's logged-in session on Baidu.

4) Use cookies to simulate a website login

Below we take my school's system as an example, use cookies to simulate a login, and save the cookie information to a text file. Behold the power of cookies!

Note: I have changed the password, so don't try to sneak into the course-selection system o(╯-╰)o

```python
import urllib
import urllib2
import cookielib

filename = 'cookie.txt'
# Declare a MozillaCookieJar instance to capture the cookies and later write them to the file
cookie = cookielib.MozillaCookieJar(filename)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
postdata = urllib.urlencode({
    'stuid': '201200131012',
    'pwd': '23342321'
})
# URL of the academic affairs system's login page
loginUrl = 'http://jwxt.sdu.edu.cn:7890/pls/wwwbks/bks_login2.login'
# Simulate the login; the server's cookies end up in the jar
result = opener.open(loginUrl, postdata)
# Save the cookies to cookie.txt
cookie.save(ignore_discard=True, ignore_expires=True)
# Use the cookies to request another URL -- the grade query page
gradeUrl = 'http://jwxt.sdu.edu.cn:7890/pls/wwwbks/bkscjcx.curscopre'
result = opener.open(gradeUrl)
print result.read()
```

The principle of the program above is as follows:

Create an opener with a cookie handler, capture the login cookies while requesting the login URL, and then use those cookies to access other URLs.

For example, you can then fetch pages such as the grade query or this semester's timetable. That's how simulated login is done. Pretty cool, isn't it?
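The same round trip can be demonstrated offline. The sketch below is a Python 3 rendition of the principle (there urllib2 became urllib.request and cookielib became http.cookiejar), using a throwaway local HTTP server instead of the real course-selection site; the /login and /grade paths and the session value are invented for the demo:

```python
import http.cookiejar
import http.server
import threading
import urllib.request

# A tiny local server standing in for the login site: /login sets a
# session cookie, and /grade only answers b'ok' if that cookie comes back.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/login':
            self.send_response(200)
            self.send_header('Set-Cookie', 'session=abc123')
            self.end_headers()
            self.wfile.write(b'logged in')
        else:
            ok = 'session=abc123' in (self.headers.get('Cookie') or '')
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok' if ok else b'denied')

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = 'http://127.0.0.1:%d' % server.server_address[1]

# Same principle as the Python 2 code above, with the Python 3 module names:
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open(base + '/login')                 # "login": the cookie is captured here
grade = opener.open(base + '/grade').read()  # the cookie is re-sent automatically
print(grade)
server.shutdown()
```

The second request succeeds only because the opener's HTTPCookieProcessor replays the cookie captured on the first request, which is exactly what the login example relies on.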

