"Python" python2.x vs. python3.x contrast + indent error Resolution

Source: Internet
Author: User
Tags: urlencode

I will only list the differences I actually ran into, not all of them.

Key differences:

1. urllib2 replaced with urllib.request

2. urllib.urlencode replaced with urllib.parse.urlencode

3. cookielib replaced with http.cookiejar

4. print "" replaced with print("")

5. urllib2.URLError replaced with urllib.error.URLError

6. urllib2.HTTPError replaced with urllib.error.HTTPError

7. except urllib2.URLError, e: replaced with except urllib.error.URLError as e: (a short sketch pulling these changes together follows this list)
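
To see several of these changes in one place, here is a minimal Python 3 sketch I put together (the URL and the form field are placeholders, not taken from the tutorial):

import urllib.request
import urllib.parse
import urllib.error
import http.cookiejar

# urlencode now lives in urllib.parse, and the request body must be bytes
data = urllib.parse.urlencode({'q': 'test'}).encode('utf-8')
request = urllib.request.Request('http://www.example.com', data)

try:
    response = urllib.request.urlopen(request, timeout=5)
    print(response.read())                # print is a function in Python 3
except urllib.error.HTTPError as e:       # "as e" instead of ", e"
    print('Error code:', e.code)
except urllib.error.URLError as e:
    print('Reason:', e.reason)

# cookielib is now http.cookiejar
cookie = http.cookiejar.CookieJar()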

I write code in Python 3.4.3's built-in IDLE and often run into indentation errors that are hard to track down.

Solution: copy the code into Notepad++ and turn on the view option that shows spaces and tabs; you can then see exactly where the problem is.
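
Another check I find useful (my own addition, not from the original article): the standard library's tabnanny module reports ambiguous tab/space indentation. The file name below is just a placeholder:

import tabnanny

# 'myscript.py' is a placeholder; point it at the file IDLE complains about
tabnanny.check('myscript.py')  # reports the offending lines, or prints nothing if indentation is unambiguous
# the same check can also be run from a shell:  python -m tabnanny -v myscript.py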

Setting headers on a network request, written the Python 2.x way:

import urllib
import urllib2

url = 'http://www.server.com/login'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {'username': 'kzy', 'password': '123'}
headers = {'User-Agent': user_agent}
data = urllib.urlencode(values)
request = urllib2.Request(url, data, headers)
response = urllib2.urlopen(request)
page = response.read()

The same request written the Python 3.x way:

import urllib.parse
import urllib.request

url = 'http://www.baidu.com'
user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.93 Safari/537.36'
values = {'username': 'kzy', 'password': '123'}
headers = {'User-Agent': user_agent}
data = urllib.parse.urlencode(values).encode(encoding='UTF8')  # the encoding must be specified here
request = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(request)
page = response.read()
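
Two Python 3 details worth underlining here (my own note, not from the tutorial): the data passed to the request must be bytes, which is why .encode() is needed, and response.read() also returns bytes. A minimal follow-up, assuming the page is UTF-8 encoded:

# page was read as bytes above; decode it to get a str (assumes the page is UTF-8)
page_text = page.decode('utf-8')
print(page_text[:200])  # show the first 200 characters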

I am learning from a static crawler tutorial and have rewritten all of the code from its basics section.

Tutorial Address: http://cuiqingcai.com/1052.html

The original code in the tutorial is 2.x; I rewrote it all in 3.x, as follows:

import urllib.parse
import urllib.request

"""
response = urllib.request.urlopen("http://www.baidu.com")
print(response.read())
"""

"""
# Request with header and data set
url = 'http://www.baidu.com'
user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.93 Safari/537.36'
values = {'username': 'kzy', 'password': '123'}
headers = {'User-Agent': user_agent}
data = urllib.parse.urlencode(values).encode(encoding='UTF8')
request = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(request)
page = response.read()
"""

"""
# Set a proxy, to avoid being denied access when one IP makes too many requests
enable_proxy = True
proxy_handler = urllib.request.ProxyHandler({"http": 'http://some-proxy.com:8080'})
null_proxy_handler = urllib.request.ProxyHandler({})
if enable_proxy:
    opener = urllib.request.build_opener(proxy_handler)
else:
    opener = urllib.request.build_opener(null_proxy_handler)
urllib.request.install_opener(opener)
"""

"""
# Set a timeout
response = urllib.request.urlopen('http://www.baidu.com', timeout=10)
"""

"""
# Use the HTTP PUT or DELETE method
url = 'http://www.baidu.com'
request = urllib.request.Request(url, data=data)
request.get_method = lambda: 'PUT'  # or 'DELETE'
response = urllib.request.urlopen(request)
"""

"""
# Use DebugLog to print what is sent and received to the screen, for easier debugging
httpHandler = urllib.request.HTTPHandler(debuglevel=1)
httpsHandler = urllib.request.HTTPSHandler(debuglevel=1)
opener = urllib.request.build_opener(httpHandler, httpsHandler)
urllib.request.install_opener(opener)
response = urllib.request.urlopen('https://its.pku.edu.cn/netportal/netportal_UTF-8.jsp', timeout=5)
"""

"""
# URLError exception handling
from urllib.error import URLError, HTTPError
request = urllib.request.Request('http://www.baidu.com')
try:
    urllib.request.urlopen(request, timeout=5)
except HTTPError as e:
    print('Error code: ', e.code)
except URLError as e:
    print('Reason: ', e.reason)
"""

"""
# URLError exception handling, checking attributes to decide what happened
request = urllib.request.Request('https://its.pku.edu.cn/netportal/netportal_UTF-8.jsp')
try:
    urllib.request.urlopen(request, timeout=5)
except urllib.error.URLError as e:
    if hasattr(e, "code"):  # hasattr checks whether an object has a given attribute
        print(e.code)
    if hasattr(e, "reason"):
        print(e.reason)
else:
    print("OK")
"""

"""
# Get cookies and store them in a variable
import http.cookiejar
# Declare a CookieJar instance to hold the cookies
cookie = http.cookiejar.CookieJar()
# Use an HTTPCookieProcessor to create a cookie handler
handler = urllib.request.HTTPCookieProcessor(cookie)
# Build an opener from the handler
opener = urllib.request.build_opener(handler)
# The opener's open method works like urlopen
response = opener.open('https://its.pku.edu.cn/netportal/netportal_UTF-8.jsp')
for item in cookie:
    print('Name = ' + item.name)
    print('Value = ' + item.value)
"""

"""
# Get cookies and save them to a file
import http.cookiejar
# Set the file to save to
filename = 'cookie.txt'
# Declare a MozillaCookieJar instance to hold the cookies and later write them to the file
cookie = http.cookiejar.MozillaCookieJar(filename)
# Create the cookie handler
handler = urllib.request.HTTPCookieProcessor(cookie)
# Build the opener
opener = urllib.request.build_opener(handler)
response = opener.open("https://its.pku.edu.cn/netportal/netportal_UTF-8.jsp")
# Save to the cookie file
cookie.save(ignore_discard=True, ignore_expires=True)
"""

"""
# Load cookies from the file and use them for a request
import http.cookiejar
# Create a MozillaCookieJar instance
cookie = http.cookiejar.MozillaCookieJar()
# Read the cookie content from the file into the variable
cookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
# Create the Request
req = urllib.request.Request('https://its.pku.edu.cn/netportal/netportal_UTF-8.jsp')
# Create the opener
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookie))
response = opener.open(req)
print(response.read())
"""

# Simulated login (not successful yet)
import http.cookiejar
filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookie))
postdata = urllib.parse.urlencode({'stuid': '******', 'pwd': '******'}).encode(encoding='UTF8')
# How do we know the field names are stuid and pwd???
loginurl = 'http://xxxxxx.com'
result = opener.open(loginurl, postdata)
cookie.save(ignore_discard=True, ignore_expires=True)
gradeurl = 'http://xxxxxx.com'
result = opener.open(gradeurl)
print(result.read())
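
About the question in the comment above: the field names (stuid and pwd here) come from the login form's HTML, i.e. the name attributes of its <input> elements, which you can find with "view source" or the browser's developer tools. As a rough sketch of one way to list them (the URL is a placeholder), using the standard library's html.parser:

import urllib.request
from html.parser import HTMLParser

class InputNameCollector(HTMLParser):
    # Collect the name attribute of every <input> tag on the page
    def __init__(self):
        super().__init__()
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            attrs = dict(attrs)
            if 'name' in attrs:
                self.names.append(attrs['name'])

# 'http://xxxxxx.com/login' is a placeholder for the real login page
html = urllib.request.urlopen('http://xxxxxx.com/login').read().decode('utf-8')
parser = InputNameCollector()
parser.feed(html)
print(parser.names)  # the POST field names usually show up here, e.g. ['stuid', 'pwd']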

"Python" python2.x vs. python3.x contrast + indent error Resolution

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.