Sometimes the pages you want to crawl require you to be logged in, which means each request must carry a cookie.
Several ways to carry cookies are documented below.
```python
# coding=utf-8
import requests

s = requests.Session()
login_data = {'username': 'teacher', 'password': 'teacher'}

# Method 1: log in within a session, then reuse the same session
# resp1 = s.post('http://192.168.2.132/login/', data=login_data)
# r = s.get('http://192.168.2.132/personal_live/')

# Method 2: log in with requests directly, then pass the returned cookies
# resp1 = requests.post('http://192.168.2.132/login/', data=login_data)
# print('cookie: ' + str(resp1.cookies))
# r = requests.get('http://192.168.2.132/personal_live/', cookies=resp1.cookies)

# Method 3: copy the cookie from the browser manually
# c = {'sessionid': '3ps7ouyox1l43alcb7rafxg9dtfnurcb'}
# r = requests.get('http://192.168.2.132/personal_live/', cookies=c)

c = {
    '.CNBlogsCookie': 'd020d...',
    '.Cnblogs.AspNetCore.Cookies': 'CFDJ...WA',
    'syntaxhighlighter': 'java',
    'serverid': '560...34',
}
r = requests.get('https://i.cnblogs.com/EditPosts.aspx?opt=1', cookies=c)
resp = r.text
print(resp)
```
Method 1 uses a single session: log in first, then request the restricted page with the same session.
Method 2 uses requests directly, and works like method 1: log in first to obtain a cookie, then pass that cookie when requesting the restricted page.
Method 3 copies the cookie manually from the browser, then passes it when requesting the restricted page.
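For method 3, the cookie usually comes from the browser's devtools as one raw `Cookie:` header string. A small helper (a sketch of my own, not from the original post) can turn that string into the dict that `requests` expects in its `cookies=` parameter:

```python
def parse_cookie_header(raw: str) -> dict:
    """Split a raw 'k1=v1; k2=v2' Cookie header into {'k1': 'v1', 'k2': 'v2'}."""
    cookies = {}
    for pair in raw.split(';'):
        pair = pair.strip()
        if not pair:
            continue
        # partition keeps any '=' inside the value intact
        name, _, value = pair.partition('=')
        cookies[name] = value
    return cookies

print(parse_cookie_header('sessionid=abc123; csrftoken=xyz'))
# → {'sessionid': 'abc123', 'csrftoken': 'xyz'}
```

The resulting dict can be passed straight to `requests.get(url, cookies=...)`.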
Advantages and Disadvantages
Methods 1 and 2 are nearly identical: the script can be run directly with no manual steps, but neither can handle login pages protected by a verification code (CAPTCHA).
Method 3 works with any website, including those with a CAPTCHA, but the cookie must be obtained manually and re-copied once it expires.
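The reason method 1 needs no explicit cookie handling is that a `requests.Session` stores server-set cookies in its own jar and sends them on every subsequent request. A minimal sketch (assuming `requests` is installed; the cookie value below is the sample sessionid from the code above) that seeds and inspects the jar directly, without hitting the network:

```python
import requests

s = requests.Session()
# The server's Set-Cookie response normally fills this jar automatically
# after s.post(login_url, ...); here we seed it by hand to illustrate.
s.cookies.set('sessionid', '3ps7ouyox1l43alcb7rafxg9dtfnurcb')

# Every subsequent s.get()/s.post() now sends this cookie automatically.
print(s.cookies.get('sessionid'))
# → 3ps7ouyox1l43alcb7rafxg9dtfnurcb
```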