Python Crawler (2)


Cookies

Requests accesses cookie information through its session object.

The five key attributes of a cookie:

name, value, domain, path, expires


Printing the five key attributes of each cookie:

Import Requestsurl = "http://www.hao123.com" s = requests.session () R = s.get (URL) print (r.cookies) for Cook in R.cookies:  Print (cook.name) print (cook.value) print (cook.domain) print (cook.path) print (cook.expires) print ("#" * 30)


Printing results:

<RequestsCookieJar[<Cookie BAIDUID=C425EDB0B83699C89BDA3D02B25C53BA:FG=1 for .hao123.com/>, <Cookie hz=0 for .www.hao123.com/>, <Cookie ft=1 for www.hao123.com/>, <Cookie v_pg=normal for www.hao123.com/>]>
BAIDUID
C425EDB0B83699C89BDA3D02B25C53BA:FG=1
.hao123.com
/
1548337791
##############################
hz
0
.www.hao123.com
/
None
##############################
ft
1
www.hao123.com
/
1516809599
##############################
v_pg
normal
www.hao123.com
/
None
##############################

Only when you hold the cookies issued by the login page can you make requests that the site accepts as a legitimate, logged-in visit.
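
A minimal sketch of that idea (the login URL and form field names here are made-up placeholders, not a real site): log in once through a session, and the session carries the returned cookies on every later request.

import requests

# Hypothetical login endpoint and form fields; replace with the real ones.
login_url = "http://example.com/login"
profile_url = "http://example.com/profile"

s = requests.session()
# The session stores any Set-Cookie headers from the login response...
s.post(login_url, data={"username": "user", "password": "secret"})
# ...and sends those cookies automatically on subsequent requests.
r = s.get(profile_url)
print(r.status_code)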

How to access a website using known cookie information:

import requests

url = 'http://httpbin.org/cookies'
r = requests.get(url, cookies={'key1': 'value1', 'key2': 'value2'})
print(r.text)

Results:

{"Cookies": {"Key1": "Value1", "Key2": "Value2"}}

Requesting your own IP address:

Import Requestsurl = "http://2017.ip138.com/ic.asp" s = requests.session () R = S.get (url=url) print (r.encoding) r.encoding = "GBK" Print (R.text)


Proxy Access:

When scraping, proxies are often used to avoid having your IP blocked.

Requests supports this through its proxies parameter.

Example with a free proxy from Xici Proxy (a free proxy-list site):

import requests

proxies = {"http": "http://139.208.187.142:8118"}
r1 = requests.get("http://2017.ip138.com/ic.asp", proxies=proxies)
r1.encoding = "GBK"
print(r1.text)

Request result: the page reports the proxy's IP address (139.208.187.142) instead of your own.

If the proxy requires a username and password, use this form:

Proxies = {"http": "Http://user:[email protected]:3128/",}


Garbled Chinese text with Requests:

import requests

param = {"key1": "hello", "key2": "world"}  # defined but unused in this example
url = 'https://www.baidu.com/'
r = requests.get(url=url)
print(r.encoding)  # ISO-8859-1 is used by default here
r.encoding = "utf-8"
print(r.text)

The Chinese text then displays correctly.
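
Alternatively, instead of hard-coding the encoding, you can let Requests re-guess it from the response body via its apparent_encoding attribute; a short sketch:

import requests

r = requests.get('https://www.baidu.com/')
print(r.encoding)                 # ISO-8859-1, guessed from the headers alone
r.encoding = r.apparent_encoding  # re-detect the encoding from the body bytes
print(r.encoding)                 # e.g. utf-8
print(r.text)                     # Chinese characters now decode correctly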


Summary:

Requests provides convenient interfaces throughout: whenever data is transmitted, it can be passed as key:value dicts, which is precisely why Requests is the preferred choice.

If you use urllib you are not so lucky: many things must be handled yourself, and data cannot be passed directly as a dict; it has to be encoded first.
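
A sketch of that difference, sending the same query parameters both ways (using the Python 3 module names urllib.parse and urllib.request):

import urllib.parse
import urllib.request

import requests

params = {"key1": "hello", "key2": "world"}

# requests: hand over the dict directly.
r = requests.get("http://httpbin.org/get", params=params)
print(r.url)  # http://httpbin.org/get?key1=hello&key2=world

# urllib: encode the dict yourself and splice the URL together by hand.
query = urllib.parse.urlencode(params)
with urllib.request.urlopen("http://httpbin.org/get?" + query) as resp:
    print(resp.geturl())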

