While learning the Requests library I kept running into a "Max retries exceeded with url" error. The workarounds I found online didn't seem to help at first; later I noticed that Fiddler was running, which may have left too many HTTP connections open that were never closed.
Workarounds:
1. Increase the number of connection retries
requests.adapters.DEFAULT_RETRIES = 5
2. Close redundant connections
Requests uses the urllib3 library under the hood, and HTTP connections are keep-alive by default; setting keep_alive to False on the session turns this off so connections are closed after each request (a short sketch of both fixes follows this list).
How to do it:
s = requests.session()
s.keep_alive = False
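A minimal sketch combining both workarounds on one session (the target URL here is only a placeholder, and the "Connection: close" header is an additional commonly suggested measure, not part of the original post):

import requests

requests.adapters.DEFAULT_RETRIES = 5   # workaround 1: allow a few connection retries

s = requests.session()
s.keep_alive = False                    # workaround 2: do not keep connections alive

# Optionally also ask the server to close the connection after each response.
resp = s.get('https://www.baidu.com', headers={'Connection': 'close'})
print(resp.status_code)

The original example below wraps the same idea in a small helper function.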
import requests, json

def send_req(url, method, data=None):
    if method == 'GET':
        requests.adapters.DEFAULT_RETRIES = 5   # allow a few connection retries
        s = requests.session()
        s.keep_alive = False                    # close connections instead of keeping them alive
        res = s.get(url=url).json()             # send the request through the session
        return json.dumps(res, indent=2)
    else:
        res = requests.post(url=url, data=data)
        return res.json()

url = r'https://www.baidu.com/home/xman/data/tipspluslist?indextype=manht&_req_seqid=0xe84d39f7000079b2&asyn=1&t=1535105478702&sid=26524_1442_21097_26921_22159'
res = send_req(url, 'GET')
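An alternative worth knowing (not from the original post, and assuming a reasonably recent requests/urllib3): instead of patching the module-level DEFAULT_RETRIES, a retry policy can be mounted on a single Session through HTTPAdapter, which keeps the retry behaviour local to that session:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry_policy = Retry(total=5, backoff_factor=0.5)   # wait a little longer between attempts
adapter = HTTPAdapter(max_retries=retry_policy)
session.mount('http://', adapter)
session.mount('https://', adapter)

resp = session.get('https://www.baidu.com', timeout=10)
print(resp.status_code)

Because nothing global is modified, other code using requests in the same process is unaffected.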