Problem:
When the following code is executed:

import urllib.request

def set_iplsit():
    url = 'https://www.whatismyip.com/'
    response = urllib.request.urlopen(url)
    html = response.read().decode('utf-8')
the following exception occurs:

C:\Users\54353\AppData\Local\Programs\Python\Python36\python.exe "c:/users/54353/pycharmprojects/untitled/Crawler/image-a website.py"
Traceback (most recent call last):
  File "c:/users/54353/pycharmprojects/untitled/Crawler/image-a website.py", line 100, in <module>
    ip = set_iplsit2()
  File "c:/users/54353/pycharmprojects/untitled/Crawler/image-a website.py", line 95, in set_iplsit2
    response = ure.urlopen(url)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 570, in error
    return self._call_chain(*args)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Users\54353\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

Process finished with exit code 1
Analysis:
The exception occurs because urllib.request.urlopen sends a bare request: the server receives only a plain page request, with no information about the browser, operating system, or hardware platform that issued it. Requests missing this information often look like non-normal access, such as a crawler.
To block such abnormal access, some websites check the User-Agent in the request headers, and reject the request if the User-Agent is missing or does not look like a normal browser.
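You can see why the check fails for urllib: its default opener announces itself with a generic Python User-Agent, which many sites flag as a bot. A minimal sketch to inspect the default headers:

```python
import urllib.request

# build_opener() returns an OpenerDirector whose addheaders list
# contains the default User-Agent that urlopen would send.
opener = urllib.request.build_opener()
print(opener.addheaders)
# e.g. [('User-agent', 'Python-urllib/3.6')] -- clearly not a browser
```
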
Workaround:
Add a User-Agent header to the request. The code is as follows:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0'}
req = urllib.request.Request(url=chaper_url, headers=headers)
urllib.request.urlopen(req).read()
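Applying the workaround to the function from the problem gives a complete sketch (the function name and URL are taken from the code above; the User-Agent string is just an example browser identity):

```python
import urllib.request

def set_iplsit():
    url = 'https://www.whatismyip.com/'
    # Pretend to be a regular browser so the server accepts the request.
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) '
                      'Gecko/20100101 Firefox/23.0'
    }
    # Attach the headers via a Request object; urlopen then sends them.
    req = urllib.request.Request(url=url, headers=headers)
    with urllib.request.urlopen(req) as response:
        html = response.read().decode('utf-8')
    return html
```

Note that the headers must be attached to a Request object; passing the plain URL string to urlopen would again send the default Python User-Agent.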