First, import the library
Since I am just beginning to learn web crawlers, I start with the urllib library.
This note mainly uses the urllib.request module, which I import under the alias ur:

import urllib.request as ur
Second, set the global parameters
I divide these into three variables: the proxy server address, the destination URL, and the storage path.
" 110.183.238.145:811 " "https://www.baidu.com"# Sets the target document (path +"e:/ workspace/pycharm/codespace/books/python_web_crawler_book/chapter4/demo2/1.html"
Third, make the crawler simulate a browser when accessing the page
Because urlopen() does not support some of HTTP's advanced features, there are two ways to achieve the desired access effect. One is to use build_opener() to modify the headers, and the other is to add a header using add_header(). I am more inclined to the second, which is used as follows:
req = ur.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')
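For comparison, here is a minimal sketch of the first approach, attaching the header to the opener itself rather than to an individual request; this sketch is my illustration and not part of the original note:

# Alternative: set the header on an opener built with build_opener()
import urllib.request as ur

opener = ur.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')]
data = opener.open('https://www.baidu.com').read()

Both variants send the same User-Agent header; add_header() scopes it to a single request, while the opener applies it to every request made through it.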
Fourth, set up the proxy server
proxy = ur.ProxyHandler({'http': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)
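Two cautions are worth noting here, both my own observations rather than part of the original note. First, ProxyHandler({'http': ...}) only routes plain-HTTP traffic, so the https URL used in this note would bypass the proxy unless an 'https' entry is added as well. Second, free proxies fail frequently, so wrapping the request in error handling is prudent. A minimal sketch, with the 'https' entry and the timeout value being my assumptions:

import urllib.request as ur
import urllib.error

proxy_add = "110.183.238.145:811"
# Register the proxy for both schemes (assumes the proxy also supports https)
proxy = ur.ProxyHandler({'http': proxy_add, 'https': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)

try:
    data = ur.urlopen("https://www.baidu.com", timeout=10).read()
except urllib.error.URLError as e:
    print("Request through the proxy failed:", e.reason)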
Fifth, crawl the page and save the information
="wb") # Information transfer Fh.write (info) # Close file Fh.close ()
Sixth, the complete source code:
import urllib.request as ur

# The address of the proxy server
proxy_add = "110.183.238.145:811"
# The destination URL
url = "https://www.baidu.com"
# Set the destination document (path + file name, including the suffix)
aim_file = "e:/workspace/pycharm/codespace/books/python_web_crawler_book/chapter4/demo2/1.html"

# Add a header
req = ur.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')

# Set up the proxy
proxy = ur.ProxyHandler({'http': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)

# Read the data
data = ur.urlopen(req).read()
# Open the target file
fh = open(aim_file, "wb")
# Write the information
fh.write(data)
# Close the file
fh.close()
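A quick way to confirm the run worked is to read the saved file back and check its size; this check is my addition, not part of the original note:

# Verify the saved file
with open(aim_file, "rb") as fh:
    saved = fh.read()
print(len(saved), "bytes saved to", aim_file)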