Manual Crawler Process Notes 1 (Python 3)


First, importing the library

Since I am just beginning to learn crawlers, I will start with the urllib library.

First, import urllib; what we mainly use is the urllib.request module.

import urllib.request as ur

Second, setting the global parameters

I use three variables: the proxy server address, the destination URL, and the storage path.

" 110.183.238.145:811 "  "https://www.baidu.com"# Sets the target document (path +"e:/ workspace/pycharm/codespace/books/python_web_crawler_book/chapter4/demo2/1.html"

Third, making the crawler simulate a browser when accessing the page

Because urlopen() does not support some advanced HTTP features, there are two ways to achieve the desired access behavior.

One is to use build_opener() to modify the headers; the other is to add a header with add_header(). I prefer the second, which is used as follows:

req = ur.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')
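For comparison, here is a minimal sketch of the first approach, which attaches the header to an opener built with build_opener() instead of to the Request object (the User-Agent string is the same Firefox one as above):

# Alternative: attach the header to an opener instead of the Request object
opener = ur.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')]
# opener.open(url) would then send the request with this header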

Fourth, setting up the proxy server

proxy = ur.ProxyHandler({'http': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)
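One caveat: ProxyHandler({'http': proxy_add}) only routes plain http:// URLs, and the target here is https://www.baidu.com, so the https request would go around the proxy. A small sketch that registers the proxy for both schemes (assuming the proxy supports HTTPS tunneling):

# Register the proxy for both schemes so the https URL also goes through it
proxy = ur.ProxyHandler({'http': proxy_add, 'https': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)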

Fifth, crawling the page and saving the information

="wb") # Information transfer Fh.write (info) # Close file Fh.close ()

Sixth, the full source code:

import urllib.request as ur

# The address of the proxy server
proxy_add = "110.183.238.145:811"
# The destination URL
url = "https://www.baidu.com"
# The target document (path + file name, including the suffix)
aim_file = "e:/workspace/pycharm/codespace/books/python_web_crawler_book/chapter4/demo2/1.html"

# Add a header
req = ur.Request(url)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0')

# Set up the proxy
proxy = ur.ProxyHandler({'http': proxy_add})
opener = ur.build_opener(proxy, ur.HTTPHandler)
ur.install_opener(opener)

# Read the data
data = ur.urlopen(req).read()
# Open the target file
fh = open(aim_file, "wb")
# Write the information
fh.write(data)
# Close the file
fh.close()
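If the run succeeds, 1.html contains the downloaded page. As a quick sanity check (not part of the original script), you could print the response code and the number of bytes before writing:

# Optional check before writing the file
response = ur.urlopen(req)
print("HTTP status:", response.getcode())  # 200 on success
data = response.read()
print("Downloaded", len(data), "bytes")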

  
