Python3 crawler: scraping the Shenzhen public rental housing waiting list (Shenzhen Room Network)


The Shenzhen public rental housing waiting list has been heading toward the 100,000-person mark. That was the data as of October 2016, posted here to give everyone a feel for the scale.

Sure enough, by 2017 it had been updated to 100,000+ (10w+).

So today, let's take this as a crawler practice project.

1. Environment Preparation:

Operating system: WIN10

Python version: python3.5.3

Development tool: Sublime Text 3

Python libraries that need to be installed:

  If Anaconda is not installed, you can download it from https://mirrors.tuna.tsinghua.edu.cn/help/anaconda/; the domestic mirror is relatively fast.

  requests: an upgraded version of urllib that packs in all of its features while simplifying usage (see the official documentation).

  BeautifulSoup: a Python library for extracting data from HTML or XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the document. (See the official documentation.)

  lxml: an HTML parsing package that BeautifulSoup can use to parse web pages. (A quick sanity check of all three libraries follows the install commands below.)

Installation of the requests, BeautifulSoup, and lxml modules: enter the following commands in a Windows Command Prompt window:

pip install requests
pip install beautifulsoup4
pip install lxml
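With the libraries installed, a quick sanity check (a minimal sketch, not from the original article) confirms that requests, BeautifulSoup, and lxml work together; it fetches the waiting-list portal used later in this article and prints the page title:

import requests
from bs4 import BeautifulSoup

# Fetch a page; the waiting-list portal from this article is used here,
# but any HTML page would do for this check.
resp = requests.get('http://anju.szhome.com/gzfpm', timeout=10)
resp.raise_for_status()

# Parse the response with the lxml backend and print the <title> text.
soup = BeautifulSoup(resp.text, 'lxml')
print(soup.title.get_text(strip=True) if soup.title else 'no <title> found')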

Now, straight to the code.

import requests
from bs4 import BeautifulSoup


class Gongzufang():
    # Get the data on one page
    def all_url(self, url):
        html = self.request(url)
        all_a = BeautifulSoup(html.text, 'lxml').find('table', class_='sort-tab').find_all('tr')
        for a in all_a:
            # Join the cells of each table row with '|' and print the line
            title = a.get_text('|', strip=True)
            print(title)
            # self.save_data(url)

    # Get the URL of every page
    def html(self, url):
        html = self.request(url)
        # The third-to-last link in the pager holds the last page number
        max_span = BeautifulSoup(html.text, 'lxml').find('div', class_='fix pagebox').find_all('a')[-3].get_text()
        for page in range(1, int(max_span) + 1):
            page_url = url + '/' + '0-' + str(page) + '-0-0-1'
            self.all_url(page_url)

    # Download the data (left as a stub; see the sketch below)
    def save_data(self, data_url):
        pass

    # Fetch a web page and return the response
    def request(self, url):
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240',
                   'Connection': 'keep-alive',
                   'Referer': 'http://www.mzitu.com/tag/baoru/'}
        content = requests.get(url, headers=headers)
        return content


# Instantiate the crawler
gongzufang = Gongzufang()
# Passing the entry URL to html() (or to all_url() for a single page) starts the crawler
gongzufang.html('http://anju.szhome.com/gzfpm')
gongzufang.all_url('http://anju.szhome.com/gzfpm')
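The save_data method above is left empty in the original. One possible implementation, sketched here under the assumption that all_url is changed to collect the pipe-separated row strings it currently prints and pass them along (the function name save_rows and the CSV filename are illustrative, not from the original):

import csv

# Append pipe-separated row strings (as produced by all_url) to a CSV file.
def save_rows(rows, filename='gongzufang.csv'):
    with open(filename, 'a', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow(row.split('|'))

# Example usage with a made-up row: save_rows(['col1|col2|col3'])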

The results are as follows: each table row from the waiting list is printed as one pipe-separated line.

