Python crawls Beijing Rental Information

Source: Internet
Author: User
Tags: xpath

Rental Assistant

I found that the filtering options on the official website could not meet my needs, so I crawled the relevant sites and built what you see here.

Effect preview: <a href="https://virzc.com/2018/05/17/beijingrent/#more" target="_blank">Online preview</a>

### Detailed analysis
One. Crawl the subway route and station names for the given start and destination stations.

1. Query the metro route search on 8684.cn:

  pattern = 'http://bjdt.8684.cn/so.php?k=p2p&q={}&q1={}'
    • q is the start station, q1 is the destination station; the request returns an HTML page.
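A minimal sketch of this query step, assuming the requests library is used; the station names passed in at the end are only illustrative:

    import requests
    from urllib.parse import quote

    PATTERN = 'http://bjdt.8684.cn/so.php?k=p2p&q={}&q1={}'

    def fetch_route_page(start, end):
        # q is the start station, q1 the destination; names are URL-encoded
        url = PATTERN.format(quote(start), quote(end))
        resp = requests.get(url, timeout=10)
        resp.encoding = 'utf-8'
        return resp.text  # HTML page containing the transfer schemes

    body = fetch_route_page('西二旗', '国贸')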

2. Use Scrapy's Selector for XPath parsing to extract the relevant HTML tags and values.

    • For example, an XPath expression that extracts the list of route schemes:

        routes = Selector(text=body).xpath(
            "//div[@class='iContainer clear']/div[@class='iMain']"
            "/div[@class='transferMainShowWrap']/ul[@class='tms-mn tms-project']/li"
        ).extract()

3. After extracting the list of route schemes, extract the stations from each element in the list: pull out the <a> tags, but exclude those that carry a class attribute (see the sketch below).
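A sketch of this step, assuming `routes` is the list of route `<li>` elements extracted in step 2; the `not(@class)` test is what excludes the class-carrying tags:

    from scrapy.selector import Selector

    def extract_stations(route_html):
        # keep only <a> tags that carry no class attribute: these hold the station names
        return Selector(text=route_html).xpath('//a[not(@class)]/text()').extract()

    stations_per_route = [extract_stations(r) for r in routes]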

4. Put all the stations into a single list and deduplicate the list elements; remember to record each route's description and distance, and define a related object to hold them (see the sketch below).
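One possible shape for that object (the class name and fields are hypothetical), together with an order-preserving deduplication of the stations:

    class RoutePlan:
        """Holds one transfer scheme: its description, distance and stations."""
        def __init__(self, description, distance, stations):
            self.description = description  # e.g. the transfer summary text
            self.distance = distance        # total distance of the scheme
            self.stations = stations        # ordered list of station names

    def unique_stations(plans):
        # merge every plan's stations into one list, dropping duplicates
        seen, merged = set(), []
        for plan in plans:
            for name in plan.stations:
                if name not in seen:
                    seen.add(name)
                    merged.append(name)
        return merged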

Two. Search for rental listings by station

1. Crawl the Ziroom website (ziroom.com).

pattern = 'http://www.ziroom.com/z/nl/z2.html?qwd={}'
    • qwd may need to be URL-encoded.
    • The request returns an HTML page that needs to be analyzed: first extract the total number of pages, then request the remaining pages based on that total. The remaining pages use the URL above with an added q parameter, the page index, so each page has its own link (see the sketch below).
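A sketch of the paging logic under these assumptions: requests and Scrapy's Selector are used, and the XPath/regex that reads the total page count is a placeholder to be adjusted to the real page structure:

    import requests
    from urllib.parse import quote
    from scrapy.selector import Selector

    PATTERN = 'http://www.ziroom.com/z/nl/z2.html?qwd={}'

    def fetch_all_pages(station):
        qwd = quote(station)                     # qwd must be URL-encoded
        first = requests.get(PATTERN.format(qwd), timeout=10).text
        # where the total page count lives is an assumption; adjust the XPath/regex
        found = Selector(text=first).xpath('//div[@id="page"]//text()').re(r'\d+')
        total = int(found[-1]) if found else 1
        pages = [first]
        for p in range(2, total + 1):
            url = PATTERN.format(qwd) + '&q={}'.format(p)  # q: page index, per the text above
            pages.append(requests.get(url, timeout=10).text)
        return pages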

2. Extract the information for individual items on a single page.

    For example, extracting the list of items on one page:

        ls = Selector(text=body).xpath("//ul[@id='houseList']/li").extract()

    • Process each element of the list to extract the information of interest (see the sketch below).
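A per-item extraction sketch; the class names inside the XPath expressions are assumptions about the page markup, not confirmed selectors:

    from scrapy.selector import Selector

    def parse_house(li_html):
        sel = Selector(text=li_html)
        return {
            # field selectors below are assumed; tune them against the real markup
            'title': sel.xpath('//h3/a/text()').extract_first(),
            'link':  sel.xpath('//h3/a/@href').extract_first(),
            'price': sel.xpath('//p[@class="price"]/text()').re_first(r'\d+'),
            'size':  sel.xpath('//div[@class="detail"]//span/text()').extract_first(),
        }

    houses = [parse_house(li) for li in ls]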

3. Process, package, and return the results.

    Filter the extracted information, for example by price or by size; sorting is not done here but is handled by the front-end framework, since plenty of processing already happens in the backend (a filter sketch follows).
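A minimal filter sketch over the dicts built above; the price and size bounds are illustrative defaults:

    import re

    def filter_houses(houses, max_price=4000, min_size=10):
        kept = []
        for h in houses:
            m_price = re.search(r'\d+(\.\d+)?', h.get('price') or '')
            m_size = re.search(r'\d+(\.\d+)?', h.get('size') or '')
            price = float(m_price.group()) if m_price else 0.0
            size = float(m_size.group()) if m_size else 0.0
            if price <= max_price and size >= min_size:
                kept.append(h)
        return kept  # sorting is left to the front-end framework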

Three. Connect to a WeChat Official Account

The tool can be connected to a WeChat Official Account to broaden its reach. For the specific Official Account platform integration, refer to <a href="https://github.com/zc1024/wxplatform/blob/master/weixin.py" target="_blank">this open-source project on GitHub</a>.
