Environment: Win10, Anaconda3 (Python 3.5)
Target website: Lianjia (链家) Shanghai rental listings
Method One: use requests to fetch the page HTML, extract the data with regular expressions, and save the results to a CSV file.
Code Address: Code
The crawled data is shown below:
From left to right: house link, house description, layout, size, location, district, floor, transit information, viewing time, rent (per month), listing date, and how many people have viewed the house so far.
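A minimal sketch of Method One, assuming a hypothetical page-URL pattern and a regex invented for illustration (the real expression has to be read off Lianjia's actual HTML):

    import csv
    import re
    import requests

    def crawl_page(page):
        # Assumed URL pattern; verify against the live site.
        url = 'https://sh.lianjia.com/zufang/pg{}/'.format(page)
        headers = {'User-Agent': 'Mozilla/5.0'}  # bare requests are often blocked
        html = requests.get(url, headers=headers).text
        # Hypothetical regex: grab each listing's link and title from its <h2>.
        pattern = re.compile(r'<h2><a href="(.*?)"[^>]*title="(.*?)"', re.S)
        return pattern.findall(html)

    with open('lianjia_zufang.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['houseurl', 'title'])  # header row
        for page in range(1, 101):  # the site serves at most 100 pages
            writer.writerows(crawl_page(page))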
Method Two: use requests to fetch the page, parse the data with BeautifulSoup, and store the results in MongoDB.
The main fields crawled are: house link, house description, community (xiaoqu), layout (huxing), size (mianji), district, sub-district, rent, transit information, and how many people have viewed the listing.
Lianjia only exposes 100 pages of listings, so we crawl just those 100 pages.
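The page loop itself is simple; a sketch, again assuming a hypothetical pg{N} pagination scheme:

    import requests

    def get_one_page(page):
        # Assumed pagination scheme; only pages 1-100 are served.
        url = 'https://sh.lianjia.com/zufang/pg{}/'.format(page)
        resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
        resp.raise_for_status()
        return resp.text

    for page in range(1, 101):
        html = get_one_page(page)
        # ... parse and store as described below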
Inspecting the page elements shows that each <li> tag is one listing.
Within each <li>, all of the house information sits inside the element with class="info-panel", so that block is the data we need to crawl.
Parse the data with BeautifulSoup:

    soup = BeautifulSoup(html, 'lxml')
    for item in soup.select('.info-panel'):
        # pull the link and title out of each listing's <h2> tag
        houseurl = item.find('h2').a['href']
        title = item.find('h2').a['title']
        ...
The extracted data is then stored in the database.
First, yield a dict for each listing:
    yield {
        '_id': id,
        'houseurl': houseurl,
        'housedescription': title,
        'xiaoqu': xiaoqu,
        'huxing': huxing,
        'mianji': mianji,
        'area': area,
        'sub_area': sub_area,
        'traffic': subway,
        'price': price,
        'data': data,
        'watchedpersons': watched
    }
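A fuller sketch of how the parsing and the yield fit together, written as a parse_one_page generator that yields (item, index) pairs so the caller can keep a running _id (the loop that consumes it appears below). All selectors beyond the <h2> link and title are hypothetical placeholders for the real class names:

    from bs4 import BeautifulSoup

    def parse_one_page(html, index):
        soup = BeautifulSoup(html, 'lxml')
        for item in soup.select('.info-panel'):
            houseurl = item.find('h2').a['href']
            title = item.find('h2').a['title']
            # Hypothetical selectors for the remaining fields; adjust to the page.
            xiaoqu = item.select_one('.region').get_text(strip=True)
            huxing = item.select_one('.zone').get_text(strip=True)
            mianji = item.select_one('.meters').get_text(strip=True)
            index += 1
            yield {
                '_id': index,
                'houseurl': houseurl,
                'housedescription': title,
                'xiaoqu': xiaoqu,
                'huxing': huxing,
                'mianji': mianji,
            }, index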
Initialize MongoDB (to configure MongoDB on Windows, see: Win10 MongoDB configuration and the visualization tool Robo 3T):
    import pymongo

    client = pymongo.MongoClient('mongodb://localhost:27017')
    db_name = 'lianjia_zufang_shanghai'
    db = client[db_name]
    collection_set01 = db['set01']
Then store the data in the database item by item:

    for item, index in parse_one_page(html, index):
        collection_set01.save(item)
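Note that Collection.save() is deprecated in PyMongo 3 and removed in PyMongo 4; on current versions an equivalent upsert would be:

    # replace the document with the matching _id, inserting it if absent
    collection_set01.replace_one({'_id': item['_id']}, item, upsert=True)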
The data saved to the database is as follows:
Full code: Code
In short, you write your own model for each record, dump all the fields into it in one go, and then save it.
This is just a small crawler program, so the code is written simply; it only needs to run.