Python 3 crawler: crawling China Book Network's group-buy site (Taoshutuan), a learning record


I'm a complete beginner just starting to learn Python web crawling; this post is only a record of my own learning process, kept to make review easier.

URL to crawl: http://tuan.bookschina.com/

Content to crawl: book title, book price, and the link to each book's preview image

Python packages used in this post: requests, BeautifulSoup, json, csv

Opening the China Book Network group-buy page, I found that the site's information is loaded dynamically.

To start, let's just try to grab the first page of book information, without worrying about loading further pages yet.

The browser used for this crawler is Chrome

So we open the browser's developer tools (F12), where we can see the requests the page makes as it loads.

To simulate a browser visit, we need to look at the request header information.

The corresponding code:

header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
    'Host': 'tuan.bookschina.com',
    'Referer': 'http://tuan.bookschina.com/',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9'
}

The next thing to do is analyze the site's DOM to see which tags hold the information we need.

After an exhaustive search, we find that the information we need sits in the <li> children of <ul id="taolist" ...>.

So we use BeautifulSoup's parsing functions to extract the information we need from each <li>.

The corresponding code:

url = 'http://tuan.bookschina.com/'
response = requests.get(url, headers=header)    # imitate a browser request
response.encoding = 'utf-8'
soup = BeautifulSoup(response.text, 'html.parser')
for item in soup.select('div.taoListInner ul li'):
    print(item.select('h2')[0].text)            # select() returns a list
    print(item.select('.salePrice')[0].text)
    print(item.select('img')[0].get('src'))     # get() reads an attribute inside the tag

First we call requests' get() method to fetch the response; parsing it with BeautifulSoup shows that the <div> with class taoListInner wraps the <ul> whose <li> items hold what we want.

After reading BeautifulSoup's documentation and comparing find_all() with select(), I decided to use select() to grab the matching tags: the title from the <h2> tag, the price from the salePrice class, and the preview image's src link from the <img> tag. With that, we can print the book information shown on the first page.
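
For comparison, a rough find_all() equivalent of those select() calls might look like this (just a sketch, assuming the same taoListInner and salePrice class names as above):

# a find_all() version of the same extraction (sketch)
container = soup.find('div', class_='taoListInner')
for item in container.find_all('li'):
    print(item.find('h2').text)                 # first <h2> inside the <li>
    print(item.find(class_='salePrice').text)   # first tag with class salePrice
    print(item.find('img').get('src'))          # src attribute of the <img>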

But then a problem appears: what if we want the book information on later pages? BeautifulSoup's select() can only parse the static DOM.

So we suspect that the subsequent book data is loaded via Ajax or JS.

Under the XHR tab in developer tools, we find that every time we scroll down and more book information is loaded, a new GroupList?... request appears in the list.

Open it up.

A pleasant surprise: the Preview pane shows that it encapsulates exactly the data we need, stored as JSON.

So to fetch this JSON data, we need its request URL.

The current URL is: http://tuan.bookschina.com/Home/GroupList?Type=0&Category=0&Price=0&Order=11&Page=2&tyjson=true

We notice a pattern: each time new book information is loaded, the Page= value in the URL increments.

So the problem is solved: we just iterate over these URLs, fetch the JSON they return, and parse it to get all the data we want.
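
For instance, a minimal sketch of that pattern (only the Page parameter changes; the other query parameters stay fixed):

# build the GroupList URL for each page; only Page changes
base = ('http://tuan.bookschina.com/Home/GroupList'
        '?Type=0&Category=0&Price=0&Order=11&Page={}&tyjson=true')
for page in range(1, 4):    # e.g. the first three pages
    print(base.format(page))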

This also shows that many dynamically loaded websites package their data as JSON responses, which gives our crawler a convenient shortcut.

url = 'http://tuan.bookschina.com/Home/GroupList?Type=0&Category=0&Price=0&Order=11&Page=2&tyjson=true'
response = requests.get(url)
result = json.loads(response.text)
bookinfo = {}
for data in result['data']:
    bookinfo['bookName'] = data['book_name']
    bookinfo['price'] = data['group_price']
    bookinfo['iconLink'] = data['group_image']
print(url)

Here the loads() method converts the returned JSON text into a Python dictionary, which makes the data easy to access.
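
As a minimal illustration of that step (the sample string below only mimics the shape of the real response; the actual payload carries more fields):

import json

# a toy payload shaped like the GroupList response (assumed shape)
sample = '{"data": [{"book_name": "Example", "group_price": "9.9", "group_image": "http://example.com/cover.jpg"}]}'
result = json.loads(sample)     # JSON text -> Python dict
for data in result['data']:
    print(data['book_name'], data['group_price'], data['group_image'])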

After getting the data, we save it to disk as a CSV file (which Excel can open) for further data analysis.

So the complete code for this little crawler experiment is as follows:

import requests
from bs4 import BeautifulSoup
import json
import csv


def parse_one_page():
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
        'Host': 'tuan.bookschina.com',
        'Referer': 'http://tuan.bookschina.com/',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9'
    }
    url = 'http://tuan.bookschina.com/'
    response = requests.get(url, headers=header)    # imitate a browser request
    response.encoding = 'utf-8'
    soup = BeautifulSoup(response.text, 'html.parser')
    for item in soup.select('div.taoListInner ul li'):
        print(item.select('h2')[0].text)            # select() returns a list
        print(item.select('.salePrice')[0].text)
        print(item.select('img')[0].get('src'))     # get() reads an attribute inside the tag


def dynamic_crawl_data(page, headers, filename):
    for i in range(page):
        url = ('http://tuan.bookschina.com/Home/GroupList?Type=0&Category=0'
               '&Price=0&Order=11&Page=' + str(i) + '&tyjson=true')
        response = requests.get(url)
        result = json.loads(response.text)
        bookinfo = {}
        for data in result['data']:
            bookinfo['bookName'] = data['book_name']
            bookinfo['price'] = data['group_price']
            bookinfo['iconLink'] = data['group_image']
            write_csv_rows(filename, headers, bookinfo)
        print(url)


def write_csv_headers(path, headers):
    with open(path, 'a', encoding='gb18030', newline='') as f:
        f_csv = csv.DictWriter(f, headers)
        f_csv.writeheader()


def write_csv_rows(path, headers, rows):
    with open(path, 'a', encoding='gb18030', newline='') as f:
        f_csv = csv.DictWriter(f, headers)
        # if rows is a single dict, write one row; otherwise write many
        if isinstance(rows, dict):
            f_csv.writerow(rows)
        else:
            f_csv.writerows(rows)


def main(page):
    # parse_one_page()  # tip: test of the BeautifulSoup approach
    csv_filename = 'bookInfo.csv'
    headers = ['bookName', 'price', 'iconLink']
    write_csv_headers(csv_filename, headers)
    dynamic_crawl_data(page, headers, csv_filename)


if __name__ == '__main__':
    main(10)  # number of pages to crawl (example value)

