Writing Python Crawler Data to MongoDB

Tags: mongodb, xpath

The previous two articles introduced the Python crawler and MongoDB; in this one I will store the crawled data in MongoDB. First, a word about the site we will crawl: Readfree. It is a very good site; you only need to sign in once a day to download three books for free. A site with a conscience. Below, I will crawl its daily recommended books.

Using the methods described in the previous articles, it is easy to locate the book name and the author in the source code of the web page.

Once we have found them, we copy the XPath expressions and use them to extract the data. The source code looks like this:

# coding=utf-8
import re
import sys

import requests
from lxml import etree
import pymongo

# Python 2 only: make str() handle non-ASCII text without UnicodeEncodeError.
reload(sys)
sys.setdefaultencoding('utf-8')


def getpages(url, total):
    # Build the list of page URLs by replacing the page number in the seed URL.
    nowpage = int(re.search(r'(\d+)', url, re.S).group(1))
    urls = []
    for i in range(nowpage, total + 1):
        link = re.sub(r'(\d+)', str(i), url)
        urls.append(link)
    return urls


def spider(url):
    # Fetch one listing page and extract the book names and authors via XPath.
    html = requests.get(url)
    selector = etree.HTML(html.text)
    book_name = selector.xpath('//*[@id="container"]/ul/li//div/div[2]/a/text()')
    book_author = selector.xpath('//*[@id="container"]/ul/li//div/div[2]/div/a/text()')
    saveinfo(book_name, book_author)


def saveinfo(book_name, book_author):
    # Write one document per book into the BookDB.books collection.
    connection = pymongo.MongoClient()
    bookdb = connection.BookDB
    booktable = bookdb.books
    for i in range(len(book_name)):
        books = {}
        books['name'] = str(book_name[i]).replace('\n', '')
        books['author'] = str(book_author[i]).replace('\n', '')
        booktable.insert_one(books)


if __name__ == '__main__':
    url = 'http://readfree.me/shuffle/?page=1'
    urls = getpages(url, 3)
    for each in urls:
        spider(each)

Note that when writing to the database, do not write the data to the database as one dictionary in a single insert, as I did at first: I found that only three records ended up in the database and the rest of the information was gone. So the script inserts the records one at a time instead.
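To make the difference concrete, here is a minimal sketch (assuming the same book_name/book_author lists and booktable collection as in the script above); if you do want a single bulk write, PyMongo also provides insert_many():

# One insert_one() call per book: each iteration stores a separate document.
for name, author in zip(book_name, book_author):
    booktable.insert_one({'name': name, 'author': author})

# Bulk alternative (PyMongo 3+): build the list of documents, write once.
docs = [{'name': n, 'author': a} for n, a in zip(book_name, book_author)]
if docs:
    booktable.insert_many(docs)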

Also, the default-encoding setup at the top of the script must not be omitted, otherwise the script may report encoding errors (Python really is prone to this kind of encoding error, embarrassingly so).
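For reference, this is the Python 2 idiom used at the top of the script; Python 3 does not have (or need) setdefaultencoding, since its strings are Unicode by default:

import sys

# Python 2 only: reload() restores the setdefaultencoding attribute that
# site.py removes at startup, so str() can handle non-ASCII text.
reload(sys)
sys.setdefaultencoding('utf-8')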

Some readers may have noticed that I convert each extracted value to a string and then remove characters with the replace() method. That is because the extracted book information had line breaks before and after it, which looked very untidy.
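For example (a small sketch with a made-up title; str.strip(), which removes only leading and trailing whitespace, would be an equally valid choice here):

raw = '\nA Hypothetical Book Title\n'   # what the XPath extraction returns
cleaned = str(raw).replace('\n', '')    # -> 'A Hypothetical Book Title'
also_cleaned = raw.strip()              # same result for this input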

A friendly reminder: don't forget to have your MongoDB server running while the program runs, and then go and check the results.
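A quick way to check from Python (this assumes mongod is running on localhost:27017, the default address that pymongo.MongoClient() connects to):

import pymongo

connection = pymongo.MongoClient()
booktable = connection.BookDB.books
print(booktable.count())    # total documents inserted (count() is the old API;
                            # newer PyMongo versions use count_documents({}))
for doc in booktable.find():
    print(doc)              # each stored book, with its 'name' and 'author'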

OK, that's it. If you find a mistake in the code, or room for improvement, please leave me a message. Thank you.
