Python Crawler Primer: Crawling HD Wallpapers from the Weiyi Gallery with Scrapy


First, open the Weiyi Gallery (www.mmonly.cc) and click the HD Wallpapers category in the top navigation bar.

After entering the list, scroll down: the pages load through ordinary pagination rather than Ajax. Jumping to the end shows that this category has 292 pages in total.

Flip through a few pages and watch the URL: only the trailing number, which represents the page number, changes.
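Since only that number changes, all 292 list URLs can be generated up front. A minimal sketch of the list comprehension that the spider's start_urls use below (the list_41_{}.html pattern comes from the category URL observed here):

# build every list-page URL for the category (pages 1 through 292)
start_urls = ['http://www.mmonly.cc/gqbz/list_41_{}.html'.format(i)
              for i in range(1, 293)]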

Press F12 to open the developer tools and refresh. The links to the individual detail pages are already present in the raw HTML response, so they can be extracted directly.
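Before writing the spider, the selectors can be tried out interactively in scrapy shell. A quick sketch, assuming the class names .item.masonry_brick.masonry-brick and .ABox that the spider below relies on:

scrapy shell "http://www.mmonly.cc/gqbz/list_41_1.html"
>>> links = response.css('.item.masonry_brick.masonry-brick')
>>> links[0].css('.ABox a::attr(href)').extract_first()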

Open any picture to reach its detail page, press F12 again, and refresh. There are several pieces of information to extract: the total number of pages in the picture set, the title, and the download link for the original image. All of them can be found in the raw HTML response.
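The same shell check works on a detail page; these are the selectors that the parse_detail callback below uses (xxxxx stands for a picture set's numeric ID, as in the URL pattern shown next):

scrapy shell "http://www.mmonly.cc/gqbz/dmbz/xxxxx.html"
>>> response.css('.wrapper.clearfix.imgtitle h1::text').extract_first()   # title
>>> response.css('.big-pic a img::attr(src)').extract_first()             # original image URL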

There are two cases to handle: when the total page count is one, no paging is needed; when it is greater than one, every page of the set must be traversed. Flipping through a multi-page set shows how its URL changes:
http://www.mmonly.cc/gqbz/dmbz/xxxxx_i.html
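The per-page URLs can therefore be derived from the detail URL itself. A two-line sketch of the transformation the spider performs:

base = detail_url.split('.html')[0]    # http://www.mmonly.cc/gqbz/dmbz/xxxxx
url = base + '_{}.html'.format(i)      # http://www.mmonly.cc/gqbz/dmbz/xxxxx_i.html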

With the analysis done, the code can be written:

mmonly.py:

import scrapy
from scrapy.http import Request
from weiyiwang.items import WeiyiwangItem


class MmonlySpider(scrapy.Spider):
    name = 'mmonly'
    allowed_domains = ['mmonly.cc']
    # one start URL per list page of the category (292 pages in total)
    start_urls = ['http://www.mmonly.cc/gqbz/list_41_{}.html'.format(i)
                  for i in range(1, 293)]

    def parse(self, response):
        links = response.css('.item.masonry_brick.masonry-brick')
        for link in links:
            detail_url = link.css('.ABox a::attr(href)').extract_first()
            # total number of images in the set, shown as text like '共N张'
            pages = link.css('.items_likes::text').re_first('共(.*?)张')
            if int(pages) == 1:
                # single image: crawl the detail page directly
                yield Request(url=detail_url, callback=self.parse_detail)
            else:
                # multi-image set: one request per page, following the
                # xxxxx_i.html pattern described above
                for i in range(1, int(pages) + 1):
                    url = detail_url.split('.html')[0] + '_{}.html'.format(i)
                    yield Request(url=url, callback=self.parse_detail)

    def parse_detail(self, response):
        item = WeiyiwangItem()
        item['title'] = response.css('.wrapper.clearfix.imgtitle h1::text').extract_first()
        item['img_url'] = response.css('.big-pic a img::attr(src)').extract_first()
        yield item
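A minimal sketch of the WeiyiwangItem class the spider imports, with the two fields it fills in:

items.py:

import scrapy

class WeiyiwangItem(scrapy.Item):
    title = scrapy.Field()     # picture title
    img_url = scrapy.Field()   # URL of the original image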

Finally, the results are stored in MongoDB:

pipelines.py:

import pymongo


class MongoPipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # read the MongoDB connection settings from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert_one replaces the deprecated collection.insert
        self.db['weiyi'].insert_one(dict(item))
        return item
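For the pipeline to run, it has to be enabled in settings.py, and the two settings that from_crawler reads must be defined there. A minimal sketch; the MONGO_URI value and the weiyiwang database name are example values:

settings.py:

MONGO_URI = 'localhost'    # example value: point this at your MongoDB instance
MONGO_DB = 'weiyiwang'     # example database name

ITEM_PIPELINES = {
    'weiyiwang.pipelines.MongoPipeline': 300,
}

The spider can then be started with scrapy crawl mmonly.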
