Python Learning: Crawler Learning [Scrapy Framework]

Source: Internet
Author: User

Scrapy

Scrapy is a framework that helps us create and run crawler projects. It handles downloading and parsing pages, supports cookies, and lets us customize many other features.

Scrapy is an application framework written to crawl websites and extract structured data. It can be used in a range of programs for data mining, information processing, or storing historical data. It was originally designed for page scraping (more precisely, web crawling), but it can also be used to fetch data returned by APIs (for example, Amazon Associates Web Services) or as a general-purpose web crawler. In practice, Scrapy is widely applied to data mining, monitoring, and automated testing.

More references: http://www.cnblogs.com/wupeiqi/articles/6229292.html

Scrapy Framework Introduction and Installation

Linux:

    pip3 install scrapy

Windows:

    1. pip3 install wheel
       1-1 Twisted
           a. From http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted, download Twisted-17.1.0-cp35-cp35m-win_amd64.whl
           b. Enter the directory where the file is located
           c. pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
    2. pip3 install scrapy
    3. Scrapy on Windows also relies on pywin32: https://sourceforge.net/projects/pywin32/files/

Create a Scrapy project:

scrapy startproject scy

scrapy genspider baidu baidu.com

In the generated baidu.py, printing response.text shows the page content.

scrapy crawl baidu

scrapy crawl baidu --nolog    # do not print the log

Modify settings.py so the spider does not obey the site's robots.txt file (the ROBOTSTXT_OBEY setting).
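The change above is a one-line edit in the project's settings.py; the option name is Scrapy's standard setting:

```python
# settings.py -- tell Scrapy not to fetch or obey robots.txt
ROBOTSTXT_OBEY = False
```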

Note: to view the other available spider templates:

scrapy genspider --list

Introduction to Project structure and crawler applications

File Description:

· scrapy.cfg: the project's master configuration information. (The real crawler-related configuration lives in settings.py.)

· items.py: data-storage templates for structured data, similar to Django's models

· pipelines.py: data-processing behavior, such as general persistence of structured data

· settings.py: the configuration file, e.g. recursion depth, concurrency, download delay, etc.

· spiders: the crawler directory, where crawler files and rules are written
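As a sketch, the settings.py options described above map to standard Scrapy setting names; the values here are only illustrative:

```python
# settings.py -- illustrative values for the options described above
DEPTH_LIMIT = 3            # number of recursion layers to follow
CONCURRENT_REQUESTS = 16   # number of concurrent requests
DOWNLOAD_DELAY = 0.5       # delayed download: seconds to wait between requests
```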

Example spider: crawling xiaohuar.com

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request


class XiaoHuarSpider(scrapy.Spider):
    name = "xiaohuar"
    allowed_domains = ["xiaohuar.com"]
    start_urls = ['http://www.xiaohuar.com/list-1-0.html']

    visited_set = set()

    def parse(self, response):
        self.visited_set.add(response.url)

        # 1. Crawl everything on the current page:
        #    get the divs whose class is "item masonry_brick"
        hxs = HtmlXPathSelector(response)
        item_list = hxs.select('//div[@class="item masonry_brick"]')
        for item in item_list:
            v = item.select('.//span[@class="price"]/text()').extract_first()
            print(v)

        # 2. Get the http://www.xiaohuar.com/list-1-\d+.html links on the current page
        # page_list = hxs.select('//a[@href="http://www.xiaohuar.com/list-1-1.html"]')
        page_list = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href').extract()
        for url in page_list:
            if url in self.visited_set:
                pass
            else:
                obj = Request(url=url, method='GET', callback=self.parse)
                yield obj
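The visited-set de-duplication used by the spider can be sketched in plain Python. The helper name schedule_new_urls is hypothetical, and note one simplification: the sketch records a URL when it is scheduled, whereas the spider above records response.url only once parse() runs.

```python
def schedule_new_urls(page_urls, visited):
    """Yield only the URLs not seen before, recording each one as visited."""
    for url in page_urls:
        if url in visited:
            continue  # already crawled: skip, mirroring the spider's `pass`
        visited.add(url)
        yield url

# The start URL has already been crawled; only the second page is new.
visited = {"http://example.com/list-1-0.html"}
new = list(schedule_new_urls(
    ["http://example.com/list-1-0.html", "http://example.com/list-1-1.html"],
    visited,
))
print(new)  # only the unseen URL is scheduled
```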

View-source: http://www.521609.com/daxuexiaohua/

A small contrast between the Django and Scrapy frameworks

########## Django ##########
django-admin startproject mysite    # create a Django project
cd mysite
python3 manage.py startapp app01
python3 manage.py startapp app02

########## Scrapy ##########
scrapy startproject scy             # create a Scrapy project
cd scy
scrapy genspider chouti chouti.com
scrapy crawl name --nolog
