Python Scrapy framework installation tutorial on Linux

Source: Internet
Author: User
Tags: python, scrapy


Scrapy is an open-source tool for extracting data from websites. The Scrapy framework is written in Python, which makes crawling fast, simple, and extensible. For this tutorial, we created a virtual machine (VM) in VirtualBox and installed Ubuntu 14.04 LTS on it.
Install Scrapy
Scrapy depends on Python, the Python development libraries, and pip. Python comes pre-installed on Ubuntu, so you only need to install pip and the Python development libraries before installing Scrapy.

pip is a replacement for easy_install and is used to install and manage Python packages. Install the pip package as shown in Figure 1.

  sudo apt-get install python-pip

Figure 1: pip installation

Next, install the Python development libraries with the following command. If this package is missing, installing the Scrapy framework will fail with an error about the missing Python.h header file.

  sudo apt-get install python-dev

Figure 2: Python development library installation

The Scrapy framework can be installed either from a deb package or from source. In Figure 3, we install it with pip (the Python package manager).

  sudo pip install scrapy 

Figure 3: Scrapy installation

Installing Scrapy takes some time; Figure 4 shows its successful completion.

Figure 4: Scrapy framework installed successfully
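
A quick way to confirm that the installation worked (not shown in the original figures; scrapy version is a standard Scrapy command) is to ask the framework for its version:

  $ scrapy version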
Using the Scrapy framework to extract data
Basic tutorial

We will use Scrapy to extract the names of stores (card stores) from fatwallet.com. First, create a new Scrapy project named "store_name" with the following command, as shown in Figure 5.

$ sudo scrapy startproject store_name

Figure 5: New Scrapy project

The preceding command creates a "store_name" directory in the current path. See Figure 6.

 $ sudo ls -lR store_name

Figure 6: store_name project contents

A summary of each file and folder is as follows (a sketch of the full layout appears after the list):

  • scrapy.cfg is the project configuration file.
  • store_name/ is a subdirectory of the project directory that contains the project's Python code.
  • store_name/items.py defines the items that the spider will extract.
  • store_name/pipelines.py is the pipeline file.
  • store_name/settings.py is the project settings file.
  • store_name/spiders/ contains the spiders used for crawling.
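
For reference, the layout created by scrapy startproject generally looks like the sketch below (reconstructed from the summary above, since the figure itself is not reproduced here):

  store_name/
  ├── scrapy.cfg
  └── store_name/
      ├── __init__.py
      ├── items.py
      ├── pipelines.py
      ├── settings.py
      └── spiders/
          └── __init__.py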

Since we want to extract store names from fatwallet.com, we modify the file as follows (LCTT note: the original does not say which file; the translator believes it should be items.py).

  import scrapy

  class StoreNameItem(scrapy.Item):
      name = scrapy.Field()   # the name of the card store

Then we will write a new spider in the project's store_name/spiders/ folder. A spider is a Python class with the following attributes:

  • a spider name (name)
  • a list of start URLs to crawl (start_urls)
  • a parse method that extracts the desired data from the response, here using XPath selectors and regular expressions; the parse method is the heart of the crawler, and you can prototype its selectors interactively, as shown below.
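
Before hard-coding XPath expressions in the parse method, it can help to try them in Scrapy's interactive shell (standard Scrapy tooling; the URL is the page this tutorial targets):

  $ scrapy shell "http://fatwallet.com/cash-back-shopping/"
  >>> from scrapy.selector import Selector
  >>> Selector(response).xpath('//tr[@class="storeListRow"]').extract()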

We created a crawler "storename.py" under the store_name/spiders/ directory and added the following code to it to extract the store names from fatwallet.com. The spider's output is written to a file (StoreName.txt), as shown in Figure 7.

  from scrapy.selector import Selector
  from scrapy.spider import BaseSpider
  import re

  class StoreNameItem(BaseSpider):
      name = "storename"
      allowed_domains = ["fatwallet.com"]
      start_urls = ["http://fatwallet.com/cash-back-shopping/"]

      def parse(self, response):
          output = open('StoreName.txt', 'w')
          resp = Selector(response)
          # Select every variant of the table row that holds a store entry.
          tags = resp.xpath('//tr[@class="storeListRow"] | '
                            '//tr[@class="storeListRow even"] | '
                            '//tr[@class="storeListRow even last"] | '
                            '//tr[@class="storeListRow last"]').extract()
          for i in tags:
              i = i.encode('utf-8', 'ignore').strip()
              store_name = ''
              if re.search(r'class="storeListStoreName">.*?<', i, re.I | re.S):
                  # Cut out the 'class="storeListStoreName">Name<' fragment ...
                  store_name = re.search(r'class="storeListStoreName">.*?<', i, re.I | re.S).group()
                  # ... keep only the text between '>' and '<' ...
                  store_name = re.search(r'>.*?<', store_name, re.I | re.S).group()
                  store_name = re.sub(r'[><]', '', store_name)
                  # ... and decode the HTML entity for '&'.
                  store_name = re.sub(r'&amp;', '&', store_name)
                  output.write(store_name + '\n')
          output.close()
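
Finally, run the spider from inside the project directory with scrapy crawl followed by the spider's name (standard Scrapy usage; StoreName.txt is written to the directory the command is run from):

  $ cd store_name
  $ sudo scrapy crawl storename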
