Scrapy is an open-source tool for extracting website data. The Scrapy framework is written in Python, which makes crawling fast, simple, and scalable. We created a virtual machine (VM) in VirtualBox and installed Ubuntu 14.04 LTS on it.
Install Scrapy
Scrapy depends on Python, the Python development libraries, and pip. Python comes pre-installed on Ubuntu, so we only need to install pip and the Python development libraries before installing Scrapy.
pip is a replacement for easy_install and is used to install and manage Python packages. Installation of the pip package is shown in Figure 1.
sudo apt-get install python-pip
Figure 1: pip installation
We must install the Python development libraries with the following command. If this package is missing, installation of the Scrapy framework fails with an error about the missing Python.h header file.
sudo apt-get install python-dev
Figure 2: Installing the Python development libraries
The Scrapy framework can be installed either from a deb package or from source. We installed it with pip (the Python package manager).
sudo pip install scrapy
Figure 3: Scrapy installation
Installing Scrapy takes a little while to complete.
Figure 4: Scrapy framework installed successfully
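After the install finishes, you can confirm from Python that the package is importable. This snippet is our addition, not part of the original tutorial, and assumes pip placed Scrapy on the default Python path:

```python
import importlib.util  # Python 3.4+; the tutorial's own code targets Python 2


def scrapy_available():
    """Return True if the scrapy package can be found on the import path."""
    return importlib.util.find_spec("scrapy") is not None


if __name__ == "__main__":
    # Prints True when the installation above succeeded.
    print("scrapy importable:", scrapy_available())
```

Alternatively, running `scrapy version` in a terminal serves the same purpose.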
Basic data extraction tutorial using the Scrapy framework
We will use Scrapy to extract store names (of card stores) from fatwallet.com. First, create a new Scrapy project named "store_name" with the following command.
$ sudo scrapy startproject store_name
Figure 5: Creating a new Scrapy project
The preceding command creates a "store_name" directory in the current path. The files and folders in the project's top-level directory are listed below.
$ sudo ls -lR store_name
Figure 6: store_name project contents
A summary of each file/folder is as follows:
- scrapy.cfg is the project configuration file.
- store_name/ is another folder inside the top-level directory; it contains the project's Python code.
- store_name/items.py defines the items that the spider will crawl.
- store_name/pipelines.py is the pipeline file.
- store_name/settings.py is the project's settings file.
- store_name/spiders/ contains the spiders used for crawling.
Since we want to extract the store names from fatwallet.com, we modify the file as follows (LCTT translator's note: the original does not say which file; the translator believes it should be items.py).
import scrapy

class StoreNameItem(scrapy.Item):
    name = scrapy.Field()  # will hold the name of the card store
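To see what the Item declaration buys you, here is a tiny dict-based stand-in of our own (not Scrapy's actual implementation) that mimics the core behavior: an Item acts like a dictionary, but only declared fields may be set.

```python
class Field(object):
    """Marker object standing in for scrapy.Field."""


class Item(dict):
    """Minimal stand-in for scrapy.Item: rejects undeclared fields."""
    fields = {}

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError("%r is not a declared field" % key)
        dict.__setitem__(self, key, value)


class StoreNameItem(Item):
    fields = {"name": Field()}


item = StoreNameItem()
item["name"] = "ExampleStore"  # allowed: "name" is declared
print(item["name"])            # -> ExampleStore
```

Setting an undeclared key (e.g. `item["price"] = 1`) raises KeyError, which is how Scrapy catches typos in field names early.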
Then we will write a new spider in the project's store_name/spiders/ folder. A spider is a Python class with the following attributes:
- the spider's name (name)
- the start URLs for crawling (start_urls)
- a parse method that extracts the desired data from the response, here using regular expressions; the parse method is the heart of the crawler.
We create a "store_name.py" spider under the store_name/spiders/ directory and add the following code to extract the store names from fatwallet.com. The crawler's output is written to a file (StoreName.txt), as shown in Figure 7.
from scrapy.selector import Selector
from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.http import FormRequest
import re

class StoreNameItem(BaseSpider):
    name = "storename"
    allowed_domains = ["fatwallet.com"]
    start_urls = ["http://fatwallet.com/cash-back-shopping/"]

    def parse(self, response):
        output = open('StoreName.txt', 'w')
        resp = Selector(response)
        tags = resp.xpath('//tr[@class="storeListRow"]|\
            //tr[@class="storeListRow even"]|\
            //tr[@class="storeListRow even last"]|\
            //tr[@class="storeListRow last"]').extract()
        for i in tags:
            i = i.encode('utf-8', 'ignore').strip()
            store_name = ''
            if re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S):
                store_name = re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S).group()
                store_name = re.search(r">.*?<", store_name, re.I | re.S).group()
                # strip the surrounding angle brackets left by the match
                store_name = re.sub(r'>', "", re.sub(r'<', "", store_name))
                # decode the HTML ampersand entity
                store_name = re.sub(r'&amp;', "&", store_name)
                # print store_name
                output.write(store_name + "\n")
        output.close()
Figure 7: crawler output
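Because a site's markup changes over time, it helps to test the regex chain offline before running the spider. The following self-contained sketch applies the same extraction steps to a made-up row; the sample HTML here is our illustration, and the real fatwallet.com markup may differ.

```python
import re

# Hypothetical row modeled on the classes the spider's XPath matches;
# the real site's markup may differ.
SAMPLE_ROW = ('<tr class="storeListRow even"><td>'
              '<a href="/coupons/example/" class="storeListStoreName">'
              'Example &amp; Co</a></td></tr>')


def extract_store_name(row):
    """Apply the same regex chain the spider uses to one row of HTML."""
    m = re.search(r'class="storeListStoreName">.*?<', row, re.I | re.S)
    if not m:
        return ''
    # keep only the text between the tag's ">" and the next "<"
    name = re.search(r'>.*?<', m.group(), re.I | re.S).group()
    name = re.sub(r'[<>]', '', name)           # strip the surrounding brackets
    return re.sub(r'&amp;', '&', name).strip()  # decode the ampersand entity


print(extract_store_name(SAMPLE_ROW))  # -> Example & Co
```

Running this against saved copies of real rows is a quick way to confirm the patterns still match after a site redesign.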
Note: this tutorial is intended only as an introduction to the Scrapy framework.
For more information about Ubuntu, see the Ubuntu special page: http://www.linuxidc.com/topicnews.aspx?tid=2
Permanent link to this article: http://www.linuxidc.com/Linux/2015-03/115306.htm