How to install the web crawler tool Scrapy on Ubuntu 14.04 LTS


Scrapy is an open-source tool for extracting data from websites. The Scrapy framework is written in Python, which makes crawling fast, simple, and extensible. For this tutorial we created a virtual machine (VM) in VirtualBox and installed Ubuntu 14.04 LTS on it.

Install Scrapy

Scrapy depends on Python, the Python development libraries, and pip. Python comes pre-installed on Ubuntu, so you only need to install pip and the Python development libraries before installing Scrapy.

pip is a replacement for easy_install and is used to install and manage Python packages. The installation of the pip package is shown in Figure 1.

  sudo apt-get install python-pip

Figure 1: pip installation

We must also install the Python development libraries with the following command. If this package is missing, installing the Scrapy framework will fail with an error about the Python.h header file (e.g. "fatal error: Python.h: No such file or directory").

  sudo apt-get install python-dev

Figure 2: Installing the Python development libraries

The Scrapy framework can be installed either from a deb package or from source. In Figure 3 we install it with pip (the Python package manager).

  sudo pip install scrapy

Figure 3: Scrapy installation

The installation of Scrapy takes some time; Figure 4 shows it completing successfully.

Figure 4: Successful installation of the Scrapy framework
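Before moving on, you may want to confirm that Scrapy is importable. Here is a minimal sanity check, not part of the original tutorial, written for the Python 2 environment this guide targets (the scrapy package exposes a __version__ attribute):

  # sanity_check.py: verify that Scrapy was installed correctly
  import scrapy

  # print the installed Scrapy version
  print scrapy.__version__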

Basic data extraction tutorial using the Scrapy framework

We will use Scrapy to extract the names of (card) stores from fatwallet.com. First, use the following command to create a Scrapy project named "store_name", as shown in Figure 5.

  $ sudo scrapy startproject store_name

Figure 5: Creating a new Scrapy project

The preceding command creates a "store_name" directory in the current path. See Figure 6.

  $ sudo ls -lR store_name

Figure 6: Contents of the store_name project
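For reference, the skeleton generated by scrapy startproject typically looks like the following (standard Scrapy project layout; details may vary slightly between versions):

  store_name/
      scrapy.cfg
      store_name/
          __init__.py
          items.py
          pipelines.py
          settings.py
          spiders/
              __init__.py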

A summary of each file/folder is as follows:

  • scrapy.cfg is the project configuration file.
  • store_name/ is another folder inside the project directory; it contains the project's Python code.
  • store_name/items.py defines the items that the spider will crawl.
  • store_name/pipelines.py is the pipeline file.
  • store_name/settings.py is the project's settings file.
  • store_name/spiders/ contains the spiders used for crawling.

Since we want to extract the store names from fatwallet.com, we modify the following file (LCTT translator's note: the original does not say which file; it should be items.py).

  import scrapy

  class StoreNameItem(scrapy.Item):
      name = scrapy.Field()  # holds the name of the card store
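Scrapy items behave like Python dictionaries. As a brief, purely illustrative sketch of how such an item would be populated (hypothetical values; the tutorial's spider below actually writes straight to a file instead):

  # illustrative only: populating a StoreNameItem
  from store_name.items import StoreNameItem  # standard project import path

  item = StoreNameItem()
  item['name'] = 'Example Store'  # assign a value to the declared field
  print item['name']              # items support dict-style access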

Then we will write a new spider in the project's store_name/spiders/ folder. A spider is a Python class with the following attributes (a minimal skeleton is sketched after this list):

  1. The spider's name (name)
  2. The start URLs for crawling (start_urls)
  3. A parse method that extracts the desired data from the response, here using regular expressions; the parse method is the most important part of a crawler.
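A minimal sketch of those three pieces, using the same old BaseSpider API as the tutorial's code (the class name and URL are placeholders):

  from scrapy.spider import BaseSpider

  class MinimalSpider(BaseSpider):
      name = "minimal"                      # 1. spider name
      start_urls = ["http://example.com/"]  # 2. start URLs

      def parse(self, response):
          # 3. parse method: called once per downloaded response
          pass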

Now we create a crawler "StoreName.py" under the store_name/spiders/ directory and add the following code to extract the store names from fatwallet.com. The crawler's output is written to the file StoreName.txt, as shown in Figure 7.

  from scrapy.selector import Selector
  from scrapy.spider import BaseSpider
  from scrapy.http import Request
  from scrapy.http import FormRequest
  import re

  class StoreNameItem(BaseSpider):
      name = "storename"
      allowed_domains = ["fatwallet.com"]
      start_urls = ["http://fatwallet.com/cash-back-shopping/"]

      def parse(self, response):
          output = open('StoreName.txt', 'w')
          resp = Selector(response)
          # select every table-row variant that holds a store entry
          tags = resp.xpath('//tr[@class="storeListRow"] | '
                            '//tr[@class="storeListRow even"] | '
                            '//tr[@class="storeListRow even last"] | '
                            '//tr[@class="storeListRow last"]').extract()
          for i in tags:
              i = i.encode('utf-8', 'ignore').strip()
              store_name = ''
              # grab the text between the storeListStoreName tag and the next '<'
              if re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S):
                  store_name = re.search(r"class=\"storeListStoreName\">.*?<", i, re.I | re.S).group()
                  store_name = re.search(r">.*?<", store_name, re.I | re.S).group()
                  store_name = re.sub(r'>', "", re.sub(r'<', "", store_name))
                  store_name = re.sub(r'&amp;', "&", store_name)
                  # print store_name
                  output.write(store_name + "\n")
          output.close()
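To run the spider, change into the project's root directory and invoke Scrapy's standard crawl command with the spider's name attribute, i.e. "scrapy crawl storename" (prefix it with sudo if your setup requires it, as with the earlier commands). The extracted store names then appear in StoreName.txt.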

Figure 7: Crawler output

Note: this tutorial is intended only as an introduction to the Scrapy framework.
