Python Scrapy Crawler Framework: Installation, Configuration, and Practice

Source: Internet
Author: User
Tags: python, scrapy

I have recently been researching the major vulnerability types in the Android app industry. WooYun is the best-known domestic vulnerability reporting platform, and summarizing its vulnerability data is instructive for subsequent testing and for analyzing vulnerability trends, so I wrote a crawler.

Rather than reinventing the wheel, I used Python's Scrapy framework.

I. Installation

On a 64-bit system, make sure that Python, Scrapy, and all of Scrapy's dependencies have the same bitness (32-bit vs. 64-bit); otherwise you will run into all kinds of confusing bugs.
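A quick way to confirm which bitness an interpreter was built with, using only the standard library (a minimal sketch):

```python
import struct
import platform

# Pointer size in bytes * 8 gives the interpreter's bitness (32 or 64).
bits = struct.calcsize("P") * 8
print("Python is %d-bit (%s)" % (bits, platform.architecture()[0]))
```

If this prints 64 but you installed 32-bit binary packages (or vice versa), the mismatch explains the import errors.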

    1. Install 32-bit Python 2.7

    2. Download and install pip (to make installing and managing dependency libraries easier)

      https://pypi.python.org/pypi/pip/7.1.2

      Download the source code and run: python setup.py install

    3. pip install scrapy

      Problem 1: error: 'xslt-config' is not recognized as an internal or external command, operable program or batch file

      Fix: download and install lxml-3.5.0b1.win32-py2.7.exe from

      https://pypi.python.org/pypi/lxml/3.5.0b1#downloads

    4. Run the demo code

      Problem 2: exceptions.ImportError: No module named win32api

      Fix: download and install pywin32 from

      http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/

With that, the framework is installed.
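Once everything is in place, a small import check can confirm that each dependency resolves (a sketch using only the standard library; the module names are the ones installed in the steps above, and win32api exists only on Windows):

```python
import importlib

def check_modules(names):
    """Map each module name to True if it can be imported, False otherwise."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

# scrapy and lxml come from the steps above; win32api is Windows-only.
print(check_modules(["scrapy", "lxml", "win32api"]))
```

Any False entry points at the step that still needs fixing.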

II. Writing the Crawler

Create a Scrapy project for crawling the WooYun vulnerability listings; the directory structure is as follows:
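The original listing did not survive, but `scrapy startproject wooyun` generates Scrapy's standard project layout, roughly:

```
wooyun/
    scrapy.cfg            # deployment configuration
    wooyun/
        __init__.py
        items.py          # item definitions (edited below)
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider.py     # the crawler itself
```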

Modify items.py (the container that holds the scraped data):

```python
# -*- coding: gb2312 -*-
from scrapy.item import Item, Field

class Website(Item):
    title = Field()  # vulnerability title
    url = Field()    # page URL
```

Write spider.py (as the name implies, the main functionality lives here):

```python
# -*- coding: gb2312 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from wooyun.items import Website
import sys

# Redirect stdout so printed output lands in a file.
sys.stdout = open('output.txt', 'wb')

class WooyunSpider(CrawlSpider):
    name = "wooyun"
    allowed_domains = ["wooyun.org"]
    start_urls = [
        "http://wooyun.org/bugs/",
    ]
    rules = (
        # Follow pagination links, e.g. http://wooyun.org/bugs/page/3
        Rule(SgmlLinkExtractor(allow=('bugs/page/([\w]+)',))),
        # Vulnerability detail pages are handed to parse_item
        Rule(SgmlLinkExtractor(allow=('bugs/wooyun-',)), callback='parse_item'),
    )

    def parse_item(self, response):
        sel = Selector(response)
        items = []
        item = Website()
        item['title'] = sel.xpath('/html/head/title/text()').extract()
        item['url'] = response.url
        items.append(item)
        return items
```
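The two `allow=` patterns decide which links are merely followed and which are handed to `parse_item`. That routing can be sanity-checked standalone with plain regexes (a sketch; the URLs are hypothetical examples, and the real matching is done by Scrapy's link extractor):

```python
import re

# The same allow= patterns used by the two rules above.
PAGE_RE = re.compile(r'bugs/page/([\w]+)')
BUG_RE = re.compile(r'bugs/wooyun-')

def route(url):
    """Return which rule (if any) a URL would match."""
    if BUG_RE.search(url):
        return "parse_item"   # vulnerability page -> callback
    if PAGE_RE.search(url):
        return "follow"       # pagination page -> just follow its links
    return "ignored"

print(route("http://wooyun.org/bugs/page/3"))            # follow
print(route("http://wooyun.org/bugs/wooyun-2010-01234")) # parse_item
print(route("http://wooyun.org/about"))                  # ignored
```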

This crawls the title and URL of every vulnerability. The functionality can be extended as needed; for example, the vulnerability type can be extracted as well:

```python
str1 = sel.xpath('//h3[@class="wybug_type"]/text()').extract()
# str1 -> "Vulnerability type: design defect/logic error"
```
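These XPath expressions can be tried outside Scrapy against a minimal page (hypothetical HTML; the standard library's ElementTree stands in for Scrapy's Selector, which supports a richer XPath dialect):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal page mimicking a WooYun bug report.
HTML = (
    "<html><head><title>WooYun-2015-0001 Example bug</title></head>"
    "<body><h3 class='wybug_type'>Vulnerability type: design defect/logic error</h3>"
    "</body></html>"
)

root = ET.fromstring(HTML)
title = root.find("./head/title").text               # like /html/head/title/text()
bug_type = root.find(".//h3[@class='wybug_type']").text  # like //h3[@class='wybug_type']/text()
print(title)     # WooYun-2015-0001 Example bug
print(bug_type)  # Vulnerability type: design defect/logic error
```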
