I recently looked into the major types of vulnerabilities in the Android app industry. Wooyun is the best-known domestic vulnerability reporting platform, and collecting its vulnerability data for later analysis of vulnerability trends is instructive, so I wrote a crawler.
Rather than reinvent the wheel, I used Python's Scrapy framework.
First, installation
When installing on a 64-bit system, make sure that Python, Scrapy, and Scrapy's dependencies all have the same bitness (all 32-bit or all 64-bit); otherwise you will run into all kinds of painful bugs.
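To see which bitness the interpreter you are actually running has, a quick standard-library check works on both Python 2.7 and Python 3:

```python
import struct

def interpreter_bits():
    # A pointer is 4 bytes on a 32-bit Python build and 8 bytes on a
    # 64-bit build, so the pointer size reveals the interpreter's bitness.
    return struct.calcsize('P') * 8

print(interpreter_bits())
```

Compiled dependencies such as lxml and pywin32 must match this value, not the bitness of the operating system.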
Install 32-bit Python 2.7
Download and install pip (it makes installing and managing dependent libraries easy):
https://pypi.python.org/pypi/pip/7.1.2
Download the source code, then install it with python setup.py install.
pip install scrapy
Problem 1: error: 'xslt-config' is not recognized as an internal or external command, operable program or batch file.
Fix: download and install lxml-3.5.0b1.win32-py2.7.exe from
https://pypi.python.org/pypi/lxml/3.5.0b1#downloads
Run the demo code.
Problem 2: exceptions.ImportError: No module named win32api
Fix: download and install pywin32 from
http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/
With that, the framework is installed.
Second, how to write the crawler
Create a Scrapy project for crawling Wooyun vulnerability reports; the directory structure is as follows:
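The original screenshot of the directory layout did not survive. As a sketch, a project generated with scrapy startproject wooyun (Scrapy 1.0-era layout) typically looks like this:

```
wooyun/
    scrapy.cfg            # deploy configuration
    wooyun/
        __init__.py
        items.py          # item container definitions
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider.py     # the crawler logic goes here
```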
Modify items.py (the container that holds the scraped data):

# -*- coding: gb2312 -*-
from scrapy.item import Item, Field

class Website(Item):
    title = Field()
    url = Field()
Write spider.py (as the name implies, the main logic is implemented here):
# -*- coding: gb2312 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from wooyun.items import Website
import sys

# Redirect stdout to a file so the crawl output ends up in output.txt
sys.stdout = open('output.txt', 'wb')

class WooyunSpider(CrawlSpider):
    name = "wooyun"
    allowed_domains = ["wooyun.org"]
    start_urls = [
        "http://wooyun.org/bugs/",
    ]
    rules = (
        # Follow pagination links such as http://wooyun.org/bugs/page/3
        Rule(SgmlLinkExtractor(allow=('bugs/page/([\w]+)',))),
        # Parse each vulnerability detail page with parse_item
        Rule(SgmlLinkExtractor(allow=('bugs/wooyun-',)), callback='parse_item'),
    )

    def parse_item(self, response):
        sel = Selector(response)
        items = []
        item = Website()
        item['title'] = sel.xpath('/html/head/title/text()').extract()
        item['url'] = response.url
        items.append(item)
        return items
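As a quick sanity check on the two crawl rules, the allow= patterns can be exercised with plain re outside of Scrapy (the URLs below are illustrative examples, not taken from the site):

```python
import re

# The two allow= patterns from the spider's rules.
PAGINATION = re.compile(r'bugs/page/([\w]+)')
DETAIL = re.compile(r'bugs/wooyun-')

def matching_rule(url):
    """Return which rule would fire for a URL: 'detail', 'page', or None.

    The detail pattern is checked first because a detail URL
    ('bugs/wooyun-...') never contains 'bugs/page/'.
    """
    if DETAIL.search(url):
        return 'detail'
    if PAGINATION.search(url):
        return 'page'
    return None

print(matching_rule('http://wooyun.org/bugs/page/3'))
print(matching_rule('http://wooyun.org/bugs/wooyun-2015-0100'))
```

Links that match neither pattern are followed only if some other rule applies, so a check like this helps confirm the regexes cover exactly the pages you intend to crawl.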
This crawls all of the vulnerability titles and their URLs. The functionality can be extended as requirements grow.
For example, to also grab the vulnerability type:

str1 = sel.xpath('//h3[@class="wybug_type"]/text()').extract()
# str1 now holds something like "Vulnerability type: design defect/logic error"
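If you want that string split into a label and a value before storing it, a small helper could look like the sketch below (the colon separator is an assumption based on the sample output above; the function name is hypothetical):

```python
def parse_bug_type(text):
    """Split 'Vulnerability type: design defect/logic error' into (label, value).

    Assumes the label and value are separated by the first colon,
    as in the sample text extracted from the wybug_type heading.
    """
    label, _, value = text.partition(':')
    return label.strip(), value.strip()

print(parse_bug_type('Vulnerability type: design defect/logic error'))
```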
Python Scrapy crawler framework: installation, configuration, and practice