1. Crawler classification: Distributed and standalone
The main distributed framework is Apache Nutch, a Java implementation that relies on Hadoop to run; it has a steep learning curve and is generally used only for search-engine development.
Java standalone frameworks include WebMagic, WebCollector, and crawler4j.
Python standalone frameworks include Scrapy and PySpider.
2. In the official tutorial, the author notes that "WebMagic's design refers to the industry's best crawler, Scrapy," which suggests that mastering Scrapy is one of the most important skills for a crawler engineer.
3. WebMagic's code is divided into two parts: webmagic-core and webmagic-extension.
4. WebMagic is composed of four components (Downloader, PageProcessor, Scheduler, Pipeline). The Spider is the core of the internal flow and holds the four components as its properties.
Spider is also the entry point of a WebMagic run: it encapsulates the crawler's creation, start, stop, multithreading, and other functions.
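The interaction between the four components can be sketched with a minimal, library-free imitation. Note this is a hypothetical illustration of the architecture only: the interface names mirror WebMagic's concepts, but the signatures are simplified and are NOT WebMagic's real API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the four-component architecture, not WebMagic's real API.
interface Downloader { String download(String url); }                 // fetches raw HTML
interface PageProcessor { String process(String url, String html); }  // extracts data
interface Pipeline { void save(String result); }                      // persists results
interface Scheduler { void push(String url); String poll(); }         // manages URL queue

// A simple in-memory FIFO Scheduler.
class FifoScheduler implements Scheduler {
    private final Queue<String> queue = new ArrayDeque<>();
    public void push(String url) { queue.add(url); }
    public String poll() { return queue.poll(); }  // null when the queue is empty
}

// The Spider drives the loop: Scheduler -> Downloader -> PageProcessor -> Pipeline.
public class MiniSpider {
    private final Downloader downloader;
    private final PageProcessor processor;
    private final Scheduler scheduler;
    private final Pipeline pipeline;

    public MiniSpider(Downloader d, PageProcessor p, Scheduler s, Pipeline pl) {
        this.downloader = d; this.processor = p; this.scheduler = s; this.pipeline = pl;
    }

    public void run() {
        String url;
        while ((url = scheduler.poll()) != null) {
            String html = downloader.download(url);        // Downloader fetches the page
            String result = processor.process(url, html);  // PageProcessor extracts data
            pipeline.save(result);                         // Pipeline stores the result
        }
    }
}
```

The point of this shape is that each component is swappable in isolation, which is why WebMagic can offer a RedisScheduler or a JsonFilePipeline without touching the rest of the flow.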
5. public static void main(String[] args) {
       Spider.create(new GithubRepoPageProcessor())
               // start crawling from https://github.com/code4craft
               .addUrl("https://github.com/code4craft")
               // set the Scheduler, using Redis to manage the URL queue
               .setScheduler(new RedisScheduler("localhost"))
               // set a Pipeline that saves results as JSON files
               .addPipeline(new JsonFilePipeline("d:\\data\\webmagic"))
               // run 5 threads concurrently
               .thread(5)
               // start the crawler
               .run();
   }
6. webmagic-selenium supports crawling dynamic web pages; webmagic-saxon supports XPath and XSLT parsing.
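As an illustration of what XPath extraction looks like, here is a sketch using only the JDK's built-in javax.xml.xpath package. This is NOT the webmagic-saxon API; it just shows the same idea (querying a parsed document with an XPath expression) on well-formed XHTML.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Plain-JDK XPath evaluation over a well-formed XML/XHTML string.
public class XPathDemo {
    public static String evaluate(String xml, String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expression, doc);  // string value of the first match
    }
}
```

Real-world HTML is rarely well-formed XML, which is exactly why a crawler needs an extension like webmagic-saxon (or an HTML-tolerant parser) rather than the strict JDK parser shown here.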
Java Crawler Framework WebMagic Learning (I)