An Efficient and Agile Java Crawler Framework: SeimiCrawler Example


SeimiCrawler is an agile, efficient Java crawler framework with support for distributed crawling. It aims to lower the barrier for newcomers who want to build a highly available, high-performance crawler system, and to improve the efficiency of developing such systems. In the SeimiCrawler world, most users only need to write the crawling business logic; Seimi takes care of the rest.

Design philosophy: SeimiCrawler was inspired by Python's crawler framework Scrapy, while incorporating the strengths of the Java language and Spring. It hopes to make it more convenient and common, especially for developers in China, to parse HTML with the more efficient XPath. For this reason SeimiCrawler's default HTML parser is JsoupXpath, and HTML data extraction is done with XPath by default (other parsers can of course be chosen for data processing).
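As a quick illustration of what this XPath-based extraction looks like, here is a minimal, self-contained sketch that uses JsoupXpath on its own, outside of any crawler. The import path and the JXDocument.create(...) factory assume a recent standalone JsoupXpath release (older releases exposed the class under a cn.wanghaomiao package), and the class name and sample HTML are invented for the example.

import org.seimicrawler.xpath.JXDocument;

import java.util.List;

public class XpathDemo {
    public static void main(String[] args) throws Exception {
        // A tiny invented HTML fragment to extract links from
        String html = "<html><body>"
                + "<a class='titlelnk' href='http://example.com/post/1'>First post</a>"
                + "<a class='titlelnk' href='http://example.com/post/2'>Second post</a>"
                + "</body></html>";

        // Parse the HTML and select the href attribute of every matching anchor
        JXDocument doc = JXDocument.create(html);
        List<Object> hrefs = doc.sel("//a[@class='titlelnk']/@href");
        for (Object href : hrefs) {
            System.out.println(href); // prints the two post URLs
        }
    }
}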

Principle Examples

Basic principle

Cluster principle

Quick Start

Add the Maven dependency (already synced to the Maven Central repository):

<dependency>
    <groupId>cn.wanghaomiao</groupId>
    <artifactId>SeimiCrawler</artifactId>
    <version>0.1.0</version>
</dependency>


Add a crawler rule under the crawlers package, for example:

@Crawler(name = "basic")
public class Basic extends BaseSeimiCrawler {
    @Override
    public String[] startUrls() {
        return new String[]{"http://www.cnblogs.com/"};
    }

    @Override
    public void start(Response response) {
        JXDocument doc = response.document();
        try {
            // Extract the href of every post-title link on the start page
            List<Object> urls = doc.sel("//a[@class='titlelnk']/@href");
            logger.info("{}", urls.size());
            for (Object s : urls) {
                // Queue each URL and have its response delivered to getTitle()
                push(new Request(s.toString(), "getTitle"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void getTitle(Response response) {
        JXDocument doc = response.document();
        try {
            logger.info("url:{} {}", response.getUrl(),
                    doc.sel("//h1[@class='postTitle']/a/text()|//a[@id='cb_post_title_url']/text()"));
            // do something
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
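The XPath expression passed to doc.sel(...) in getTitle uses the union operator | so that a single expression matches either of the two title markups found on the crawled pages. The standalone sketch below shows how that union behaves; it uses JsoupXpath directly under the same assumptions as the earlier example (recent release, org.seimicrawler.xpath package), and the two sample fragments are invented for illustration.

import org.seimicrawler.xpath.JXDocument;

import java.util.List;

public class TitleXpathDemo {
    public static void main(String[] args) throws Exception {
        // Two invented fragments, one per title markup matched by the union XPath
        String pageA = "<h1 class='postTitle'><a href='#'>Title from markup A</a></h1>";
        String pageB = "<a id='cb_post_title_url' href='#'>Title from markup B</a>";
        String xpath = "//h1[@class='postTitle']/a/text()|//a[@id='cb_post_title_url']/text()";

        for (String page : new String[]{pageA, pageB}) {
            // Whichever branch of the union matches, the page yields its title text
            List<Object> titles = JXDocument.create(page).sel(xpath);
            System.out.println(titles);
        }
    }
}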


Then add a startup main function under any package and start SeimiCrawler:

public class Boot {
    public static void main(String[] args) {
        Seimi s = new Seimi();
        s.start("basic");
    }
}
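Note that the name passed to s.start(...) must match the name declared in the @Crawler annotation on the crawler class ("basic" in the example above); that is how Seimi decides which crawler rule to launch.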


That is the complete development process for a simple crawler system, and it is easy to get started with. If you want to dig deeper, visit SeimiCrawler's official homepage, where more detailed documentation is available.
