1. What is crawler4j?
crawler4j is an open-source Java crawler library that can be used to build multi-threaded web crawlers that fetch and process page content.
2. How do I get crawler4j?
crawler4j's official project page is here, and the current version is 4.1. If you use Maven, you can pull it in with the POM dependency shown below; to download the jar directly instead, click here.
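A minimal sketch of the Maven dependency, assuming the edu.uci.ics:crawler4j coordinates published on Maven Central; check the project page for the exact current version:

<dependency>
    <groupId>edu.uci.ics</groupId>
    <artifactId>crawler4j</artifactId>
    <version>4.1</version>
</dependency>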
3. How to use crawler4j?
Using crawler4j takes two steps: first, implement a crawler class that extends edu.uci.ics.crawler4j.crawler.WebCrawler; second, run that crawler through a CrawlController.
package com.favccxx.favsoft.favcrawler;

import java.util.Set;
import java.util.regex.Pattern;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class FavWebCrawler extends WebCrawler {

    private static final Logger logger = LoggerFactory.getLogger(WebCrawler.class);

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg"
            + "|png|mp3|mp4|zip|gz))$");

    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.oschina.net/");
    }

    /**
     * This method is called to handle a crawled page.
     */
    @Override
    public void visit(Page page) {
        int docid = page.getWebURL().getDocid();
        String url = page.getWebURL().getURL();
        String domain = page.getWebURL().getDomain();
        String path = page.getWebURL().getPath();
        String subDomain = page.getWebURL().getSubDomain();
        String parentUrl = page.getWebURL().getParentUrl();
        String anchor = page.getWebURL().getAnchor();

        logger.debug("Docid: {}", docid);
        logger.info("URL: {}", url);
        logger.debug("Domain: '{}'", domain);
        logger.debug("Sub-domain: '{}'", subDomain);
        logger.debug("Path: '{}'", path);
        logger.debug("Parent page: {}", parentUrl);
        logger.debug("Anchor text: {}", anchor);

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String html = htmlParseData.getHtml();
            Set<WebURL> links = htmlParseData.getOutgoingUrls();

            logger.debug("Text length: " + text.length());
            logger.debug("Html length: " + html.length());
            logger.debug("Number of outgoing links: " + links.size());
        }
    }
}
package com.favccxx.favsoft.favcrawler;

import java.util.Set;
import java.util.regex.Pattern;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    private static final Logger logger = LoggerFactory.getLogger(WebCrawler.class);

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg"
            + "|png|mp3|mp4|zip|gz))$");

    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.oschina.net/");
    }

    /**
     * This function is called when a page is fetched and ready
     * to be processed by your program.
     */
    @Override
    public void visit(Page page) {
        int docid = page.getWebURL().getDocid();
        String url = page.getWebURL().getURL();
        String domain = page.getWebURL().getDomain();
        String path = page.getWebURL().getPath();
        String subDomain = page.getWebURL().getSubDomain();
        String parentUrl = page.getWebURL().getParentUrl();
        String anchor = page.getWebURL().getAnchor();

        logger.debug("Docid: {}", docid);
        logger.info("URL: {}", url);
        logger.debug("Domain: '{}'", domain);
        logger.debug("Sub-domain: '{}'", subDomain);
        logger.debug("Path: '{}'", path);
        logger.debug("Parent page: {}", parentUrl);
        logger.debug("Anchor text: {}", anchor);

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String html = htmlParseData.getHtml();
            Set<WebURL> links = htmlParseData.getOutgoingUrls();

            logger.debug("Text length: " + text.length());
            logger.debug("Html length: " + html.length());
            logger.debug("Number of outgoing links: " + links.size());
        }
    }

    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/data/crawl/root";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed URLs. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages.
         */
        controller.addSeed("http://www.oschina.net/");
        // controller.addSeed("http://www.ics.uci.edu/~welling/");
        // controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(MyCrawler.class, numberOfCrawlers);
    }
}
4. Common crawler4j configuration
crawler4j's configuration lives in edu.uci.ics.crawler4j.crawler.CrawlConfig. The configuration properties are described below, followed by a short sketch of how they are set in code.
crawlStorageFolder: the folder used for intermediate/temporary storage of crawl data on disk.
resumableCrawling: whether to resume a crawl that previously stopped abnormally or left corrupted data; off by default. Turning it on reduces crawl efficiency.
maxDepthOfCrawling: maximum crawl depth. The default is -1, i.e. unlimited depth.
maxPagesToFetch: maximum number of pages to fetch. The default is -1, i.e. unlimited.
userAgentString: the user agent presented to web servers. The default is "crawler4j (http://code.google.com/p/crawler4j/)".
politenessDelay: the delay in milliseconds between two requests to the same host. The default is 200.
includeHttpsPages: whether to include HTTPS pages. Included by default.
includeBinaryContentInCrawling: whether to crawl binary content such as images and audio. Not crawled by default.
maxConnectionsPerHost: maximum number of connections per host. The default is 100.
maxTotalConnections: total number of connections across all hosts. The default is 100.
socketTimeout: socket timeout in milliseconds. The default is 20000.
connectionTimeout: connection timeout in milliseconds. The default is 30000.
maxOutgoingLinksToFollow: maximum number of outgoing links to follow per page. The default is 5000.
maxDownloadSize: maximum download size per page, in bytes. The default is 1048576 (1 MB); anything beyond this limit is not downloaded.
followRedirects: whether to follow redirected pages. Followed by default.
proxyHost: proxy host address, used only when crawling through a proxy.
proxyPort: proxy port number.
proxyUsername: proxy user name.
proxyPassword: proxy password.
authInfos: authentication information for sites that require login.
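As a rough sketch of how these properties are applied (assuming the CrawlConfig setters in crawler4j 4.x that correspond to the properties above; the class name CrawlConfigExample and all values are only illustrative, not recommendations):

package com.favccxx.favsoft.favcrawler;

import edu.uci.ics.crawler4j.crawler.CrawlConfig;

public class CrawlConfigExample {

    public static void main(String[] args) {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/data/crawl/root"); // intermediate crawl data goes here
        config.setResumableCrawling(false);               // do not resume a previous crawl
        config.setMaxDepthOfCrawling(3);                  // -1 would mean unlimited depth
        config.setMaxPagesToFetch(1000);                  // -1 would mean unlimited pages
        config.setPolitenessDelay(200);                   // milliseconds between requests to the same host
        config.setIncludeHttpsPages(true);
        config.setIncludeBinaryContentInCrawling(false);
        config.setMaxDownloadSize(1048576);               // 1 MB per page
        config.setFollowRedirects(true);
        // Only needed when crawling through a proxy:
        // config.setProxyHost("proxy.example.com");
        // config.setProxyPort(8080);
        System.out.println("Politeness delay: " + config.getPolitenessDelay());
    }
}

The resulting config object is then passed to PageFetcher and CrawlController exactly as in the MyCrawler example above.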
This article is from the "This Person's IT World" blog; please keep this source when reposting: http://favccxx.blog.51cto.com/2890523/1691079