Heritrix has five processor chains. Tutorials online say that content processing should be done in the extractor chain, which is responsible for parsing the content of HTML pages and doing further filtering. But here I only want to filter URLs by suffix, keeping .html, .htm, .shtml, .xhtml and so on, so doing this in an extractor would be overkill. Instead, I do the processing in the post-processor chain. The details are as follows:
FrontierScheduler is a post-processor; its role is to add the links parsed out by the extractors to the Frontier for the next stage of processing (writing to file, etc.).
Specific methods:
1. Locate the FrontierScheduler.java file under the org.archive.crawler.postprocessor package
2. Find the protected void schedule(CandidateURI caUri) method of the FrontierScheduler class
3. My rewrite is as follows:
    protected void schedule(CandidateURI caUri) {
        // Convert the CandidateURI to string form
        String url = caUri.toString();
        // Print it out for inspection
        System.out.println("------" + url);
        // Drop URLs that end with one of the excluded suffixes
        if (url.endsWith(".jpeg") || url.endsWith(".jpg")
                || url.endsWith(".gif") || url.endsWith(".css")
                || url.endsWith(".doc") || url.endsWith(".zip")
                || url.endsWith(".png") || url.endsWith(".js")
                || url.endsWith(".pdf") || url.endsWith(".xls")
                || url.endsWith(".rar") || url.endsWith(".exe")
                || url.endsWith(".txt")) {
            return;
        }
        // Hand every non-excluded URI to the Frontier for the next
        // stage of processing (writing to local disk, etc.)
        getController().getFrontier().schedule(caUri);
    }
With this change, Heritrix crawls only specific page types such as .html and .htm.
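Since the goal is to keep only a handful of page suffixes, an allow-list is shorter and safer than the block-list above (a block-list still lets through anything not explicitly excluded). Below is a minimal, self-contained sketch of such a suffix check; the class name SuffixFilter, the helper shouldSchedule, and the exact suffix set are my own choices for illustration, not part of the Heritrix API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class SuffixFilter {
    // Page suffixes we want to crawl; everything else is dropped.
    private static final List<String> ALLOWED =
            Arrays.asList(".html", ".htm", ".shtml", ".xhtml");

    // Hypothetical helper: decide whether a URL should be scheduled.
    // A URL whose last path segment has no dot (e.g. a directory URL)
    // is kept, since it may still serve an HTML page.
    public static boolean shouldSchedule(String url) {
        String lower = url.toLowerCase(Locale.ROOT);
        int lastSlash = lower.lastIndexOf('/');
        int lastDot = lower.lastIndexOf('.');
        if (lastDot <= lastSlash) {
            return true; // no suffix, e.g. http://example.com/news/
        }
        for (String suffix : ALLOWED) {
            if (lower.endsWith(suffix)) {
                return true;
            }
        }
        return false;
    }
}
```

Inside the overridden schedule(CandidateURI caUri) method, one would then simply write `if (!shouldSchedule(caUri.toString())) return;` before handing the URI to the Frontier.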