Foreword
I recently came across the Gecco crawler framework and found it quite easy to use, so I wrote a demo to try it out, crawling the site http://zj.zjol.com.cn/home.html and taking the news headlines and their publish times as the crawl targets. Selecting HTML nodes is very convenient because the cssPath works like a jQuery selector, and Gecco uses annotations to implement URL matching, which keeps the code concise and clean.
Gecco on GitHub
https://github.com/xtuhcy/gecco
Gecco author's blog
http://my.oschina.net/u/2336761/blog?fromerr=ZuKKo3fH
Add the Maven dependency
<dependency>
    <groupId>com.geccocrawler</groupId>
    <artifactId>gecco</artifactId>
    <version>1.0.8</version>
</dependency>
Writing the list page crawler
@Gecco(matchUrl = "http://zj.zjol.com.cn/home.html?pageIndex={pageIndex}&pageSize={pageSize}", pipelines = "zjNewsListPipelines")
public class ZjNewsGeccoList implements HtmlBean {

    @Request
    private HttpRequest request;

    @RequestParameter
    private int pageIndex;

    @RequestParameter
    private int pageSize;

    @HtmlField(cssPath = "#content > div > div > div.con_index > div.r.main_mod > div > ul > li > dl > dt > a")
    private List<HrefBean> newList;
}
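The listing above leaves out the bean's getters and setters. The pipeline in the next step calls getRequest(), getNewList() and getPageIndex(), so at least those accessors have to exist; a sketch of the omitted boilerplate (types taken from the fields above) could look like this:

    // conventional accessors for ZjNewsGeccoList; the pipeline below relies on the getters
    public HttpRequest getRequest() {
        return request;
    }

    public void setRequest(HttpRequest request) {
        this.request = request;
    }

    public int getPageIndex() {
        return pageIndex;
    }

    public void setPageIndex(int pageIndex) {
        this.pageIndex = pageIndex;
    }

    public int getPageSize() {
        return pageSize;
    }

    public void setPageSize(int pageSize) {
        this.pageSize = pageSize;
    }

    public List<HrefBean> getNewList() {
        return newList;
    }

    public void setNewList(List<HrefBean> newList) {
        this.newList = newList;
    }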
@PipelineName("zjNewsListPipelines")
public class ZjNewsListPipelines implements Pipeline<ZjNewsGeccoList> {

    public void process(ZjNewsGeccoList zjNewsGeccoList) {
        HttpRequest request = zjNewsGeccoList.getRequest();
        for (HrefBean bean : zjNewsGeccoList.getNewList()) {
            // go to the detail page and crawl it
            SchedulerContext.into(request.subRequest("http://zj.zjol.com.cn" + bean.getUrl()));
        }
        int page = zjNewsGeccoList.getPageIndex() + 1;
        String nextUrl = "http://zj.zjol.com.cn/home.html?pageIndex=" + page + "&pageSize=100";
        // crawl the next page
        SchedulerContext.into(request.subRequest(nextUrl));
    }
}
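As written, the pipeline unconditionally schedules the next page, so the engine keeps requesting pages even after the listing is exhausted. A possible variation, my own sketch rather than part of the original demo, is to stop paginating once a page comes back without links:

    public void process(ZjNewsGeccoList zjNewsGeccoList) {
        HttpRequest request = zjNewsGeccoList.getRequest();
        List<HrefBean> links = zjNewsGeccoList.getNewList();
        if (links == null || links.isEmpty()) {
            // nothing matched on this page, stop paginating
            return;
        }
        for (HrefBean bean : links) {
            SchedulerContext.into(request.subRequest("http://zj.zjol.com.cn" + bean.getUrl()));
        }
        // only queue the next page while the current one still has results
        int page = zjNewsGeccoList.getPageIndex() + 1;
        SchedulerContext.into(request.subRequest(
                "http://zj.zjol.com.cn/home.html?pageIndex=" + page + "&pageSize=100"));
    }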
Writing the detail page crawler
@Gecco(matchUrl = "http://zj.zjol.com.cn/news/{code}.html", pipelines = "zjNewsDetailPipeline")
public class ZjNewsDetail implements HtmlBean {

    @Text
    @HtmlField(cssPath = "#headline")
    private String title;

    @Text
    @HtmlField(cssPath = "#content > div > div.news_con > div.news-content > div:nth-child(1) > div > p.go-left.post-time.c-gray")
    private String createTime;
}
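As with the list bean, the getters and setters are omitted here; the detail pipeline below calls getTitle() and getCreateTime(), so the bean would also need accessors along these lines:

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getCreateTime() {
        return createTime;
    }

    public void setCreateTime(String createTime) {
        this.createTime = createTime;
    }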
@PipelineName("zjNewsDetailPipeline")
public class ZjNewsDetailPipeline implements Pipeline<ZjNewsDetail> {

    public void process(ZjNewsDetail zjNewsDetail) {
        System.out.println(zjNewsDetail.getTitle() + "  " + zjNewsDetail.getCreateTime());
    }
}
The main function to start the crawler
public class Main {

    public static void main(String[] args) {
        GeccoEngine.create()
                // package path of the project
                .classpath("com.zhaochao.gecco.zj")
                // page address to start crawling from
                .start("http://zj.zjol.com.cn/home.html?pageIndex=1&pageSize=100")
                // number of crawler threads to start
                .thread(10)
                // interval between requests for a single crawler thread
                .interval(10)
                // use a PC (non-mobile) UserAgent
                .mobile(false)
                // start running
                .run();
    }
}
Crawl results
Complete project code
http://git.oschina.net/whzhaochao/geccoDemo