Today I learned about writing a Java crawler. First, download the jsoup package, then import it into the project: right-click the project, choose Build Path -> Configure Build Path, then click Add External JARs to add the downloaded jar.
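Alternatively, if the project is managed by Maven rather than a manually added jar, jsoup can be declared as a dependency (the version below is an example; check https://jsoup.org/ for the latest release):

```xml
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.15.3</version>
</dependency>
```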
Reference documentation: https://jsoup.org/
But there is a small problem: the crawler does fetch questions from Zhihu, but only the top three, even though the general approach seems correct. I don't know how to solve this; if anyone has an idea, advice would be appreciated. (A likely cause, though I have not verified it, is that Zhihu loads most of the home-page feed dynamically with JavaScript, which Jsoup cannot execute, so only the statically rendered items are visible.)
```java
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Worm0 {
    public static void main(String[] args) throws IOException {
        // Download the Zhihu home page
        Document document = Jsoup.connect("https://www.zhihu.com/").get();
        Elements main = document.select(".ContentLayout-mainColumn");
        // Each question title is an <a> inside an <h2 class="ContentItem-title">
        Elements links = main.select("h2[class=ContentItem-title]").select("a");
        System.out.println("url" + links);
        for (Element question : links) {
            // The href value is the link to each question on the home page
            String url = question.attr("abs:href");
            // Download the question page
            Document document2 = Jsoup.connect(url).get();
            // Question title
            Elements title = document2.select(".QuestionHeader-title");
            // Question description
            Elements detail = document2.select("span[class=RichText ztext]");
            // Answer
            Elements answer = document2.select(".RichContent-inner");
            System.out.println("\n" + "link: " + url
                    + "\n" + "title: " + title.text()
                    + "\n" + "description: " + detail.text()
                    + "\n" + "answer: " + answer.text());
        }
    }
}
```
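The original post also imports BufferedWriter and FileWriter, suggesting an intent to save the crawled questions to a file rather than only printing them. A minimal stdlib sketch of that step (the file name `questions.txt` and method name `save` are my choices, not from the post):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class SaveResults {
    // Append each crawled entry (e.g. "title: ..." or "answer: ...")
    // to a text file, one entry per line.
    static void save(String path, List<String> entries) throws IOException {
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(path, true))) {
            for (String entry : entries) {
                writer.write(entry);
                writer.newLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        save("questions.txt", List.of("title: example question", "answer: example answer"));
    }
}
```

Inside the crawler's loop, the strings built for `System.out.println` could be collected into a list and passed to a method like this instead.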
Java: crawling Zhihu home-page questions with Jsoup