Recently, I followed a video tutorial to learn how to crawl the 36kr website. It seemed simple enough: using jsoup, I quickly fetched the page's HTML. But when I analyzed the page to extract the data I needed, all I got was a single root tag with nothing inside it. My very first crawler hit this problem, which was frustrating. After asking a few people, I learned the cause: the page data I wanted is built with React, that is, it is rendered in the browser by a JavaScript library. At that point I understood both the reason and how to capture the content.
Jsoup does not support parsing pages dynamically rendered by JavaScript, so I chose HtmlUnit instead.
First, download the jar package from the official HtmlUnit website.
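If you manage dependencies with Maven, you can pull HtmlUnit in instead of downloading the jar by hand. A minimal sketch, assuming the classic `net.sourceforge.htmlunit` coordinates; the version number below is only an example, so check the official site for the current release:

```xml
<dependency>
    <groupId>net.sourceforge.htmlunit</groupId>
    <artifactId>htmlunit</artifactId>
    <!-- example version; use the latest release -->
    <version>2.70.0</version>
</dependency>
```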
Let's take a look at the parsing process with HtmlUnit:
```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlDivision;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

final WebClient webClient = new WebClient();
webClient.getOptions().setCssEnabled(false);        // disable CSS; we only need the DOM
webClient.getOptions().setJavaScriptEnabled(true);  // must be true; with false the React content is never rendered
final HtmlPage page = webClient.getPage("https://36kr.com/");
HtmlDivision htmlDiv = page.querySelector("#app");  // obtain the div with id "app"
System.out.println(htmlDiv.asXml());
webClient.close();
```
With setJavaScriptEnabled(true), a lot of warning output appears at runtime. These warnings come from HtmlUnit's JavaScript engine complaining about scripts it cannot fully process; they usually do not affect the result.
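One way to silence that output is to turn off the relevant loggers before creating the WebClient. This is a sketch assuming the classic `com.gargoylesoftware` package namespace and the default `java.util.logging` backend; the class name `QuietHtmlUnit` is just an illustrative placeholder:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietHtmlUnit {
    public static void main(String[] args) {
        // Disable all log output from HtmlUnit's package hierarchy.
        Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
        // The underlying HTTP client can also be noisy.
        Logger.getLogger("org.apache.http").setLevel(Level.OFF);
        // ... create the WebClient and crawl as usual after this point.
    }
}
```

Run this (or place the two lines at the top of your own main method) before instantiating WebClient, since loggers emit as soon as the first page is fetched.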
That was the first step of my crawler journey.