There are two common ways to make an HTTP request: GET and POST. Here the GET method is used to request "https://www.baidu.com"; a small POST sketch follows the GET example below.
First, download and import the Apache HttpClient package: http://hc.apache.org/httpcomponents-client-4.5.x/download.html
Here is the code:
package test;

import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class Test1 {
    public static void main(String[] args) {
        // Create the HttpClient client
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Create a GET method; the target URL is Baidu
        HttpGet hg = new HttpGet("http://baidu.com");
        // Print the acquired HTML content
        httpGetMethod(httpClient, hg);
        // Release the connection
        hg.releaseConnection();
    }

    public static void httpGetMethod(CloseableHttpClient httpClient, HttpGet hg) {
        try {
            // Execute the request
            HttpResponse resp = httpClient.execute(hg);
            // Get the HTML entity of the request result
            HttpEntity entity = resp.getEntity();
            // Convert the entity to a String with EntityUtils.toString, using GBK encoding
            String entityString = EntityUtils.toString(entity, "GBK");
            System.out.println("Get page content: \n" + entityString);
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
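Since POST was mentioned above as the other request method, here is a minimal sketch of the equivalent POST request with the same HttpClient 4.x API. The target URL "http://httpbin.org/post" and the form parameter "key"/"value" are placeholders chosen only for illustration; they are not from the original article.

package test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class PostExample {
    public static void main(String[] args) throws IOException {
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Hypothetical test endpoint used only for illustration
        HttpPost hp = new HttpPost("http://httpbin.org/post");
        // Build the form parameters to send in the request body
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("key", "value"));
        hp.setEntity(new UrlEncodedFormEntity(params, "UTF-8"));
        // Execute the request and print the response body
        CloseableHttpResponse resp = httpClient.execute(hp);
        try {
            String body = EntityUtils.toString(resp.getEntity(), "UTF-8");
            System.out.println("Post page content: \n" + body);
        } finally {
            resp.close();
            httpClient.close();
        }
    }
}

The structure is the same as the GET example: create the client, create the request object, execute it, and read the response entity with EntityUtils; the only difference is that the POST carries a form entity in its body.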
This article is from the "Love Technology, Love life" blog, please be sure to keep this source http://youmiao.blog.51cto.com/6833914/1727852
Fetching a web page in Java with the GET method