I. Preface
On a website I came across a "webpage capture" feature that looked interesting, but the source code was not provided, so I decided to write it myself. The code turns out to be fairly simple.
II. Code
<%@ page contentType="text/html; charset=gb2312" %>
<%
String sCurrentLine;
String sTotalString;
sCurrentLine = "";
sTotalString = "";
java.io.InputStream l_urlStream;
// Open a connection to the target page
java.net.URL l_url = new java.net.URL("http://www.163.net/");
java.net.HttpURLConnection l_connection = (java.net.HttpURLConnection) l_url.openConnection();
l_connection.connect();
l_urlStream = l_connection.getInputStream();
// Read the response line by line and accumulate it into one string
java.io.BufferedReader l_reader = new java.io.BufferedReader(new java.io.InputStreamReader(l_urlStream));
while ((sCurrentLine = l_reader.readLine()) != null)
{
    sTotalString += sCurrentLine;
}
// Write the captured page into the JSP output
out.println(sTotalString);
%>
III. Postscript
Although the code is simple, I think a "web crawler" could be built on top of it: find the href links on a page, fetch each linked page, capture it, and keep going (limiting the crawl depth, of course). In this way you could implement a "webpage search" feature. A rough sketch of that idea follows.
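The following is only a minimal illustration of the idea, written as a standalone Java class rather than a JSP page; the class name SimpleCrawler, the deliberately naive regex for href extraction, and the depth limit of 2 are my own assumptions, not part of the original code. It captures a page the same way as above, pulls out absolute http links, and recursively captures those up to a fixed depth, skipping URLs it has already seen.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleCrawler {
    // Very simple pattern for absolute href links; real HTML parsing is more involved.
    private static final Pattern HREF =
        Pattern.compile("href\\s*=\\s*\"(http[^\"]+)\"", Pattern.CASE_INSENSITIVE);

    private final Set<String> visited = new HashSet<String>();

    // Capture one page, then follow its links until maxDepth levels have been visited.
    public void crawl(String url, int depth, int maxDepth) {
        if (depth > maxDepth || !visited.add(url)) {
            return; // depth limit reached or page already captured
        }
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.connect();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            StringBuilder page = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                page.append(line);
            }
            reader.close();
            System.out.println("Captured: " + url + " (" + page.length() + " chars)");

            // Find href links in the captured page and capture each target in turn.
            Matcher m = HREF.matcher(page);
            while (m.find()) {
                crawl(m.group(1), depth + 1, maxDepth);
            }
        } catch (Exception e) {
            System.out.println("Skipped: " + url + " (" + e.getMessage() + ")");
        }
    }

    public static void main(String[] args) {
        // Start from the same example site as above and stop after two levels of links.
        new SimpleCrawler().crawl("http://www.163.net/", 0, 2);
    }
}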