Java: The Principle and Implementation of a Web Crawler That Acquires Webpage Source Code
1. A web crawler is a program that automatically retrieves web pages; it downloads pages from the World Wide Web and is an important component of a search engine. A traditional crawler starts from the URLs of one or several seed pages, extracts the URLs found on those pages, and, while crawling, keeps extracting new URLs from the current page and placing them into a queue, until some stop condition of the system is met.
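The queue-driven crawl loop described above can be sketched as follows. This is a minimal illustration, not production code; the fetchAndExtractLinks helper is hypothetical and stands in for the page-download and link-extraction logic.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class CrawlLoop {
    // Hypothetical helper: downloads the page at 'url' and returns the URLs found on it.
    static List<String> fetchAndExtractLinks(String url) {
        // ... download the page, parse out the href values ...
        return new ArrayList<>();
    }

    public static void main(String[] args) {
        Queue<String> frontier = new ArrayDeque<>();   // URLs waiting to be crawled
        Set<String> visited = new HashSet<>();         // URLs already crawled
        frontier.add("http://www.sina.com.cn");        // seed URL

        int limit = 100;                               // stop condition: page budget
        while (!frontier.isEmpty() && visited.size() < limit) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;           // skip already-visited URLs
            for (String link : fetchAndExtractLinks(url)) {
                if (!visited.contains(link)) frontier.add(link);
            }
        }
    }
}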
2. So how does a program fetch a web page? The flow is simple: the client first sends an HTTP request to the server; the server then returns the corresponding response, or, if the request times out, the client reports the error on its own side.
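Because a request may time out, it is worth setting explicit timeouts on the connection. Below is a minimal sketch, assuming the standard java.net.HttpURLConnection API; the 5-second and 10-second limits are arbitrary example values.

import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimeoutDemo {
    public static void main(String[] args) {
        try {
            URL url = new URL("http://www.sina.com.cn");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000);  // fail if the TCP connection takes more than 5 s
            conn.setReadTimeout(10000);    // fail if the server stalls while sending data
            System.out.println("Response code: " + conn.getResponseCode());
        } catch (SocketTimeoutException e) {
            // the client reports the timeout error itself, as described above
            System.out.println("Request timed out: " + e);
        } catch (Exception e) {
            System.out.println("Request failed: " + e);
        }
    }
}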
An HTTP request to a server is actually a request for a file on that server. The following table lists some common HTTP requests and the files they correspond to. (Because the first column gives only the host's URL, the host typically maps the request to the site's default home page, such as index.php, index.jsp, or index.html.)
HTTP request | Corresponding file
http://www.baidu.com | http://www.baidu.com/index.php
http://www.sina.com.cn | http://www.sina.com.cn/index.html
http://www.cnblogs.com | http://www.cnblogs.com/index.html
http://ac.jobdu.com | http://ac.jobdu.com/index.php
3. Steps for accessing a web page's source code in Java:
(1) Create a URL object representing the URL to be accessed, e.g.: url = new URL("http://www.sina.com.cn");
(2) Open an HTTP connection, which returns an HttpURLConnection object, e.g.: urlConnection = (HttpURLConnection) url.openConnection();
(3) Get the HTTP status code of the response, e.g.: responseCode = urlConnection.getResponseCode();
(4) If the HTTP status code is 200, the request succeeded; obtain the input stream from the URLConnection object and read the requested page's source code from it.
4. Java code that fetches a page's source code:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebPageSource {
    public static void main(String[] args) {
        URL url;
        int responseCode;
        HttpURLConnection urlConnection;
        BufferedReader reader;
        String line;
        try {
            // Create a URL object; the page whose source we want is http://www.sina.com.cn
            url = new URL("http://www.sina.com.cn");
            // Open the connection to the URL
            urlConnection = (HttpURLConnection) url.openConnection();
            // Get the server's response code
            responseCode = urlConnection.getResponseCode();
            if (responseCode == 200) {
                // Get the input stream, from which the page content is read
                reader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream(), "GBK"));
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            } else {
                System.out.println("Failed to get the page source; the server response code is: " + responseCode);
            }
        } catch (Exception e) {
            System.out.println("Failed to get the page source; an exception occurred: " + e);
        }
    }
}
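One limitation of the code above is the hardcoded "GBK" charset, which suits www.sina.com.cn but not every site. Below is a minimal sketch of reading the charset from the Content-Type response header instead, falling back to UTF-8 when none is declared; the header parsing here is a simplified assumption, not a full parser.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebPageSourceAutoCharset {
    public static void main(String[] args) {
        try {
            URL url = new URL("http://www.sina.com.cn");
            HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
            if (urlConnection.getResponseCode() == 200) {
                // e.g. "text/html; charset=GBK" -> extract "GBK"; otherwise fall back to UTF-8
                String contentType = urlConnection.getContentType();
                String charset = "UTF-8";
                if (contentType != null && contentType.toLowerCase().contains("charset=")) {
                    charset = contentType.substring(contentType.toLowerCase().indexOf("charset=") + 8).trim();
                }
                // try-with-resources closes the reader automatically
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(urlConnection.getInputStream(), charset))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
            }
        } catch (Exception e) {
            System.out.println("Failed to get the page source: " + e);
        }
    }
}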