Original article; please credit the source when reproducing (http://blog.csdn.net/panjunbiao/article/details/16960029).

While crawling a website with Apache Nutch 1.7, the following error appeared:
```
2013-11-25 15:23:37,793 INFO  api.HttpRobotRulesParser - Couldn't get robots.txt for http://www.xxx.com/: java.io.EOFException
2013-11-25 15:23:37,893 ERROR http.Http - Failed to get protocol output
java.io.EOFException
        at org.apache.nutch.protocol.http.HttpResponse.readLine(HttpResponse.java:427)
        at org.apache.nutch.protocol.http.HttpResponse.parseStatusLine(HttpResponse.java:319)
        at org.apache.nutch.protocol.http.HttpResponse.<init>(HttpResponse.java:154)
        at org.apache.nutch.protocol.http.Http.getResponse(Http.java:64)
        at org.apache.nutch.protocol.http.api.HttpBase.getProtocolOutput(HttpBase.java:140)
        at org.apache.nutch.fetcher.Fetcher$FetcherThread.run(Fetcher.java:703)
2013-11-25 15:23:37,903 INFO  fetcher.Fetcher - fetch of http://www.xxxx.com/ failed with: java.io.EOFException
```
In other words, Nutch could not fetch the robots.txt file. Inspecting the HTTP request showed nothing obviously wrong, and other sites could be crawled normally.

Capturing packets with Wireshark revealed that the GET request to this particular site received no response at all, which suggested a User-Agent problem.

To test this with curl, first send a User-Agent that contains the Nutch keyword:
```
jamess-macbook-pro:~ james$ curl -A "friendly crawler/Nutch-1.7" -v http://www.****.com/robots.txt
* Adding handle: conn: 0x7fe33400fe00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fe33400fe00) send_pipe: 1, recv_pipe: 0
* About to connect() to www.****.com port 80 (#0)
*   Trying 42.121.98.156...
* Connected to www.****.com (42.121.98.156) port 80 (#0)
> GET /robots.txt HTTP/1.1
> User-Agent: friendly crawler/Nutch-1.7
> Host: www.****.com
> Accept: */*
>
* Empty reply from server
* Connection #0 to host www.****.com left intact
curl: (52) Empty reply from server
```
Then retry with a User-Agent that does not contain the Nutch keyword:
```
jamess-macbook-pro:~ james$ curl -A "Chrome" -v http://www.****.com/robots.txt
* Adding handle: conn: 0x7f82f180fe00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f82f180fe00) send_pipe: 1, recv_pipe: 0
* About to connect() to www.****.com port 80 (#0)
*   Trying 42.121.98.156...
* Connected to www.****.com (42.121.98.156) port 80 (#0)
> GET /robots.txt HTTP/1.1
> User-Agent: Chrome
> Host: www.****.com
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx/1.5.6 is not blacklisted
< Server: nginx/1.5.6
< Date: Mon, 08:51:13 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 60
< Last-Modified: Tue, Dec 01:29:47 GMT
< Connection: keep-alive
< ETag: "50bd520b-3c"
< X-UA-Compatible: IE=edge,chrome=1
< X-XSS-Protection: 1; mode=block
< Accept-Ranges: bytes
<
User-agent: *
Disallow: /robot/trap
Disallow: /page/6559999
* Connection #0 to host www.****.com left intact
```
Sure enough, the target site drops any request whose User-Agent contains the Nutch keyword. So where is this User-Agent string assembled? Searching the Nutch source code for http.agent.name leads to ./plugin/lib-http/src/java/org/apache/nutch/protocol/http/api/HttpBase.java:
```java
// Inherited Javadoc
public void setConf(Configuration conf) {
  this.conf = conf;
  this.proxyHost = conf.get("http.proxy.host");
  this.proxyPort = conf.getInt("http.proxy.port", 8080);
  this.useProxy = (proxyHost != null && proxyHost.length() > 0);
  this.timeout = conf.getInt("http.timeout", 10000);
  this.maxContent = conf.getInt("http.content.limit", 64 * 1024);
  this.userAgent = getAgentString(conf.get("http.agent.name"),
      conf.get("http.agent.version"), conf.get("http.agent.description"),
      conf.get("http.agent.url"), conf.get("http.agent.email"));
  this.acceptLanguage = conf.get("http.accept.language", acceptLanguage);
  this.accept = conf.get("http.accept", accept);
  // backward-compatible default setting
  this.useHttp11 = conf.getBoolean("http.usehttp11", false);
  this.robots.setConf(conf);
  logConf();
}
```
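To make the role of each property concrete, here is a rough sketch (my own approximation, not the verbatim Nutch source) of how a getAgentString-style method would combine the http.agent.* values into a User-Agent header of the common form "name/version (description; url; email)":

```java
// Approximate reconstruction of the User-Agent assembly logic;
// the class and exact format here are illustrative, not copied from Nutch.
public class AgentString {

    static String getAgentString(String agentName, String agentVersion,
                                 String agentDesc, String agentURL,
                                 String agentEmail) {
        StringBuilder buf = new StringBuilder();
        buf.append(agentName);                       // http.agent.name
        if (agentVersion != null && !agentVersion.isEmpty()) {
            buf.append('/').append(agentVersion);    // http.agent.version
        }
        // Optional "(description; url; email)" suffix
        boolean first = true;
        for (String part : new String[] { agentDesc, agentURL, agentEmail }) {
            if (part != null && !part.isEmpty()) {
                buf.append(first ? " (" : "; ").append(part);
                first = false;
            }
        }
        if (!first) {
            buf.append(')');
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        // A Nutch-style version string puts "Nutch" into the header,
        // which is exactly what the target site was filtering on.
        System.out.println(getAgentString("friendly crawler", "Nutch-1.7",
                null, null, null));  // friendly crawler/Nutch-1.7
    }
}
```

This shows why the fix is a configuration change: every component of the header comes straight from the http.agent.* properties, so no code change is needed.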
From this it is clear that the User-Agent is built from http.agent.name, http.agent.version, and the other http.agent.* properties. After modifying them so that the resulting string no longer contains the Nutch keyword, the problem was solved.
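For reference, these properties are normally overridden in Nutch's conf/nutch-site.xml. A minimal sketch (the property names come from the code above; the values here are placeholders, not the ones I actually used):

```xml
<!-- conf/nutch-site.xml: override the agent properties so the
     assembled User-Agent no longer contains the "Nutch" keyword -->
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>friendly crawler</value>
  </property>
  <property>
    <name>http.agent.version</name>
    <value>1.7</value>
  </property>
</configuration>
```

After editing the file, re-run the fetch to confirm that robots.txt is retrieved without the EOFException.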