It took me a whole day to get Nutch (Nutch 0.8) working. Today, thanks to a tip from a classmate, I finally got it done. Let me sum up this annoying experience so other folks don't waste their precious time.
Note that any problem you run into is bound to be a configuration problem, not a problem in the source code; otherwise Nutch would not run at all. Running Nutch normally from the console works fine. For the actual crawl steps, see the Nutch 0.8 Tutorial. The key configuration files are nutch-site.xml and hadoop-site.xml under conf. The settings in nutch-site.xml and hadoop-site.xml override the values in nutch-default.xml and hadoop-default.xml respectively.
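To make the override rule concrete, here is a minimal sketch of a nutch-site.xml. The property name fetcher.threads.fetch and its value are just an illustration I picked; check your own nutch-default.xml for the entries you actually want to override:
<?xml version="1.0"?>
<configuration>
<!-- Any property declared here replaces the value shipped in nutch-default.xml. -->
<property>
<name>fetcher.threads.fetch</name>
<value>20</value>
</property>
</configuration>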
1. First, follow the import instructions on the official site (if you are not very familiar with this, you really should refer to them, to avoid wasting time); see: RunNutchInEclipse. Note that importing this way adds all the jars to the classpath. Personally I think nutch-0.8.1.jar should be removed, since it is just all the source files already in the project, packaged as a jar.
2. Delete the nutch-site.xml under src/test. Otherwise multiple copies of nutch-site.xml get loaded at run time, and if you don't change all of them, who knows which one actually gets picked up. Nutch's developers are pretty funny, hiding this file in test just to trip everyone up. It cost me a whole day; of course, my skills aren't great either, since I somehow never spotted this mistake.
3. Run the crawl according to the official documentation. If it still produces no results, add an http.agent.name property to nutch-site.xml:
<property>
<name>http.agent.name</name>
<value>test</value>
</property>
In fact, this is mentioned in the tutorial.
After the three steps above, everything should work.
Another problem I ran into: I am just a normal user on the Linux server, so when I ran Nutch I got an error along the lines of "no permission for the /tmp/... folder". The easiest fix is to ask the administrator to elevate your privileges to root. If that is not an option, there is a better way: modify hadoop-site.xml. How? First find hadoop-default.xml inside hadoop-0.4.0-patched.jar, open it, and locate the properties whose values point to /tmp/... It is worth copying all of those properties, pasting them into hadoop-site.xml, and setting their values to a folder you do have permission to write to. This method should be enough; if it still doesn't work, edit the values directly in hadoop-default.xml: extract the file from the jar (an ordinary archive manager will do), modify the corresponding values, and then put it back in so it overwrites the original.
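For reference, here is a rough sketch of what one of the copied properties might look like in hadoop-site.xml. The property name mapred.local.dir and the path are only examples for illustration; the ones you actually need are whichever entries in your hadoop-default.xml have values under /tmp/...:
<property>
<!-- Example only: point this at a directory your user can write to. -->
<name>mapred.local.dir</name>
<value>/home/youruser/hadoop-tmp/mapred/local</value>
</property>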
That's all. Remember, it is bound to be a configuration problem. If something goes wrong during a run, first check which configuration files were loaded; if several configuration files with the same name were loaded, it is best to remove the extras, because you can't be sure which one the class loader ended up using.