Since this covers the latest Hadoop release at the time of download, the assorted tutorials floating around the web no longer apply. Fortunately, the official website is clear enough. If you take one piece of advice from this article, it is to get over the firewall and go straight to the official documentation.
Official 2.6.0 installation tutorial: http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html
HDFS shell command reference: http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/FileSystemShell.html
Note: the 2.6.0 release is built for 64-bit systems by default; on a 32-bit machine you will keep seeing the following warning. It can be ignored and does not affect the results:
Java HotSpot(TM) Client VM warning: You have loaded library /home/software/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/12/04 21:52:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
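If you want to confirm the bitness mismatch yourself, the file command will show what the bundled native library was built for (the path is relative to the Hadoop install directory):
$ file lib/native/libhadoop.so.1.0.0
On a 32-bit OS it will report a 64-bit ELF object, which is why the warning appears.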
1. Install the JDK. I wrote about this in another article: http://www.cnblogs.com/dplearning/p/4140334.html
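Besides installing the JDK, Hadoop also needs to know where it lives. The official tutorial has you set JAVA_HOME in etc/hadoop/hadoop-env.sh; the path below is just an example, substitute your own installation root:
# in etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/latest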
2. Set up passwordless SSH login: http://www.cnblogs.com/dplearning/p/4140352.html
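If passwordless login is not set up yet, the usual sequence is something like this (standard ssh-keygen defaults; adjust paths if yours differ):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
After that, ssh localhost should log in without asking for a password.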
3. Configuration
etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
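To double-check that the configuration was picked up, hdfs getconf can echo a key back, for example:
$ bin/hdfs getconf -confKey fs.defaultFS
It should print hdfs://localhost:9000.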
4. $ bin/hdfs namenode -format    (format the NameNode)
5. $ sbin/start-dfs.sh    (start the HDFS daemons)
If successful, jps should show a NameNode, a DataNode, and a SecondaryNameNode process (plus Jps itself).
If there is no DataNode, check the logs. If you see an error like
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-root/dfs/data: namenode clusterID = CID-2b67ec7b-5edc-4911-bb22-1bb8092a7613; datanode clusterID = CID-aa4ac802-100d-4d29-813d-c6b92dd78f02
then the /tmp/hadoop-root folder holds leftovers from a previous installation. Empty it, re-run the format, and restart the processes, and everything should be fine.
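Concretely, a sequence like the following should clear it (this assumes the default /tmp/hadoop-root data directory from the error above; note that it wipes all HDFS data):
$ sbin/stop-dfs.sh
$ rm -rf /tmp/hadoop-root
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh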
Running Examples:
1. First create folders on HDFS:
$ bin/hdfs dfs -mkdir -p /user/kzy/input
$ bin/hdfs dfs -mkdir -p /user/kzy/output
2. Upload some files:
$ bin/hdfs dfs -put etc/hadoop /user/kzy/input
This uploads the local etc/hadoop directory to /user/kzy/input on HDFS.
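To verify the upload, list the directory:
$ bin/hdfs dfs -ls /user/kzy/input
You should see a hadoop subdirectory containing the configuration files.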
3. Run the grep example:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep /user/kzy/input/hadoop /user/kzy/output/o 'dfs[a-z.]+'
Note that /user/kzy/output/o must be a folder that does not exist yet; the job will complain if you point it at an existing folder.
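If a previous run already created it, delete it first, for example:
$ bin/hdfs dfs -rm -r /user/kzy/output/o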
4. View the results:
$ bin/hdfs dfs -cat /user/kzy/output/o/*
Then run the WordCount example; the official walkthrough is at http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v1.0
Run
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/kzy/input/hadoop /user/kzy/output/wordcount
then use
$ bin/hdfs dfs -cat /user/kzy/output/wordcount/*
to see the results.
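If you would rather inspect the output locally, you can also pull it down with -get (the local directory name here is arbitrary):
$ bin/hdfs dfs -get /user/kzy/output/wordcount wordcount-output
$ cat wordcount-output/*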
"hadoop2.6.0" Installation + example run