Single-machine installation is mainly used for debugging program logic. The installation steps are essentially the same as for a distributed installation: environment variables, the main Hadoop configuration files, SSH configuration, and so on. The main differences are in the configuration files: the slaves file must be changed, and if dfs.replication is greater than 1 in the distributed configuration, it must be set to 1, because there is only one DataNode.
For a distributed installation, please refer to:
http://acooly.iteye.com/blog/1179828
In a single-machine installation, one machine serves as both the NameNode and JobTracker as well as the DataNode and TaskTracker, and of course also the SecondaryNameNode.
The main configuration files core-site.xml, hdfs-site.xml, mapred-site.xml, and masters are exactly the same as in the distributed installation; if the replication factor in hdfs-site.xml is greater than 1 in the distributed configuration, change it to 1:
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
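For context, in this single-machine setup core-site.xml and mapred-site.xml simply point all daemons at the local machine. A minimal sketch, assuming the machine name hadoop11 and the commonly used ports 9000/9001 (adjust to your own host name and ports):

```xml
<!-- core-site.xml: HDFS entry point (host and port are assumptions) -->
<property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop11:9000</value>
</property>

<!-- mapred-site.xml: JobTracker address (host and port are assumptions) -->
<property>
    <name>mapred.job.tracker</name>
    <value>hadoop11:9001</value>
</property>
```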
The main difference lies in the slaves configuration: in a distributed installation several other machines act as DataNodes, while in single-machine mode this machine itself is the DataNode. So change the slaves configuration file to the local host name. For example, if the machine name is hadoop11, then:
[hadoop@hadoop11 ~]$ cat hadoop/conf/slaves
hadoop11
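The slaves file can also be populated directly from the local host name. A minimal sketch, assuming a relative `conf/slaves` path (in practice use the slaves file under your Hadoop conf directory):

```shell
# Write the local host name into the slaves file so that this machine
# also acts as the (only) DataNode/TaskTracker.
SLAVES_FILE=conf/slaves          # path is an assumption; adjust to your install
mkdir -p "$(dirname "$SLAVES_FILE")"
hostname > "$SLAVES_FILE"
cat "$SLAVES_FILE"               # should print this machine's host name
```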
When the configuration is complete, start Hadoop and verify the daemons with jps:
$ start-all.sh
$ jps
15556 Jps
15111 JobTracker
15258 TaskTracker
15014 SecondaryNameNode
14861 DataNode
14712 NameNode
Run the demo:
$ echo word1 word2 word2 word3 word3 word3 > words
$ cat words
word1 word2 word2 word3 word3 word3
$ hadoop dfsadmin -safemode leave
$ hadoop fs -copyFromLocal words /single/input/words
$ hadoop fs -cat /single/input/words
12/02/17 19:47:44 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
word1 word2 word2 word3 word3 word3
$ hadoop jar hadoop-0.21.0/hadoop-mapred-examples-0.21.0.jar wordcount /single/input /single/output
......
$ hadoop fs -ls /single/output
......
-rw-r--r-- 1 hadoop supergroup 0 2012-02-17 19:50 /single/output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 2012-02-17 19:50 /single/output/part-r-00000
$ hadoop fs -cat /single/output/part-r-00000
......
word1 1
word2 2
word3 3
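As a quick sanity check, the same counts can be reproduced locally with standard Unix tools (this does not touch HDFS; it only verifies what the wordcount output should look like for this input):

```shell
# Split the words onto separate lines, count duplicates with uniq -c,
# then swap the columns to match the "word count" format of part-r-00000.
counts=$(echo word1 word2 word2 word3 word3 word3 \
  | tr ' ' '\n' | sort | uniq -c | awk '{print $2, $1}')
echo "$counts"
# prints:
# word1 1
# word2 2
# word3 3
```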