Standalone installation is mainly used for debugging program logic. The installation steps are basically the same as for distributed installation: environment variables, the main Hadoop configuration files, and SSH configuration. The main difference is that the slaves configuration needs to be modified. In addition, if dfs.replication was set greater than 1 in the distributed installation, it must be changed to 1, because there is only one datanode.
For distributed installation, see:
http://acooly.iteye.com/blog/1179828
In a standalone installation, a single machine is used: it is the namenode and JobTracker as well as the datanode and TaskTracker, and of course the SecondaryNameNode.
The main configuration files core-site.xml, hdfs-site.xml, mapred-site.xml, and masters are exactly the same as in the distributed installation. If the number of copies in hdfs-site.xml was defined greater than 1 in the distributed configuration, change it to 1.
The code is as follows:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
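For context, the whole file can be generated in one step; a minimal sketch (written to the current directory here — in a real install it is hadoop/conf/hdfs-site.xml, and it keeps whatever other properties your distributed config already has):

```shell
# Sketch: write a minimal hdfs-site.xml with replication set to 1.
# In a real install this file lives at hadoop/conf/hdfs-site.xml.
cat > hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- only one datanode in standalone mode, so replication must be 1 -->
    <value>1</value>
  </property>
</configuration>
EOF
```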
The main difference lies in the slaves configuration. In a distributed installation, several other machines are used as datanodes; in standalone mode, the local machine is the datanode, so the slaves configuration file is changed to the local machine's hostname. For example, if the local machine is named hadoop11, then:
[hadoop@hadoop11 ~]$ cat hadoop/conf/slaves
hadoop11
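Rather than editing the file by hand, the hostname can be written into slaves directly (a sketch; the mkdir -p only makes the snippet self-contained — in a real install hadoop/conf already exists):

```shell
# Sketch: point the slaves file at the local machine only.
mkdir -p hadoop/conf           # already present in a real installation
hostname > hadoop/conf/slaves  # e.g. hadoop11
cat hadoop/conf/slaves
```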
After the configuration is complete, start Hadoop. The code is as follows:
$ start-all.sh
$ jps
15556 Jps
15111 JobTracker
15258 TaskTracker
15014 SecondaryNameNode
14861 DataNode
14712 NameNode
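The jps listing above can be checked mechanically. A sketch that greps a captured listing for each expected daemon (the sample listing is the one shown above; in practice you would capture it with jps_out=$(jps)):

```shell
# Sketch: check each expected daemon against a captured jps listing.
jps_out="15556 Jps
15111 JobTracker
15258 TaskTracker
15014 SecondaryNameNode
14861 DataNode
14712 NameNode"   # in practice: jps_out=$(jps)
for d in NameNode SecondaryNameNode DataNode JobTracker TaskTracker; do
  if echo "$jps_out" | grep -q "$d"; then
    echo "$d running"
  else
    echo "$d NOT running"
  fi
done
```

All five lines should report running; a missing DataNode usually points back at the slaves configuration.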
Run the demo:
$ echo word1 word2 word2 word3 word3 word3 > words
$ cat words
word1 word2 word2 word3 word3 word3
$ hadoop dfsadmin -safemode leave
$ hadoop fs -copyFromLocal words /single/input/words
$ hadoop fs -cat /single/input/words
12/02/17 19:47:44 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
word1 word2 word2 word3 word3 word3
$ hadoop jar hadoop-0.21.0/hadoop-mapred-examples-0.21.0.jar wordcount /single/input /single/output
......
$ hadoop fs -ls /single/output
......
-rw-r--r-- 1 hadoop supergroup 0 2012-02-17 19:50 /single/output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 24 2012-02-17 19:50 /single/output/part-r-00000
$ hadoop fs -cat /single/output/part-r-00000
......
word1 1
word2 2
word3 3
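The part-r-00000 contents are just word counts, so the same result can be reproduced with a plain shell pipeline as a sanity check on what the wordcount job computed:

```shell
# Recreate the input and count each word locally (mirrors the wordcount result).
echo "word1 word2 word2 word3 word3 word3" > words
tr ' ' '\n' < words | sort | uniq -c | awk '{print $2, $1}'
# prints:
# word1 1
# word2 2
# word3 3
```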