The previous article installed Hadoop in Local (Standalone) mode purely as a practice exercise. With the standalone version complete, this article walks through installing Pseudo-Distributed Mode. Pseudo-distributed mode simulates the full functionality of Hadoop on a single physical machine, including SSH access, HDFS formatting, MapReduce execution, and YARN resource management. The pseudo-distributed installation is a continuation of the standalone installation, and parts of it depend on the standalone setup.
1. First, confirm whether SSH is installed on RedHat 6.4.
[root@localhost ~]# rpm -qa | grep ssh
openssh-askpass-5.3p1-81.el6.x86_64
trilead-ssh2-213-6.2.el6.noarch
openssh-clients-5.3p1-81.el6.x86_64
ksshaskpass-0.5.1-4.1.el6.x86_64
openssh-server-5.3p1-81.el6.x86_64
libssh2-1.2.2-7.el6_2.3.x86_64
openssh-5.3p1-81.el6.x86_64
2. Confirm whether rsync is installed.
[root@localhost ~]# rpm -qa | grep rsync
rsync-3.0.6-9.el6.x86_64
3. Execute the following command to test whether SSH can log in without a password.
[hadoop@localhost ~]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 05:9e:ac:46:24:aa:c1:45:be:f6:55:83:10:6d:45:6d.
Are you sure you want to continue connecting (yes/no)?
Note: If you are prompted for a password each time, the public and private keys have not been configured yet.
4. Configure SSH: generate the public key and private key.
[hadoop@localhost ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
d4:fc:32:6f:5c:d6:5a:47:89:8a:9d:79:d1:b5:51:14 hadoop@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
|               e*|
|            o O =|
|         . o O +.|
|        . o.+.  O|
|       S.o=.  o +|
|        =.O    o.|
|         + .     |
|          .      |
|                 |
+-----------------+
Execute the following command to append the public key to the authorized_keys file.
[hadoop@localhost ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Execute the following command to change the mode of the public key file.
[hadoop@localhost .ssh]$ chmod 644 authorized_keys
Here the official documentation, which is written for Ubuntu, instructs you to execute chmod 0660 ~/.ssh/authorized_keys, but on RedHat 6.4 you must run chmod 644 authorized_keys instead; otherwise there will be an error.
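If passwordless login still fails after these steps, the checks below may help. This is a quick sketch based on sshd's usual StrictModes requirements, not part of the original walkthrough; it assumes the hadoop user and localhost host used throughout this article.

[hadoop@localhost ~]$ ls -l ~/.ssh/authorized_keys   # should show -rw-r--r-- (644)
[hadoop@localhost ~]$ chmod 700 ~/.ssh               # sshd also rejects a group- or world-writable ~/.ssh
[hadoop@localhost ~]$ ssh localhost 'echo passwordless login ok'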
5. Set JAVA_HOME in the configuration file.
[hadoop@localhost ~]$ vi hadoop-2.7.2/etc/hadoop/hadoop-env.sh
# set to the root of your Java installation
export JAVA_HOME=/usr/java/jdk1.8.0_92
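A quick sanity check that this path actually points at a JDK can save a failed daemon start later (an illustrative step, not in the original; the path is the one set above):

[hadoop@localhost ~]$ /usr/java/jdk1.8.0_92/bin/java -version   # should print the 1.8.0_92 version banner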
6. Configure core-site.xml.
vi hadoop-2.7.2/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
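fs.defaultFS sets the default filesystem URI, so bare paths in hdfs dfs commands resolve against hdfs://localhost:9000. As an illustration (not from the original post, and it only works once HDFS is started in step 9), the following two commands are equivalent:

hdfs dfs -ls /user
hdfs dfs -ls hdfs://localhost:9000/user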
7. Configure hdfs-site.xml.
vi hadoop-2.7.2/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
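dfs.replication is set to 1 because pseudo-distributed mode runs a single DataNode, so the default of 3 replicas could never be satisfied. A quick way to confirm the value Hadoop picks up (an illustrative check, not in the original):

[hadoop@localhost hadoop-2.7.2]$ bin/hdfs getconf -confKey dfs.replication
1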
8. Format the NameNode.
[hadoop@localhost hadoop-2.7.2]$ bin/hdfs namenode -format
16/03/12 19:21:50 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.2
......
16/03/12 19:21:55 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
16/03/12 19:21:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/03/12 19:21:56 INFO util.ExitUtil: Exiting with status 0
16/03/12 19:21:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
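Note from the log that the NameNode metadata was written under /tmp/hadoop-hadoop/dfs/name, the default location when hadoop.tmp.dir is not configured. Since /tmp may be cleared on reboot (which would force a re-format), it is worth knowing where it lives; a quick look (an illustrative check, not in the original):

[hadoop@localhost ~]$ ls /tmp/hadoop-hadoop/dfs/name/current
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION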
9. Start HDFS.
[hadoop@localhost sbin]$ start-dfs.sh
16/03/12 20:04:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.2/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
16/03/12 20:04:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
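Whether the three HDFS daemons actually came up can be verified with the JDK's jps tool (an illustrative check, not part of the original; the process IDs will differ):

[hadoop@localhost sbin]$ jps
3123 NameNode
3247 DataNode
3465 SecondaryNameNode
3598 Jps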
10. Confirm that the HDFS web page can be accessed successfully.
[Screenshot: http-50070-part.png, the HDFS NameNode web page on port 50070]
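In Hadoop 2.7.2 the NameNode web UI listens on port 50070 by default, so the page above is http://localhost:50070/. If no browser is available, a headless check from the shell works too (illustrative, not from the original):

[hadoop@localhost ~]$ curl -s http://localhost:50070/ | head -n 3   # should return the HTML of the NameNode UI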
11. Import local files into HDFS and test the MapReduce example program.
[hadoop@localhost sbin]$ hdfs dfs -mkdir /user
16/03/12 20:46:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@localhost hadoop-2.7.2]$ hdfs dfs -put ./etc/hadoop/ /user
[hadoop@localhost hadoop-2.7.2]$ hadoop jar ~/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep /user/hadoop output 'de[a-z.]+'
Note: the following command views the output directly on HDFS.
[hadoop@localhost sbin]$ hdfs dfs -cat /user/hadoop/output/*
Note: the following command copies the output from HDFS to a local folder.
[hadoop@localhost output]$ hdfs dfs -get /user/hadoop/output output
Note: the following command views the contents of the local folder.
[hadoop@localhost ~]$ cat output/*
...
der
default
...
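The output directory written by the job follows the standard MapReduce layout: an empty _SUCCESS marker plus one part file per reducer. A hypothetical listing (not from the original post; timestamps and sizes will differ):

[hadoop@localhost ~]$ hdfs dfs -ls /user/hadoop/output
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2016-03-12 20:50 /user/hadoop/output/_SUCCESS
-rw-r--r--   1 hadoop supergroup         30 2016-03-12 20:50 /user/hadoop/output/part-r-00000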
Note: if the statement above that creates the "/user" directory fails, it may be because HDFS still has safe mode enabled; in that case you need to execute the following command first: [hadoop@localhost sbin]$ hadoop dfsadmin -safemode leave
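Before forcing safe mode off, you can confirm that it really is the cause (an illustrative check, not in the original):

[hadoop@localhost sbin]$ hdfs dfsadmin -safemode get
Safe mode is ON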
12. Stop HDFS.
[hadoop@localhost sbin]$ stop-dfs.sh
16/03/12 20:09:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/03/12 20:09:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13. Enable YARN on a single node: configure mapred-site.xml.
The Hadoop 2.7.2 release does not ship a mapred-site.xml file, so copy it directly from the template.
[hadoop@localhost sbin]$ cp mapred-site.xml.template mapred-site.xml
vi etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
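Setting mapreduce.framework.name to yarn makes submitted jobs run on the YARN ResourceManager instead of the local job runner. Also note that the prompt above sits in the sbin directory while the template actually lives in etc/hadoop, so running the sequence from the Hadoop home directory may be less error-prone (a sketch using the paths assumed throughout this walkthrough):

cd ~/hadoop-2.7.2
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
vi etc/hadoop/mapred-site.xml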
14. Configure yarn-site.xml.
vi etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
15. Start YARN.
[hadoop@localhost sbin]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.2/logs/yarn-hadoop-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /home/hadoop/hadoop-2.7.2/logs/yarn-hadoop-nodemanager-localhost.localdomain.out
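Once both daemons are up, the single NodeManager should have registered with the ResourceManager, which can be confirmed from the shell (an illustrative check, not in the original; the node ID and ports will vary):

[hadoop@localhost sbin]$ yarn node -list
Total Nodes:1
         Node-Id      Node-State  Node-Http-Address  Number-of-Running-Containers
localhost:...         RUNNING     localhost:8042     0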
16. Access YARN's web page.
[Screenshot: http-8088-part.png, the YARN ResourceManager web page on port 8088]
17. Stop YARN.
[hadoop@localhost sbin]$ stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
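After stop-dfs.sh and stop-yarn.sh, jps should list no remaining Hadoop daemons (illustrative; the PID will differ):

[hadoop@localhost sbin]$ jps
4920 Jps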
This completes the pseudo-distributed Hadoop installation. The entire process follows the official Hadoop documentation; if you run into other problems along the way, they are mostly caused by the operating system environment, such as installed system software or network configuration.
This article is from the "Shen Jinqun" blog; reprinting is declined!