Installing the JDK
yum install java-1.7.0-openjdk*

Check the installation:

java -version
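You'll need the JDK's install path later (for hadoop-env.sh and /etc/profile). A generic way to discover it, not specific to this tutorial:

readlink -f $(which java)
# prints something like /usr/lib/jvm/java-1.7.0-openjdk-.../jre/bin/java;
# drop the trailing /jre/bin/java to get the JAVA_HOME path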
Create a hadoop user and set it up so it can SSH to localhost without a password.
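The commands below assume the hadoop user already exists; if not, a minimal sketch to create it first (run as root):

useradd hadoop
passwd hadoop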
su - hadoop
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

cd /home/hadoop/.ssh
chmod 600 authorized_keys
Watch the permissions here: the .ssh directory must be 700 and authorized_keys must be 600, or sshd will ignore the key.
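A quick sketch to set both, in case they came out differently:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys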
Verify:
[hadoop@localhost .ssh]$ ssh localhost
Last login: ...

Logging in without a password prompt means the key setup worked.
Unpack Hadoop and install it under /opt/hadoop
tar -xzvf hadoop-2.6.0.tar.gz
mv -i /home/erik/hadoop-2.6.0 /opt/hadoop
chown -R hadoop /opt/hadoop
The files to modify are hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml.
cd /opt/hadoop/etc/hadoop
Set the Java environment variable in hadoop-env.sh. Change it to the absolute path, since referencing the shell's ${JAVA_HOME} seems not to take effect:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
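The directories referenced in core-site.xml and hdfs-site.xml don't exist yet. Hadoop will usually create them itself, but making them up front avoids permission surprises; a sketch assuming the hadoop user runs the daemons:

mkdir -p /opt/hadoop/tmp /opt/hadoop/dfs/name /opt/hadoop/dfs/data
chown -R hadoop /opt/hadoop/tmp /opt/hadoop/dfs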
yarn-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
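Note: the Hadoop 2.6.0 tarball ships only mapred-site.xml.template; if mapred-site.xml is missing, copy the template before editing:

cp mapred-site.xml.template mapred-site.xml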
Configure the environment variables: edit /etc/profile and append the following at the end. The settings only take effect after you re-source the file (or log back in)!!!
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.95.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/bin
export HADOOP_INSTALL=/opt/hadoop
export PATH=${HADOOP_INSTALL}/bin:${HADOOP_INSTALL}/sbin:${PATH}
export HADOOP_MAPRED_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_HOME=${HADOOP_INSTALL}
export HADOOP_HDFS_HOME=${HADOOP_INSTALL}
export YARN_HOME=${HADOOP_INSTALL}
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_INSTALL}/lib:${HADOOP_INSTALL}/lib/native"
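To apply the changes and sanity-check the result (hadoop version is the standard CLI check; its exact output varies by build):

source /etc/profile
hadoop version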
And now, time to witness the miracle.
cd /opt/hadoop/
Format HDFS
bin/hdfs namenode -format
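If the format succeeds, the tail of the log should contain a line along these lines (the directory matches dfs.namenode.name.dir from hdfs-site.xml above):

Storage directory /opt/hadoop/dfs/name has been successfully formatted.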
Start HDFS and YARN
sbin/start-dfs.sh
sbin/start-yarn.sh
If all goes well, you'll see something like:
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/opt/hadoop-2.6.0/logs/hadoop-hadoop-namenode-.out
localhost: starting datanode, logging to /usr/opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/opt/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-.out
Open http://127.0.0.1:50070 in a browser; if the Hadoop NameNode web page comes up, the installation succeeded.
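Another quick check is jps, the JDK's Java process lister. With everything running, the output should include roughly the daemons below (the PIDs that normally prefix each name are omitted here):

$ jps
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps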