Original article; please credit the source when reposting: http://blog.csdn.net/lsttoy/article/details/52318232
You can also download all the materials mentioned in this article from GitHub; everything is open source :) https://github.com/lekko1988/hadoop.git
The general idea: prepare a master server and slave servers, configure the master so it can SSH into the slaves without a password, unpack and install the JDK, unpack and install Hadoop, then configure the master-slave relationships for HDFS, MapReduce, and so on.
1. Environment: 3 machines running 64-bit CentOS 7 (Hadoop 2.7 requires 64-bit Linux). The CentOS 7 Minimal ISO is only about 600 MB, and the operating system installs in a little over ten minutes.
Master 192.168.0.182
Slave1 192.168.0.183
Slave2 192.168.0.184
2. Passwordless SSH login. Hadoop needs to SSH into each node to operate on it. Working as the root user, generate a key pair on every server, then merge the public keys into authorized_keys.
(1) CentOS does not enable public-key SSH login by default. In /etc/ssh/sshd_config, uncomment the following 2 lines; do this on every server:
#RSAAuthentication yes
#PubkeyAuthentication yes
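Assuming a stock sshd_config, the uncommenting can be scripted with sed. The sketch below works on a throwaway copy of the file so it is safe to try anywhere; on a real node, point it at /etc/ssh/sshd_config instead.

```shell
# Demonstrate the edit on a scratch copy; on a real node use /etc/ssh/sshd_config.
cfg=$(mktemp)
printf '#RSAAuthentication yes\n#PubkeyAuthentication yes\n' > "$cfg"

# Strip the leading '#' from exactly these two directives.
sed -i -e 's/^#RSAAuthentication yes/RSAAuthentication yes/' \
       -e 's/^#PubkeyAuthentication yes/PubkeyAuthentication yes/' "$cfg"

cat "$cfg"
# RSAAuthentication yes
# PubkeyAuthentication yes
```

After editing the real file, restart the SSH service (e.g. systemctl restart sshd) so the change takes effect.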
(2) Enter the command ssh-keygen -t rsa to generate a key pair. Do not enter a passphrase; just press Enter at every prompt. This creates the /root/.ssh folder. Do this on every server.
(3) Merge the public keys into the authorized_keys file. On the master server, enter the /root/.ssh directory and merge them with the following commands:
cat id_rsa.pub >> authorized_keys
ssh root@192.168.0.183 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh root@192.168.0.184 cat ~/.ssh/id_rsa.pub >> authorized_keys
(4) Copy the master server's authorized_keys and known_hosts to the /root/.ssh directory on each slave server.
(5) Done. Now ssh root@192.168.0.183 and ssh root@192.168.0.184 no longer prompt for a password.
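The merge in steps (3) and (4) can be sketched locally. The key contents below are placeholders, not real keys; on the real cluster the second and third files arrive over SSH from 192.168.0.183 and 192.168.0.184.

```shell
workdir=$(mktemp -d)

# Stand-ins for each node's /root/.ssh/id_rsa.pub (hypothetical contents).
printf 'ssh-rsa AAAA...master root@master\n' > "$workdir/master.pub"
printf 'ssh-rsa AAAA...slave1 root@slave1\n' > "$workdir/slave1.pub"
printf 'ssh-rsa AAAA...slave2 root@slave2\n' > "$workdir/slave2.pub"

# Step (3): append every node's public key to one authorized_keys file.
cat "$workdir"/*.pub >> "$workdir/authorized_keys"

# Step (4) would then copy authorized_keys (and known_hosts) to each slave's /root/.ssh.
wc -l < "$workdir/authorized_keys"   # 3
```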
3. Install the JDK. Hadoop 2.7 requires JDK 7. Since my CentOS is a minimal install, it has no OpenJDK; just unpack the downloaded JDK and configure the environment variables.
(1) Download "jdk-7u79-linux-x64.gz" and place it in the /home/java directory.
(2) Unpack it; enter the command: tar -zxvf jdk-7u79-linux-x64.gz
(3) Edit /etc/profile and append:
export JAVA_HOME=/home/java/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
(4) Make the configuration take effect; enter the command: source /etc/profile
(5) Enter the command java -version; if it prints the JDK version, this step is complete.
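A quick sanity check that the PATH addition works; the JAVA_HOME value assumes the unpack location from step (2), so adjust it if your JDK lives elsewhere.

```shell
# Assumed install dir from step (2); adjust if the JDK lives elsewhere.
export JAVA_HOME=/home/java/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

# Confirm the JDK bin directory is now on PATH.
case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;
    *)                    echo "PATH missing JDK" ;;
esac
# → PATH ok
```

On the real machine, java -version should then report 1.7.0_79.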
4. Install Hadoop 2.7. Unpack it on the master server only; it is copied to the slave servers later.
(1) Download "hadoop-2.7.0.tar.gz" and place it in the /home/hadoop directory.
(2) Unpack it; enter the command: tar -xzvf hadoop-2.7.0.tar.gz
(3) In the /home/hadoop directory, create the folders used for data storage: tmp, dfs, dfs/name, dfs/data (these paths must match the configuration files below).
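The directory layout from step (3) has to line up with the file: URIs used in core-site.xml and hdfs-site.xml below. A side-effect-free sketch, rooted in a temp dir instead of /home/hadoop:

```shell
# On the real master, set HADOOP_BASE=/home/hadoop instead of a temp dir.
HADOOP_BASE=$(mktemp -d)

# tmp backs hadoop.tmp.dir; dfs/name and dfs/data back the
# namenode and datanode storage directories in hdfs-site.xml.
mkdir -p "$HADOOP_BASE/tmp" "$HADOOP_BASE/dfs/name" "$HADOOP_BASE/dfs/data"

find "$HADOOP_BASE" -type d | sort
```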
5. Configure core-site.xml in the /home/hadoop/hadoop-2.7.0/etc/hadoop directory:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.182:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
6. Configure hdfs-site.xml in the /home/hadoop/hadoop-2.7.0/etc/hadoop directory:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.0.182:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
7. Configure mapred-site.xml in the /home/hadoop/hadoop-2.7.0/etc/hadoop directory:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.0.182:10020</value>
    </property>
<property>