I. Basic environment configuration
I use three virtual machines running CentOS 7. The software versions are Hadoop 2.6.5, Hive 2.1.1 (both downloadable from the official websites), JDK 7, Scala 2.11.0, and ZooKeeper 3.4.5.
II. Installation tutorial
(1) Installation of JDK
Download the JDK from the official website, transfer it to the Linux system via FTP, and unpack it. Then configure the environment variables; my configuration is as follows:
JAVA_HOME=/usr/java/jdk1.7.0_80
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
source /etc/profile   // make the variables take effect
Run java -version and javac -version to check that the installation succeeded.
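The profile additions above can be rehearsed locally before touching /etc/profile; in this sketch, /tmp/java-env.sh is only a stand-in for /etc/profile:

```shell
# Demo: write the JDK variables to a profile fragment and source it.
# /tmp/java-env.sh stands in for /etc/profile for this sketch only.
cat > /tmp/java-env.sh <<'EOF'
JAVA_HOME=/usr/java/jdk1.7.0_80
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
EOF
. /tmp/java-env.sh
echo "$JAVA_HOME"   # prints /usr/java/jdk1.7.0_80
```

Once this works, the same lines go at the end of /etc/profile, followed by source /etc/profile.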
(2) Configure SSH password-free login
ssh-keygen
Append id_rsa.pub to the authorized keys file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Modify the permissions of the authorized_keys file:
chmod 600 ~/.ssh/authorized_keys
Set up the SSH configuration:
vi /etc/ssh/sshd_config
// Change the following three items to the configuration below
RSAAuthentication yes   # enable RSA authentication
PubkeyAuthentication yes   # enable public/private key pair authentication
AuthorizedKeysFile .ssh/authorized_keys   # public key file path (the file generated above)
Restart the SSH service:
service sshd restart
Copy the public key to all the slave machines:
scp ~/.ssh/id_rsa.pub <remote username>@<remote server IP>:~/
scp ~/.ssh/id_rsa.pub root@192.168.1.125:~/
scp ~/.ssh/id_rsa.pub root@192.168.1.124:~/
Create the .ssh folder on the slave host:
mkdir ~/.ssh
// Modify permissions
chmod 700 ~/.ssh
Append the key to the authorization file authorized_keys:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
// Modify permissions
chmod 600 ~/.ssh/authorized_keys
Delete the now-useless .pub file:
rm ~/id_rsa.pub
Test from the master host:
ssh 192.168.1.125
ssh 192.168.1.124
// If you can log in to the slave1 and slave2 hosts without a password, the configuration succeeded
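The key-generation and permission steps above can be rehearsed locally in a throwaway directory before touching the real hosts (the mktemp path exists only for this demo; no remote machines are involved):

```shell
# Demo of the key steps: generate a key pair non-interactively, append the
# public key to authorized_keys, and set the permissions sshd requires.
tmp=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmp/id_rsa" -q      # no passphrase, no prompts
mkdir -p "$tmp/.ssh"
chmod 700 "$tmp/.ssh"
cat "$tmp/id_rsa.pub" >> "$tmp/.ssh/authorized_keys"
chmod 600 "$tmp/.ssh/authorized_keys"
stat -c %a "$tmp/.ssh/authorized_keys"           # prints 600
```

sshd silently ignores authorized_keys when the file or ~/.ssh is writable by others, which is why the 700/600 permissions matter.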
(3) Hadoop installation
Unpack the Hadoop archive.
Create the tmp folder under /usr/hadoop:
cd /usr/hadoop
mkdir tmp
Setting environment variables
export HADOOP_HOME=/usr/hadoop/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
Set the Java environment variable in hadoop-env.sh and yarn-env.sh:
cd /usr/hadoop/hadoop-2.6.5/etc/hadoop
vi hadoop-env.sh
// Modify JAVA_HOME
export JAVA_HOME=/usr/java/jdk1.7.0_80
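Instead of editing hadoop-env.sh interactively in vi, the JAVA_HOME change can be scripted with sed; this sketch runs against a one-line stand-in file in /tmp rather than the real hadoop-env.sh:

```shell
# Demo: replace the JAVA_HOME line with sed instead of editing in vi.
# /tmp/hadoop-env.sh is a one-line stand-in for the real file.
echo 'export JAVA_HOME=${JAVA_HOME}' > /tmp/hadoop-env.sh
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_80|' /tmp/hadoop-env.sh
grep '^export JAVA_HOME' /tmp/hadoop-env.sh   # prints export JAVA_HOME=/usr/java/jdk1.7.0_80
```

Scripting the edit this way makes it repeatable across all three hosts.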
Configure the core-site.xml file:
vi core-site.xml
// Modify the file contents to the following
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://Master.Hadoop:9000</value>
  </property>
</configuration>
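The same file can be generated non-interactively with a heredoc, which is handy when configuring several hosts; the /tmp target directory here exists only for this sketch:

```shell
# Demo: write core-site.xml with the tutorial's values and sanity-check the NameNode URI.
mkdir -p /tmp/hadoop-conf          # stand-in for $HADOOP_HOME/etc/hadoop
cat > /tmp/hadoop-conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://Master.Hadoop:9000</value>
  </property>
</configuration>
EOF
grep -o 'hdfs://[^<]*' /tmp/hadoop-conf/core-site.xml   # prints hdfs://Master.Hadoop:9000
```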
Configure the hdfs-site.xml file:
vi hdfs-site.xml
// Modify the file contents to the following
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master.hadoop:50090</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
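dfs.namenode.name.dir and dfs.datanode.data.dir point at local directories that should exist before the NameNode is formatted; a quick sketch, using a /tmp base directory in place of /usr/hadoop for the demo:

```shell
# Demo: pre-create the local storage directories that hdfs-site.xml points at.
base=/tmp/hadoop-demo        # stands in for /usr/hadoop
mkdir -p "$base/dfs/name" "$base/dfs/data"
ls "$base/dfs"               # lists the two directories: data and name
```

On the real cluster, run the equivalent mkdir -p under /usr/hadoop on every node before starting HDFS.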
Configure the mapred-site.xml file:
vi mapred-site.xml
// Modify the file contents to the following
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master.hadoop:50030</value>
</property>
<