1. Preparing the Linux environment
1.1 Shutting down the firewall
#check the firewall status
service iptables status
#stop the firewall
service iptables stop
#check whether the firewall starts on boot
chkconfig iptables --list
#disable the firewall on boot
chkconfig iptables off
1.2 Modifying sudo
su root
vim /etc/sudoers
Add execute permission for the hadoop user:
hadoop ALL=(ALL) ALL
To disable the Linux server's graphical interface:
vi /etc/inittab
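On SysV-init systems like this one (CentOS 6 era, judging from the iptables/chkconfig commands above), disabling the graphical interface means changing the default runlevel in /etc/inittab from 5 (graphical) to 3 (multi-user text mode):

```
# in /etc/inittab, change
id:5:initdefault:
# to
id:3:initdefault:
```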
1.3 Restarting Linux
reboot
2. Installing the Java JDK
2.1 Press Alt+P to open an SFTP window, then upload: put d:\xxx\yy\ll\jdk-7u_65-i585.tar.gz
2.2 Unpacking the JDK
#create the directory
mkdir /home/hadoop/app
#extract
tar -zxvf jdk-7u55-linux-i586.tar.gz -C /home/hadoop/app
2.3 Adding Java to an environment variable
vim /etc/profile
#append at the end of the file
export JAVA_HOME=/home/hadoop/app/jdk-7u_65-i585
export PATH=$PATH:$JAVA_HOME/bin
#reload the configuration
source /etc/profile
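To confirm the two export lines took effect, a quick sketch (paths taken from the steps above) checks that PATH now contains the JDK's bin directory:

```shell
#!/bin/bash
# Re-create the two lines appended to /etc/profile and confirm the result.
export JAVA_HOME=/home/hadoop/app/jdk-7u_65-i585
export PATH=$PATH:$JAVA_HOME/bin
# Print the JDK bin entry if it is now on the PATH:
echo "$PATH" | grep -o "$JAVA_HOME/bin"
```

After `source /etc/profile`, `java -version` should also resolve without a full path.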
3. Installing Hadoop
First upload the Hadoop installation package to the server, into /home/hadoop/
Note: the Hadoop 2.x configuration files are in $HADOOP_HOME/etc/hadoop
Pseudo-distributed mode requires modifying 5 configuration files
3.1 Configuring Hadoop
The first: hadoop-env.sh
vim hadoop-env.sh
#line 27
export JAVA_HOME=/usr/java/jdk1.7.0_65    #the JDK installation directory
The second: core-site.xml
<!-- Specify the file system schema (URI) used by Hadoop, i.e. the address of the HDFS master (NameNode) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://lanpeng:9000</value>
</property>
<!-- Specifies the storage directory where the Hadoop runtime produces files-->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hdpdata</value>
</property>
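Note that in each of these *-site.xml files, the property blocks must be nested inside the file's top-level configuration element. Assembled from the two snippets above, a complete core-site.xml looks like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://lanpeng:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hdpdata</value>
  </property>
</configuration>
```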
The third: hdfs-site.xml
<!-- Specify the number of HDFS replicas; the default is 3 -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>192.168.1.152:50090</value>
</property>
The fourth: mapred-site.xml (created from the template)
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<!-- Specify that MapReduce runs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
The fifth: yarn-site.xml
<!-- Specify the address of the YARN master (ResourceManager) -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>weekend-1206-01</value>
</property>
<!-- How the reducer fetches data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
3.2 Adding Hadoop to an environment variable
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_65
export HADOOP_HOME=/itcast/hadoop-2.6.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
3.3 Formatting Namenode (initialization of Namenode)
hdfs namenode -format (or: hadoop namenode -format)
3.4 Starting Hadoop
Start HDFS first:
sbin/start-dfs.sh
Then start YARN:
sbin/start-yarn.sh
3.5 Verifying whether the startup was successful
Verify with the jps command:
27408 NameNode
28218 Jps
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode
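The check can be scripted; the sketch below confirms the five daemons from the listing above appear in jps output. JPS_OUTPUT is a hypothetical override added here so the logic can be exercised without a live cluster:

```shell
#!/bin/bash
# Confirm the five pseudo-distributed daemons are present in jps output.
# JPS_OUTPUT may be pre-set for testing; by default the real jps is called.
JPS_OUTPUT="${JPS_OUTPUT:-$(jps 2>/dev/null)}"
for proc in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$JPS_OUTPUT" | grep -qw "$proc"; then
    echo "$proc: running"
  else
    echo "$proc: missing"
  fi
done
```

`grep -w` matches whole words only, so the NameNode check does not falsely match the SecondaryNameNode line.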
http://192.168.1.101:50070 (HDFS management interface)
http://192.168.1.101:8088 (MR management interface)
4. Configuring SSH passwordless login
#generate an ssh key for passwordless login
#go to my home directory
cd ~/.ssh
ssh-keygen -t rsa (press Enter four times)
After executing this command, two files are generated: id_rsa (private key) and id_rsa.pub (public key)
Copy the public key to the target machine for passwordless login:
ssh-copy-id localhost
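ssh-copy-id is shorthand for appending the public key to the target machine's authorized_keys file. A manual equivalent for the localhost case is sketched below; KEY_DIR is a hypothetical variable added so the sketch can be pointed at a test directory instead of the real ~/.ssh:

```shell
#!/bin/bash
# Manual equivalent of `ssh-copy-id localhost`: append the public key to
# authorized_keys and tighten permissions so sshd will accept the file.
KEY_DIR="${KEY_DIR:-$HOME/.ssh}"
mkdir -p "$KEY_DIR"
chmod 700 "$KEY_DIR"
if [ -f "$KEY_DIR/id_rsa.pub" ]; then
  cat "$KEY_DIR/id_rsa.pub" >> "$KEY_DIR/authorized_keys"
  chmod 600 "$KEY_DIR/authorized_keys"
fi
```

sshd refuses keys when the .ssh directory or authorized_keys file is group/world writable, which is why the chmod steps matter.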
Problems
1. The Linux version may not match the Hadoop native libraries; the program will issue a warning, and Hadoop needs to be recompiled.
2. The server time is not synchronized.
3. DataNode folder permission issues can prevent the NameNode from accessing the DataNode directory.