Installing a Hadoop 2.5.0 Fully Distributed Cluster on CentOS 7
1. System environment description
CentOS 7.0 x64
192.168.1.7 master
192.168.1.8 slave1
192.168.1.9 slave2
192.168.1.10 slave3
2. Preparations before installation
2.1 Disable the firewall
# systemctl status firewalld.service -- check the firewall status
# systemctl stop firewalld.service -- stop the firewall
# systemctl disable firewalld.service -- permanently disable the firewall
2.2 Check whether ssh is installed; if not, install it
# systemctl status sshd.service -- check the sshd status
# yum install openssh-server openssh-clients
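If sshd has just been installed, make sure it is running and comes up on boot:
# systemctl start sshd.service
# systemctl enable sshd.service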
2.3 Install vim
# yum -y install vim
2.4 Set a static IP address
# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
BOOTPROTO="static"
ONBOOT="yes"
IPADDR0="192.168.1.7"
PREFIX0="24"
GATEWAY0="192.168.1.1"
DNS1="61.147.37.1"
DNS2="101.226.4.6"
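Restart the network service so the static address takes effect, then verify it (the interface name matches the ifcfg file edited above):
# systemctl restart network
# ip addr show eno16777736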
2.5 Modify the host name
# vim /etc/sysconfig/network
HOSTNAME=master
# vim /etc/hosts
192.168.1.7 master
192.168.1.8 slave1
192.168.1.9 slave2
192.168.1.10 slave3
# hostnamectl set-hostname master -- on CentOS 7 the old method of editing /etc/sysconfig/network no longer takes effect, so use hostnamectl instead
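Verify the new name:
# hostnamectl status -- should report "Static hostname: master"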
2.6 Create a hadoop user
# useradd hadoop -- create a user named hadoop
# passwd hadoop -- set a password for the hadoop user
2.7 Configure passwordless ssh login
---- Perform the following operations on the master
# su hadoop -- switch to the hadoop user
$ cd ~ -- go to the user's home directory
$ ssh-keygen -t rsa -P '' -- generate a key pair: /home/hadoop/.ssh/id_rsa and /home/hadoop/.ssh/id_rsa.pub
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys -- append id_rsa.pub to the authorized keys
$ chmod 600 ~/.ssh/authorized_keys -- fix the file permissions
$ su -- switch to the root user
# vim /etc/ssh/sshd_config -- edit the ssh server configuration
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key pair authentication
AuthorizedKeysFile .ssh/authorized_keys # public key file path
# su hadoop -- switch back to the hadoop user
$ scp ~/.ssh/id_rsa.pub hadoop@192.168.1.8:~/ -- copy the public key to every slave machine
---- Perform the following operations on slave1 (repeat on every slave)
# su hadoop -- switch to the hadoop user
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys -- append the copied key to authorized_keys
$ chmod 600 ~/.ssh/authorized_keys -- fix the file permissions
$ su -- switch back to the root user
# vim /etc/ssh/sshd_config -- edit the ssh server configuration
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key pair authentication
AuthorizedKeysFile .ssh/authorized_keys # public key file path
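After editing sshd_config, restart sshd on each machine and test the passwordless login from the master:
# systemctl restart sshd.service
# su hadoop
$ ssh hadoop@192.168.1.8 -- should log in without prompting for a password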
3. Install the necessary software
3.1 Install the JDK
# rpm -ivh jdk-7u67-linux-x64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile -- make the changes take effect
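Verify that the JDK is picked up from the new PATH:
# java -version -- should report version 1.7.0_67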
3.2 Install other required software
# yum install maven svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
3.3 Install ant
# tar zxvf apache-ant-1.9.4-bin.tar.gz
# vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.4
export PATH=$PATH:$ANT_HOME/bin
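After sourcing /etc/profile again, confirm that ant resolves:
# source /etc/profile
# ant -version -- should report Apache Ant version 1.9.4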
3.4 Install findbugs
# tar zxvf findbugs-3.0.0.tar.gz
# vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.0
export PATH=$PATH:$FINDBUGS_HOME/bin
3.5 Install protobuf
# tar zxvf protobuf-2.5.0.tar.gz -- must be version 2.5.0, otherwise the hadoop build fails
# cd protobuf-2.5.0
# ./configure --prefix=/usr/local
# make && make install
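Verify the installed version before building hadoop:
# protoc --version -- should print libprotoc 2.5.0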
4. Compile the hadoop source code
# tar zxvf hadoop-2.5.0-src.tar.gz
# cd hadoop-2.5.0-src
# mvn package -Pdist,native,docs -DskipTests -Dtar
4.1 Configure the maven central repository (switch to the oschina mirror for faster access)
# vim /usr/share/maven/conf/settings.xml
Add the mirror inside the <mirrors> section and the profile inside the <profiles> section:
<mirror>
    <id>nexus-osc</id>
    <mirrorOf>*</mirrorOf>
    <name>Nexus osc</name>
    <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>

<profile>
    <id>jdk17</id>
    <activation>
        <activeByDefault>true</activeByDefault>
        <jdk>1.7</jdk>
    </activation>
    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <maven.compiler.compilerVersion>1.7</maven.compiler.compilerVersion>
    </properties>
    <repositories>
        <repository>
            <id>nexus</id>
            <name>local private nexus</name>
            <url>http://maven.oschina.net/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>nexus</id>
            <name>local private nexus</name>
            <url>http://maven.oschina.net/content/groups/public/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</profile>
4.2 After compilation, the build output is in /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0
# ./bin/hadoop version
Hadoop 2.5.0
Subversion Unknown -r Unknown
Compiled by root on 2014-09-12T00:47Z
Compiled with protoc 2.5.0
From source with checksum 423dcd5a752eddd8e45ead6fd5ff9a24
This command was run using /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0.jar
# file lib//native/*
lib//native/libhadoop.a: current ar archive
lib//native/libhadooppipes.a: current ar archive
lib//native/libhadoop.so: symbolic link to `libhadoop.so.1.0.0'
lib//native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x972b31264a1ce87a12cfbcc331c8355e32d0e774, not stripped
lib//native/libhadooputils.a: current ar archive
lib//native/libhdfs.a: current ar archive
lib//native/libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
lib//native/libhdfs.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x200ccf97f44d838239db3347ad5ade435b472cfa, not stripped
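If your build includes the checknative command, it can also report whether the native libraries load correctly:
# ./bin/hadoop checknative -a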
5. Configure hadoop
5.1 Basic operations
# cp -r /usr/hadoop-2.5.0-src/hadoop-dist/target/hadoop-2.5.0 /opt/hadoop-2.5.0
# chown -R hadoop:hadoop /opt/hadoop-2.5.0
# vi /etc/profile
export HADOOP_HOME=/opt/hadoop-2.5.0
export PATH=$PATH:$HADOOP_HOME/bin
# su hadoop
$ cd /opt/hadoop-2.5.0
$ mkdir -p dfs/name
$ mkdir -p dfs/data
$ mkdir -p tmp
$ cd etc/hadoop
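The slaves need the same tree under /opt as well. One way to distribute it (assuming the hadoop user can write to /opt on each slave) is to copy the directory after finishing the configuration steps below:
$ scp -r /opt/hadoop-2.5.0 hadoop@slave1:/opt/
$ scp -r /opt/hadoop-2.5.0 hadoop@slave2:/opt/
$ scp -r /opt/hadoop-2.5.0 hadoop@slave3:/opt/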
5.2 Configure all slave nodes
$ vim slaves
slave1
slave2
slave3
5.3 Modify hadoop-env.sh and yarn-env.sh
$ vim hadoop-env.sh (and likewise vim yarn-env.sh)
export JAVA_HOME=/usr/java/jdk1.7.0_67
5.4 Modify core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/hadoop-2.5.0/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
5.5 Modify hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop-2.5.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
5.6 Modify mapred-site.xml
# cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
5.7 Configure yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
5.8 Format the namenode
$ ./bin/hdfs namenode -format
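A successful format prints a line like the following (exact wording may vary by version):
INFO common.Storage: Storage directory /opt/hadoop-2.5.0/dfs/name has been successfully formatted.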
5.9 Start hdfs and yarn
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
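Use jps on each node to confirm the daemons started: the master should show NameNode, SecondaryNameNode, and ResourceManager; each slave should show DataNode and NodeManager:
$ jps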
5.10 Check the startup status
http://192.168.1.7:8088 -- yarn resourcemanager web UI
http://192.168.1.7:50070 -- hdfs namenode web UI