1. Build a test cluster of 4 machines and configure the /etc/hosts file on each machine in the cluster:
[root@nn .ssh]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.28.3.40 nn  nn.hadoop.plat
172.28.3.41 dn1 dn0.hadoop.plat
172.28.3.42 dn2 dn1.hadoop.plat
172.28.3.43 dn3 dn2.hadoop.plat
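The hosts file above can be sanity-checked with a short script before moving on. This is a sketch: the `check` helper is hypothetical, and the hostname/IP pairs simply mirror the /etc/hosts shown above, so adjust them for your own cluster.

```shell
# Sketch: verify that each cluster hostname maps to the expected IP in a
# hosts-style file. HOSTS_FILE and the nn/dn* entries mirror the /etc/hosts
# above; adjust them for your cluster.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}

check() {  # check <hostname> <expected_ip>
    # scan non-comment lines; print the IP of the first line listing the name
    ip=$(awk -v h="$1" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$HOSTS_FILE")
    if [ "$ip" = "$2" ]; then
        echo "OK  $1 -> $ip"
    else
        echo "BAD $1 -> ${ip:-missing} (expected $2)"
    fi
}

check nn  172.28.3.40
check dn1 172.28.3.41
check dn2 172.28.3.42
check dn3 172.28.3.43
```

Running it on every node catches the classic copy-paste drift where one machine's hosts file is stale.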
2. Configure passwordless SSH login from the NameNode to the DataNodes.
Execute on NN:
ssh-keygen -t rsa
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
Then, for each datanode in the cluster, execute:
ssh-copy-id root@dn1
ssh-copy-id root@dn2
ssh-copy-id root@dn3
This ensures that the NN node can log in to DN1, DN2, and DN3 without a password.
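Once the keys are distributed, the passwordless logins can be verified non-interactively. A sketch: `check_ssh` is a hypothetical helper, and `BatchMode=yes` makes ssh fail instead of prompting for a password.

```shell
# Sketch: verify passwordless SSH from NN to each datanode. BatchMode=yes makes
# ssh exit with an error instead of prompting for a password.
check_ssh() {  # check_ssh <host>...
    for h in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$h" true 2>/dev/null; then
            echo "$h: passwordless ssh OK"
        else
            echo "$h: passwordless ssh FAILED"
        fi
    done
}
check_ssh dn1 dn2 dn3
```

Any FAILED line means ssh-copy-id needs to be rerun for that host before the Ambari wizard is started.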
[root@nn .ssh]# ifconfig
eth0      Link encap:Ethernet  HWaddr xx:1a:4a:c6:6b:a0
          inet addr:172.28.3.40  Bcast:172.28.7.255  Mask:255.255.248.0
          inet6 addr: fe80::21a:4aff:fec6:6ba0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1064845 errors:0 dropped:0 overruns:0 frame:0
          TX packets:557212 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1578655986 (1.4 GiB)  TX bytes:647178854 (617.1 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:103276 errors:0 dropped:0 overruns:0 frame:0
          TX packets:103276 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:58108687 (55.4 MiB)  TX bytes:58108687 (55.4 MiB)

[root@nn .ssh]# ssh dn1
SIOCADDRT: File exists
3. Turn off iptables:
chkconfig iptables off
/etc/init.d/iptables stop
4. Turn off SELinux
To view the SELinux status:
[root@nn ~]# /usr/sbin/sestatus -v
/usr/sbin/setenforce 0  # switch SELinux to permissive mode
/usr/sbin/setenforce 1  # switch SELinux to enforcing mode
This allows SELinux enforcement to be toggled at runtime without a reboot. The three modes are:
Enforcing - the SELinux security policy is enforced.
Permissive - the SELinux system prints warnings but does not enforce the policy.
Disabled - SELinux is fully disabled; the SELinux hooks are disengaged from the kernel and the pseudo-filesystem is unregistered.
To permanently disable SELinux, edit /etc/selinux/config and change the SELINUX line to SELINUX=disabled:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Only targeted network daemons are protected.
#     strict - Full SELinux protection.
SELINUXTYPE=targeted
After you restart the system, sestatus will show that SELinux is disabled.
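The edit to /etc/selinux/config can also be made non-interactively on every node. A sketch: `disable_selinux_cfg` is a hypothetical helper, and on a real node you would point it at /etc/selinux/config.

```shell
# Sketch: switch SELINUX= to disabled in an SELinux config file, then echo the
# resulting line back for confirmation.
disable_selinux_cfg() {  # disable_selinux_cfg <config-file>
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1" && grep '^SELINUX=' "$1"
}
# On a real node: disable_selinux_cfg /etc/selinux/config
```

The `^SELINUX=` anchor deliberately leaves the SELINUXTYPE= line untouched.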
5. Disable the Linux kernel transparent huge pages:
Add the following lines to /etc/rc.local and reboot the server:
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
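Whether the setting took effect can be read back from the sysfs file itself, where the kernel shows the active value in brackets. A sketch, with `thp_state` as a hypothetical helper:

```shell
# Sketch: report the active transparent-hugepage mode; the kernel shows the
# current value in brackets, e.g. "always madvise [never]".
thp_state() {  # thp_state <enabled-or-defrag-file>
    grep -o '\[[a-z]*\]' "$1"
}
# On RHEL/CentOS 6:
# thp_state /sys/kernel/mm/redhat_transparent_hugepage/enabled
```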
6. Install Java and configure JAVA_HOME
mkdir /usr/java
# place the JDK under /usr/java/jdk1.7.0_75, then:
ln -s /usr/java/jdk1.7.0_75 /usr/java/default
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
Edit /etc/profile, then run source /etc/profile to make the configuration take effect. Ensure that the Java version on each machine is consistent and that the JAVA_HOME environment variable is valid.
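A quick consistency check for the Java setup on each machine; `java_home_ok` is a hypothetical helper that only inspects the environment.

```shell
# Sketch: confirm JAVA_HOME is set and actually contains a java executable.
java_home_ok() {
    [ -n "$JAVA_HOME" ] || { echo "JAVA_HOME not set"; return 1; }
    [ -x "$JAVA_HOME/bin/java" ] || { echo "no java under $JAVA_HOME"; return 1; }
    echo "JAVA_HOME=$JAVA_HOME"
}
java_home_ok || true
```

Running it after `source /etc/profile` on every node confirms the symlink and the exported variable agree.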
7. Install NTPD on each machine
rpm -qa | grep ntp
yum install ntp
chkconfig ntpd on
service ntpd start
Make sure that the NTPD service for each machine is in the running state:
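The sync state can be read from ntpq's peer listing, where the selected peer is marked with a leading '*'. A sketch: `ntp_synced` is a hypothetical helper that parses the listing from stdin, so it can also be tried offline.

```shell
# Sketch: report whether an `ntpq -pn` peer listing (on stdin) contains a
# selected peer, which ntpq marks with a leading '*'.
ntp_synced() {
    if grep -q '^\*'; then echo synced; else echo "not synced"; fi
}
# On a live node: ntpq -pn | ntp_synced
```

Note that ntpd can take a few minutes after startup before it selects a peer.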
8. Make sure openssh-server is installed on each machine and upgrade OpenSSL to the latest version:
rpm -qa | grep ssh
yum install openssh-server
service sshd restart
chkconfig sshd on
Make sure that OpenSSL is up to date:
yum install openssl-devel-1.0.1e*.el6.x86_64
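To confirm which OpenSSL each node ended up with, print the version on every machine and compare. A minimal sketch, with `openssl_version` as a hypothetical helper:

```shell
# Sketch: print the installed OpenSSL version (or a note if it is absent) so
# the nodes can be compared by eye.
openssl_version() {
    command -v openssl >/dev/null 2>&1 || { echo "openssl not installed"; return 1; }
    openssl version
}
openssl_version || true
```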
9. Make sure a working yum source is available. This installation uses the NetEase (163) mirror: first delete all files under /etc/yum.repos.d/, then create a new file CentOS6-Base-163.repo with the following content:
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
[base]
name=CentOS-$releasever - Base - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#released updates
[updates]
name=CentOS-$releasever - Updates - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

#contrib - packages by CentOS users
[contrib]
name=CentOS-$releasever - Contrib - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/contrib/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
Configure the Ambari yum source, which is deployed on an Apache server in the local LAN:
[root@nn yum.repos.d]# cat ambari.repo
[Updates-ambari-2.0.1]
name=ambari-2.0.1 - Updates
baseurl=http://172.28.4.159/ambari-test/centos6
gpgcheck=1
gpgkey=http://172.28.4.159/ambari-test/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
Once the yum sources are configured properly, execute the following commands:
yum clean all
yum repolist
10. On the NN machine, install ambari-server by executing the following command. The --nogpgcheck option is added because this installation uses the company's modified Ambari; if you are installing the original Ambari, do not add this option:
yum install --nogpgcheck ambari-server
11. Configure and start ambari-server; the setup -j option configures the Java environment used by ambari-server:
ambari-server setup -j /usr/java/default
ambari-server start
12. In a browser, open nn:8080 to reach the Ambari login page; the username and password are both admin:
13. Configure the base URLs of the HDP repository for redhat6; this installation uses the HDP sources on the local LAN:
http://172.28.4.159/HDP2.2.6/HDP/centos6/2.x/updates/2.2.6.0/
http://172.28.4.159/HDP2.2.6/HDP-UTILS-1.1.0.20/repos/centos6/
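Before launching the cluster-install wizard it is worth confirming that both base URLs answer. A sketch using curl: `url_ok` is a hypothetical helper, and the URLs are the LAN mirrors listed above.

```shell
# Sketch: probe each HDP base URL; curl -sf exits non-zero on connection or
# HTTP errors, and the timeouts keep an unreachable mirror from hanging.
url_ok() {  # url_ok <url>
    if curl -sf --connect-timeout 5 -m 10 -o /dev/null "$1"; then
        echo "OK  $1"
    else
        echo "BAD $1"
    fi
}
url_ok http://172.28.4.159/HDP2.2.6/HDP/centos6/2.x/updates/2.2.6.0/
url_ok http://172.28.4.159/HDP2.2.6/HDP-UTILS-1.1.0.20/repos/centos6/
```

A BAD line here will otherwise surface later as an opaque repository error inside the Ambari wizard.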
14. On the NN node, upload the /root/.ssh/id_rsa file to Ambari and configure the target hosts:
15. If a warning message is reported, execute the following command on each machine to eliminate the warning:
[root@nn yum.repos.d]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users
INFO:HostCleanup:Killing pids: ['']
INFO:HostCleanup:Deleting packages: ['']
INFO:HostCleanup:Deleting directories: ['']
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:Deleting additional directories: ['']
INFO:HostCleanup:Deleting repo files: []
INFO:HostCleanup:Erasing alternatives: {'symlink_list': [''], 'target_list': ['']}
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:Clean-up completed. The output is at /var/lib/ambari-agent/data/hostcleanup.result
16. Assign slaves and clients:
Set the username and password for the Hive and Oozie databases:
View summary information:
17. Proceed with the installation; the installation eventually completes successfully:
CentOS 6.5 + Ambari + HDP cluster installation