Install Hadoop 2.7.1 on CentOS 7

Source: Internet
Author: User

Two CentOS 7 machines (named master-CentOS7 and slave-CentOS7), each with 2 GB of memory.
Note that CentOS 7 differs from CentOS 6 in several respects (e.g., systemd, firewalld, and network interface naming).

Network Configuration

Master-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.182
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust the network settings to match your actual environment.

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig

Slave-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.183
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust the network settings to match your actual environment.

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig
Set hosts and hostname

Master-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave

[root@localhost ~]# vi /etc/hostname

Change the content from localhost.localdomain to master.

Slave-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave

[root@localhost ~]# vi /etc/hostname

Change the content from localhost.localdomain to slave.
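On CentOS 7 the hostname can also be set without editing /etc/hostname by hand; hostnamectl is the systemd-native way and writes the same file. A small sketch using the hostnames above:

```shell
# Set a persistent hostname (writes /etc/hostname); run the matching
# command on each node:
hostnamectl set-hostname master   # on master-CentOS7
hostnamectl set-hostname slave    # on slave-CentOS7

# Verify the static hostname that will survive a reboot:
hostnamectl status
```

Either approach works; hostnamectl just saves a reboot-and-check cycle, since the change is visible immediately in a new shell.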
Disable selinux

Master-CentOS7

[root@master ~]# getenforce
Enforcing
[root@master ~]# vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled, save, and reboot.

[root@master ~]# getenforce
Disabled

Slave-CentOS7

[root@slave ~]# getenforce
Enforcing
[root@slave ~]# vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=disabled, save, and reboot.

[root@slave ~]# getenforce
Disabled
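Editing /etc/selinux/config only takes effect after a reboot; setenforce can relax SELinux immediately for the running session, which is handy if you want to continue the installation without rebooting right away. A sketch:

```shell
# Switch SELinux to permissive mode for the running system (no reboot).
# This does NOT persist across reboots; the /etc/selinux/config edit
# above handles persistence.
setenforce 0

# Confirm: on a previously Enforcing system this now reports "Permissive".
getenforce
```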
Disable firewalld

Master-CentOS7

[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]# systemctl stop firewalld
[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
[root@master ~]# yum install -y iptables-services
[root@master ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@master ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.

Slave-CentOS7

[root@slave ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@slave ~]# systemctl stop firewalld
[root@slave ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
[root@slave ~]# yum install -y iptables-services
[root@slave ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@slave ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
Key-based SSH Login

Master-CentOS7

[root@master ~]# ssh-keygen

Press Enter at every prompt to accept the defaults.

[root@master ~]# cat .ssh/id_rsa.pub

Copy the contents of ~/.ssh/id_rsa.pub.

Slave-CentOS7

[root@slave ~]# vi .ssh/authorized_keys

Paste the copied ~/.ssh/id_rsa.pub contents into ~/.ssh/authorized_keys. If vi reports the error "~/.ssh/authorized_keys" E212: Can't open file for writing, the .ssh directory does not exist yet:

[root@slave ~]# ls -ld .ssh
ls: cannot access .ssh: No such file or directory
[root@slave ~]# mkdir .ssh; chmod 700 .ssh
[root@slave ~]# ls -ld .ssh
drwx------ 2 root root 6 Aug 28 15:59 .ssh
[root@slave ~]# vi .ssh/authorized_keys

Paste the copied ~/.ssh/id_rsa.pub contents into ~/.ssh/authorized_keys.

[root@slave ~]# ls -l !$
ls -l .ssh/authorized_keys
-rw-r--r-- 1 root root 418 Aug 28 16:02 .ssh/authorized_keys

Master-CentOS7

[root@master ~]# vi .ssh/authorized_keys

Paste the copied ~/.ssh/id_rsa.pub contents into ~/.ssh/authorized_keys.
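Instead of pasting id_rsa.pub into authorized_keys by hand (and running into the E212 error when ~/.ssh is missing), ssh-copy-id performs the same steps: it creates the remote ~/.ssh directory with the right permissions and appends the key. A sketch, assuming the hostnames defined in /etc/hosts above:

```shell
# Run on master: appends master's public key to root@slave's
# authorized_keys, creating ~/.ssh (mode 700) on the slave if needed.
# You will be asked for the slave's root password once.
ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave

# Also install the key on master itself so "ssh master" works
# without a password (Hadoop's start scripts ssh to every node):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@master
```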

Test

Master-CentOS7

[root@master ~]# ssh master
[root@master ~]# exit
[root@master ~]# ssh slave
[root@slave ~]# exit
Install JDK

Hadoop 2.7 requires JDK 1.7; download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

First uninstall the OpenJDK bundled with CentOS 7, using slave-CentOS7 as an example (both master-CentOS7 and slave-CentOS7 must have the bundled JDK removed).

[root@slave ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
[root@slave ~]# rpm -qa | grep jdk
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# java -version
-bash: /usr/bin/java: No such file or directory

Master-CentOS7

[root@master ~]# wget 'http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz?AuthParam=1472372876_f3205a608139acb432d3c48638502428'
[root@master ~]# mv jdk-7u79-linux-x64.tar.gz\?AuthParam\=1472372876_f3205a608139acb432d3c48638502428 jdk-7u79-linux-x64.tar.gz
[root@master ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@master ~]# mv jdk1.7.0_79 /usr/local/
[root@master ~]# vi /etc/profile.d/java.sh

Add:

export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

[root@master ~]# source !$
source /etc/profile.d/java.sh
[root@master ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[root@master ~]# scp jdk-7u79-linux-x64.tar.gz slave:/root/
[root@master ~]# scp /etc/profile.d/java.sh slave:/etc/profile.d/

Slave-CentOS7

[root@slave ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@slave ~]# mv jdk1.7.0_79 /usr/local/
[root@slave ~]# source /etc/profile.d/java.sh
[root@slave ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
Install Hadoop

Master-CentOS7

[root@master ~]# wget 'http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz'
[root@master ~]# tar zxvf hadoop-2.7.1.tar.gz
[root@master ~]# mv hadoop-2.7.1 /usr/local/hadoop
[root@master ~]# ls !$
ls /usr/local/hadoop
bin etc include lib libexec LICENSE.txt NOTICE.txt README.txt sbin share
[root@master ~]# mkdir /usr/local/hadoop/tmp /usr/local/hadoop/dfs /usr/local/hadoop/dfs/data /usr/local/hadoop/dfs/name
[root@master ~]# ls /usr/local/hadoop
bin dfs etc include lib libexec LICENSE.txt NOTICE.txt README.txt sbin share tmp
[root@master ~]# rsync -av /usr/local/hadoop slave:/usr/local

Slave-CentOS7

[root@slave ~]# ls /usr/local/hadoop
bin dfs etc include lib libexec LICENSE.txt NOTICE.txt README.txt sbin share tmp
Configure Hadoop

Master-CentOS7

[root@master ~]# vi /usr/local/hadoop/etc/hadoop/core-site.xml

Add:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.182:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>

Note: 192.168.1.182 is the master-CentOS7 IP address.

[root@master ~]# vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

Add:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.1.182:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

Note: 192.168.1.182 is the master-CentOS7 IP address.

[root@master ~]# mv /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.1.182:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.1.182:19888</value>
    </property>
</configuration>

Note: 192.168.1.182 is the master-CentOS7 IP address.

[root@master ~]# vi /usr/local/hadoop/etc/hadoop/yarn-site.xml

Add:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.1.182:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.1.182:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.1.182:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.1.182:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.1.182:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
</configuration>

Note: 192.168.1.182 is the master-CentOS7 IP address.

[root@master ~]# cd /usr/local/hadoop/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh

Change to: export JAVA_HOME=/usr/local/jdk1.7.0_79

[root@master hadoop]# vi yarn-env.sh

Change to: export JAVA_HOME=/usr/local/jdk1.7.0_79

[root@master hadoop]# vi slaves

Change the contents to 192.168.1.183 (the slave-CentOS7 IP address).

[root@master hadoop]# rsync -av /usr/local/hadoop/etc/ slave:/usr/local/hadoop/etc/

Slave-CentOS7

[root@slave ~]# cd /usr/local/hadoop/etc/hadoop/
[root@slave hadoop]# cat slaves
192.168.1.183

Check that the slave IP is correct.
Start Hadoop

Master-CentOS7

[root@master hadoop]# /usr/local/hadoop/bin/hdfs namenode -format
[root@master hadoop]# echo $?
0
[root@master hadoop]# /usr/local/hadoop/sbin/start-all.sh
[root@master hadoop]# jps
19907 ResourceManager
19604 SecondaryNameNode
19268 NameNode
20323 Jps
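Beyond checking local JVM processes with jps, you can confirm that the slave actually registered with the master. dfsadmin -report lists the DataNodes known to the NameNode, and yarn node -list shows the NodeManagers known to the ResourceManager. A quick check, using the install paths from this guide:

```shell
# HDFS view: should report one live DataNode (192.168.1.183)
# with its configured capacity.
/usr/local/hadoop/bin/hdfs dfsadmin -report

# YARN view: should list one RUNNING NodeManager.
/usr/local/hadoop/bin/yarn node -list
```

If the DataNode is missing here even though jps shows it running on the slave, the usual suspects are the firewall rules and the /etc/hosts entries configured earlier.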

Slave-CentOS7

[root@slave hadoop]# jps
18113 NodeManager
18509 Jps
17849 DataNode

Open http://192.168.1.182:8088/ and http://192.168.1.182:50070/ in the browser.

Test Hadoop

Master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop/
[root@master hadoop]# bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 10
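The pi job exercises MapReduce but barely touches HDFS. The same examples jar also contains wordcount, which reads input from and writes output to HDFS, so it tests the whole pipeline. A sketch; the /input and /output paths are arbitrary choices for this illustration, and /output must not exist before the job runs:

```shell
cd /usr/local/hadoop

# Stage some sample input: the Hadoop config files themselves.
bin/hdfs dfs -mkdir /input
bin/hdfs dfs -put etc/hadoop/*.xml /input

# Run wordcount over /input, writing results to /output.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar \
    wordcount /input /output

# Inspect the word counts produced by the single reducer.
bin/hdfs dfs -cat /output/part-r-00000
```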
Stop Service

Master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop
[root@master hadoop]# sbin/stop-all.sh
  • If you see "copyFromLocal: Cannot create directory /123/. Name node is in safe mode", the NameNode is still in safe mode. Solution:

    [root@master hadoop]# cd /usr/local/hadoop
    [root@master hadoop]# bin/hdfs dfsadmin -safemode leave
