ResourceManager HA setup with ZooKeeper + Hadoop 2.6.0


The following is my working log; it has not yet been cleaned up into a polished format.


hqvm-l118 192.168.1.118 JDK, Hadoop; NameNode, DFSZKFailoverController (ZKFC)
hqvm-l138 192.168.1.138 JDK, Hadoop, ZooKeeper; NameNode, DFSZKFailoverController (ZKFC), DataNode, NodeManager, JournalNode, QuorumPeerMain
hqvm-l144 192.168.1.144 JDK, Hadoop, ZooKeeper; ResourceManager, DataNode, NodeManager, JournalNode, QuorumPeerMain
hqvm-l174 192.168.1.174 JDK, Hadoop, ZooKeeper; ResourceManager, DataNode, NodeManager, JournalNode, QuorumPeerMain


--View the current operating system
cat /proc/version
Linux version 2.6.32-431.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) ) #1 SMP Fri Nov 22 03:15:09 UTC 2013


--Create the hadoop user on each node
useradd hadoop
passwd hadoop
usermod -g appl hadoop
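As a quick sanity check (this assumes the appl group already exists on each node), confirm the account:
id hadoop
It should show the hadoop user with appl as its primary group.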


--Configure Java on each node
Switch to the hadoop user:
vi .bash_profile


JAVA_HOME="/opt/appl/wildfly/jdk1.7.0_72"
HADOOP_HOME="/home/hadoop/hadoop-2.4.1"
JRE_HOME=$JAVA_HOME/jre
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH:$HOME/bin
export JRE_HOME
export JAVA_HOME
export PATH


Log out and log back in, then run java -version to check whether the configuration took effect.
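To verify without logging out again, a quick check like the following should also work (assuming the .bash_profile above has been saved):
source ~/.bash_profile
echo $JAVA_HOME
java -version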
--Configure hostnames
As root on each node, add the following entries with vi /etc/hosts:
172.30.0.118 hqvm-l118
172.30.0.138 hqvm-l138
172.30.0.144 hqvm-l144
172.30.0.174 hqvm-l174
--sudo /etc/init.d/networking restart
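A simple way to confirm the entries resolve on each node (the hostnames are those from the table above):
getent hosts hqvm-l138
ping -c 1 hqvm-l144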


--Configure SSH
First, on the 172.30.0.118 machine:


cd
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
vi /etc/ssh/sshd_config
Uncomment the following lines:
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # path of the authorized keys file (the file populated above)
service sshd restart
Distribute 118's public key
On 172.30.0.118, send id_dsa.pub to 138:
scp id_dsa.pub hadoop@172.30.0.138:~/
On 138:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
After this, ssh 172.30.0.138 from 118 succeeds without a password.
Configure SSH logins between the remaining nodes the same way as required; I set up all four machines completely so that every pair can reach each other (a possible loop for this is sketched below).
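One way to repeat the key distribution for every pair is a small loop run on each node once its own key pair exists. This is only a sketch and assumes ssh-copy-id is available (it ships with openssh-clients on CentOS 6):
for host in hqvm-l118 hqvm-l138 hqvm-l144 hqvm-l174; do
  ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@$host
done
Each call prompts for the hadoop password once; afterwards a plain ssh to that host should not ask for one.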


--Copy hadoop-2.6.0.tar.gz and zookeeper-3.4.6.tar.gz to /home/hadoop




--Install ZooKeeper
On hqvm-l138:
tar -zxvf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6 zookeeper
cd zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Change dataDir=/home/hadoop/zookeeper/zkdata
Append at the end:
server.1=hqvm-l138:2888:3888
server.2=hqvm-l144:2888:3888
server.3=hqvm-l174:2888:3888
Save and exit
mkdir /home/hadoop/zookeeper/zkdata
touch /home/hadoop/zookeeper/zkdata/myid
echo 1 > /home/hadoop/zookeeper/zkdata/myid
scp -r /home/hadoop/zookeeper/ hqvm-l144:/home/hadoop/
scp -r /home/hadoop/zookeeper/ hqvm-l174:/home/hadoop/
On 144: echo 2 > /home/hadoop/zookeeper/zkdata/myid
On 174: echo 3 > /home/hadoop/zookeeper/zkdata/myid
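For reference, the resulting zoo.cfg should look roughly like this (tickTime, initLimit, syncLimit and clientPort are the zoo_sample.cfg defaults, left unchanged here):
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/zookeeper/zkdata
server.1=hqvm-l138:2888:3888
server.2=hqvm-l144:2888:3888
server.3=hqvm-l174:2888:3888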


--Install Hadoop
On 118:
tar -zxvf hadoop-2.6.0.tar.gz
vi .bash_profile
Add:
HADOOP_HOME=/home/hadoop/hadoop-2.6.0
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$PATH:$HOME/bin
export HADOOP_HOME


Modify the Hadoop configuration files:
cd hadoop-2.6.0/etc/hadoop/


vi hadoop-env.sh
export JAVA_HOME=/opt/appl/wildfly/jdk1.7.0_72


vi core-site.xml
Add:
<configuration>
<!-- Specify the HDFS nameservice as masters -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://masters</value>
</property>
<!-- Specify the Hadoop temp directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-2.6.0/tmp</value>
</property>
<!-- Specify the ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hqvm-L138:2181,hqvm-L144:2181,hqvm-L174:2181</value>
</property>
</configuration>


vi hdfs-site.xml
<configuration>
<!-- Specify the HDFS nameservice as masters; must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>masters</value>
</property>
<!-- The masters nameservice has two NameNodes: hqvm-L118 and hqvm-L138 -->
<property>
<name>dfs.ha.namenodes.masters</name>
<value>hqvm-L118,hqvm-L138</value>
</property>
<!-- RPC address of hqvm-L118 -->
<property>
<name>dfs.namenode.rpc-address.masters.hqvm-L118</name>
<value>hqvm-L118:9000</value>
</property>
<!-- HTTP address of hqvm-L118 -->
<property>
<name>dfs.namenode.http-address.masters.hqvm-L118</name>
<value>hqvm-L118:50070</value>
</property>
<!-- RPC address of hqvm-L138 -->
<property>
<name>dfs.namenode.rpc-address.masters.hqvm-L138</name>
<value>hqvm-L138:9000</value>
</property>
<!-- HTTP address of hqvm-L138 -->
<property>
<name>dfs.namenode.http-address.masters.hqvm-L138</name>
<value>hqvm-L138:50070</value>
</property>
<!-- Where the NameNode edit log is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hqvm-L138:8485;hqvm-L144:8485;hqvm-L174:8485/masters</value>
</property>
<!-- Where the JournalNode keeps its data on the local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/hadoop-2.6.0/journal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Configure the failover proxy provider -->
<property>
<name>dfs.client.failover.proxy.provider.masters</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Configure the fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH access -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- sshfence connection timeout -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
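An easy mistake in these files is an unclosed XML comment. If xmllint is installed (it comes with the libxml2 package), a quick well-formedness check is:
xmllint --noout core-site.xml hdfs-site.xml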








cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
<!-- Specify that the MapReduce framework runs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>


vi yarn-site.xml


<configuration>
<!-- Enable ResourceManager high availability -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Specify the cluster ID of the RM -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>RM_HA_ID</value>
</property>
<!-- Specify the logical names of the RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Specify the address of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hqvm-L144</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hqvm-L174</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>

<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Specify the ZooKeeper cluster address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hqvm-L138:2181,hqvm-L144:2181,hqvm-L174:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>




vi slaves
hqvm-l138
hqvm-l144
hqvm-l174




scp -r /home/hadoop/hadoop-2.6.0/ hqvm-l138:/home/hadoop/
scp -r /home/hadoop/hadoop-2.6.0/ hqvm-l144:/home/hadoop/
scp -r /home/hadoop/hadoop-2.6.0/ hqvm-l174:/home/hadoop/


--Start the ZooKeeper cluster on hqvm-l138, hqvm-l144 and hqvm-l174
cd /home/hadoop/zookeeper/bin
./zkServer.sh start
./zkServer.sh status    (check the state)
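To check all three nodes from one terminal, a loop like this works (just a sketch, relying on the passwordless SSH configured earlier):
for h in hqvm-l138 hqvm-l144 hqvm-l174; do
  ssh $h /home/hadoop/zookeeper/bin/zkServer.sh status
done
One node should report Mode: leader and the other two Mode: follower.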
--Start the JournalNodes on hqvm-l138, hqvm-l144 and hqvm-l174
cd /home/hadoop/hadoop-2.6.0/
sbin/hadoop-daemon.sh start journalnode
Run jps to check: hqvm-l138, hqvm-l144 and hqvm-l174 should each show an additional JournalNode process.
--Format HDFS


On 118:
hdfs namenode -format
scp -r /home/hadoop/hadoop-2.6.0/tmp/ hqvm-l138:/home/hadoop/hadoop-2.6.0/


--Format the ZKFC state in ZooKeeper, on 118:
hdfs zkfc -formatZK
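To confirm the formatting worked, the ZooKeeper CLI can be used (a quick check; /hadoop-ha is the default parent znode the ZKFC creates):
/home/hadoop/zookeeper/bin/zkCli.sh -server hqvm-l138:2181
ls /hadoop-ha
The listing should contain the masters nameservice.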
--Start HDFS, on 118:
sbin/start-dfs.sh
Use jps to check whether the daemons on each node have started.
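The NameNode roles can also be checked from the command line (hqvm-L118 and hqvm-L138 are the NameNode IDs from hdfs-site.xml):
hdfs haadmin -getServiceState hqvm-L118
hdfs haadmin -getServiceState hqvm-L138
One should report active and the other standby.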


--Start YARN
On hqvm-l144. The NameNode and ResourceManager are placed on separate machines for performance reasons: both consume a lot of resources, so they are kept apart.
/home/hadoop/hadoop-2.6.0/sbin/start-yarn.sh
On hqvm-l174, start the second ResourceManager: /home/hadoop/hadoop-2.6.0/sbin/yarn-daemon.sh start resourcemanager
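The ResourceManager HA state can be checked with yarn rmadmin (rm1 and rm2 are the IDs defined in yarn-site.xml):
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2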


The configuration is now complete; the web UIs can be opened in a browser:




Active NameNode
http://172.30.0.118:50070
Standby NameNode
http://172.30.0.138:50070


--Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
Then kill the active NameNode; use jps to find the PID, or ps -ef | grep hadoop (a sketch of this step follows below)
http://172.30.0.118:50070/ can no longer be reached, and http://172.30.0.138:50070/ becomes active
hadoop fs -ls / still works
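For the kill step above, something like this does the job (the PID is whatever jps reports for NameNode on the active node; 12345 below is only a placeholder):
jps | grep NameNode
kill -9 12345    # replace 12345 with the NameNode PID from jps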


Manually restart the NameNode on 118 that was killed:
/home/hadoop/hadoop-2.6.0/sbin/hadoop-daemon.sh start namenode
Now http://172.30.0.118:50070/ can be reached again, in standby state.






--Verify YARN:
Run the WordCount program from the examples shipped with Hadoop:
hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /profile /out
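Once the job finishes, the result can be inspected in HDFS (part-r-00000 is the default name of the reducer output file):
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000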


The Hadoop HA cluster setup is complete. ResourceManager web UIs:
http://172.30.0.144:8088
http://172.30.0.174:8088
