Configuring HA on Hadoop 2.7 with ZooKeeper and JournalNodes

Source: Internet
Author: User
Tags: failover

Premise of this article: migrating an existing non-HA cluster to HA.

ZooKeeper's role: maintains a shared lock that guarantees only one NameNode is active at a time.

JournalNodes' role: synchronize metadata (edit logs) between the two NameNodes.

Machine allocation:

nn1     NameNode, DFSZKFailoverController
nn2     NameNode, DFSZKFailoverController
slave1  DataNode, ZooKeeper, JournalNode
slave2  DataNode, ZooKeeper, JournalNode
slave3  DataNode, ZooKeeper, JournalNode


1. Configure core-site.xml: add the ZooKeeper quorum

<property>
  <name>ha.zookeeper.quorum</name>
  <value>slave1:2181,slave2:2181,slave3:2181</value>
</property>
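The hdfs-site.xml below names the HDFS nameservice "masters" and notes it must be consistent with core-site.xml. That consistency comes from fs.defaultFS, which must point at the logical nameservice rather than a single NameNode host. A minimal sketch (the property name is standard Hadoop 2.x; the value follows the nameservice chosen below):

```xml
<!-- Default filesystem: the logical HA nameservice, not one NameNode host -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://masters</value>
</property>
```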

2. Configure hdfs-site.xml

<!-- Specify the HDFS nameservice as "masters"; must be consistent with core-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>masters</value>
</property>
<!-- The "masters" nameservice has two NameNodes: nn1 and nn2 -->
<property>
  <name>dfs.ha.namenodes.masters</name>
  <value>nn1,nn2</value>
</property>
<!-- nn1 RPC communication address -->
<property>
  <name>dfs.namenode.rpc-address.masters.nn1</name>
  <value>nn1:9000</value>
</property>
<!-- nn1 HTTP communication address -->
<property>
  <name>dfs.namenode.http-address.masters.nn1</name>
  <value>nn1:50070</value>
</property>
<!-- nn2 RPC communication address -->
<property>
  <name>dfs.namenode.rpc-address.masters.nn2</name>
  <value>nn2:9000</value>
</property>
<!-- nn2 HTTP communication address -->
<property>
  <name>dfs.namenode.http-address.masters.nn2</name>
  <value>nn2:50070</value>
</property>
<!-- Where the NameNode metadata is stored on the JournalNodes -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://slave1:8485;slave2:8485;slave3:8485/masters</value>
</property>
<!-- Local disk location where each JournalNode keeps its data -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/journal</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- Failover proxy provider: how clients locate the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.masters</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple mechanisms are separated by newlines, one per line -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<!-- The sshfence mechanism requires passwordless SSH between the NameNodes -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence mechanism (milliseconds) -->
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>

3. Modify yarn-site.xml to enable ResourceManager HA

<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>RM_HA_ID</value>
  </property>
  <!-- Logical names of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Addresses of the RMs -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>nn1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>nn2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- ZooKeeper cluster address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>slave1:2181,slave2:2181,slave3:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

4. Startup steps

(1) Start ZooKeeper (on slave1, slave2, slave3)

(2) Start the JournalNodes: hadoop-daemon.sh start journalnode (on slave1, slave2, slave3)

(3) Format HDFS: hdfs namenode -format; then copy the formatted metadata to the corresponding directory on nn2
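The "copy the metadata" part can be done with scp, or with Hadoop's built-in bootstrap command. A sketch, assuming the NameNode directory is /home/hadoop/dfs/name (an illustrative path; substitute your dfs.namenode.name.dir):

```shell
# On nn1: format the new nameservice (once only, on one NameNode)
hdfs namenode -format

# Option A: copy the formatted metadata directory to nn2
scp -r /home/hadoop/dfs/name hadoop@nn2:/home/hadoop/dfs/

# Option B (standard alternative): have nn2 pull it from nn1
# (run on nn2, after nn1's NameNode has been started)
hdfs namenode -bootstrapStandby
```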

(4) Format the failover state in ZooKeeper: hdfs zkfc -formatZK

(5) Start HDFS: sbin/start-dfs.sh

(6) Start YARN: sbin/start-yarn.sh; check whether the standby ResourceManager is up, and if not, start it with yarn-daemon.sh start resourcemanager
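The six steps above, as one command sequence (a sketch of the order only; zkServer.sh lives in your ZooKeeper install's bin/, the Hadoop scripts in $HADOOP_HOME/sbin):

```shell
# (1) On slave1, slave2, slave3: start ZooKeeper
zkServer.sh start

# (2) On slave1, slave2, slave3: start the JournalNodes
hadoop-daemon.sh start journalnode

# (3)(4) On nn1: format HDFS and the ZK failover state (first time only)
hdfs namenode -format
hdfs zkfc -formatZK

# (5)(6) On nn1: start HDFS and YARN
sbin/start-dfs.sh
sbin/start-yarn.sh

# On nn2: start the standby ResourceManager if start-yarn.sh did not
yarn-daemon.sh start resourcemanager
```

Order matters: the JournalNodes must be running before the NameNode format, because the format writes the shared edits directory to them.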

Validating HDFS HA:

Run kill -9 <pid of nn> on nn1; nn2 becomes active.

Then restart nn1: sbin/hadoop-daemon.sh start namenode
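The active/standby state can also be checked from the command line instead of the web UI; these haadmin subcommands exist in Hadoop 2.x:

```shell
# Before the kill: nn1 should report "active", nn2 "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Kill the active NameNode on nn1, then re-check: nn2 should report "active"
kill -9 <pid of nn>
hdfs haadmin -getServiceState nn2

# Restart nn1; it should rejoin as standby
sbin/hadoop-daemon.sh start namenode
hdfs haadmin -getServiceState nn1
```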


Note:

Manually switch the NameNode: ./hdfs haadmin -transitionToActive --forcemanual nn1

Manually switch the RM: yarn rmadmin -transitionToActive --forcemanual rm1
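Note that when automatic failover is enabled, haadmin refuses a plain -transitionToActive, which is why the --forcemanual flag appears above. The corresponding state queries are useful before and after a manual switch:

```shell
hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
yarn rmadmin -getServiceState rm1
```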


Problem:

At the moment RM HA does not work on this cluster; debugging it is deferred for now.



Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.
