HDFS HA Series Experiment Three: HA + NFS + ZooKeeper

Source: Internet
Author: User
Due to time constraints, the original plan to implement HA + NFS + ZooKeeper on Hadoop cluster2 was changed to implementing it on Hadoop cluster1, which lets us reuse the passwordless SSH configuration and the basic Hadoop cluster configuration. The environment for this article builds on HDFS HA Series Experiment Two: HA + JournalNode + ZooKeeper.
1: How it works
a: Of nn1 and nn2 (or more NameNode nodes), only one is in the active state. This is achieved by the ZKFailoverController component (a ZooKeeper client) working with the ZooKeeper cluster, which monitors all NameNodes and elects the active one.
b: The active NameNode writes its edit log to the NFS shared directory /mnt/cluster1; the standby NameNode reads the edit log from that shared directory and replays it locally, keeping its metadata synchronized with the active NameNode.
c: Without ZooKeeper, you can switch the active/standby NameNodes manually. For automatic failover with ZooKeeper, you must also provide a fencing method, i.e. configure the dfs.ha.fencing.methods parameter.
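The fencing requirement in point c can be sketched as an hdfs-site.xml fragment. This is a minimal example using Hadoop's standard sshfence method, not the exact configuration from this series; the private-key path is an assumption and should point at the hadoop user's actual key:

```xml
<!-- Sketch only: fencing configuration required for automatic failover.
     sshfence logs into the previous active NameNode and kills its process. -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <!-- Assumed key location; adjust to your environment. -->
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
```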
2: NFS client configuration
Start the NFS client on product201 and product202 and set it to load automatically at boot:
[root@product201 /]# chkconfig rpcbind on
[root@product201 /]# chkconfig nfslock on
[root@product201 /]# service rpcbind restart
[root@product201 /]# service nfslock restart
[root@product201 /]# mkdir -p /mnt/cluster1
[root@product201 /]# mkdir -p /mnt/cluster2
[root@product201 /]# chown -R hadoop:hadoop /mnt/cluster1
[root@product201 /]# chown -R hadoop:hadoop /mnt/cluster2
[root@product201 /]# mount -t nfs productserver:/share/cluster1 /mnt/cluster1
[root@product201 /]# mount -t nfs productserver:/share/cluster2 /mnt/cluster2
[root@product201 /]# echo "mount -t nfs productserver:/share/cluster1 /mnt/cluster1" >> /etc/rc.d/rc.local
[root@product201 /]# echo "mount -t nfs productserver:/share/cluster2 /mnt/cluster2" >> /etc/rc.d/rc.local
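Before pointing dfs.namenode.shared.edits.dir at the mount, it is worth confirming the shared directory is actually mounted and writable by the hadoop user. A small sketch follows; the helper name and probe-file scheme are ours, not part of Hadoop or NFS:

```shell
# check_shared_dir: hypothetical helper that verifies a directory exists
# and is writable, by creating and then removing a small probe file.
check_shared_dir() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "MISSING: $dir"
        return 1
    fi
    probe="$dir/.nfs_probe.$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "OK: $dir"
    else
        echo "NOT WRITABLE: $dir"
        return 1
    fi
}

# On the real cluster, run this against the NFS mount points:
check_shared_dir /mnt/cluster1 || true
check_shared_dir /mnt/cluster2 || true
```

If either check fails, fix the mount (or the chown from the step above) before continuing to the Hadoop configuration.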
3: Hadoop configuration
Stop all Hadoop-related processes before configuring; run jps on every Hadoop node to confirm nothing is still running.
a: Rebuild the data directory and log directory
[hadoop@product201 hadoop220]$ rm -rf mydata logs
[hadoop@product201 hadoop220]$ mkdir mydata logs
b: Modify the configuration and distribute it to each node
[hadoop@product201 hadoop220]$ cd etc/hadoop
[hadoop@product201 hadoop]$ vi hdfs-site.xml
[hadoop@product201 hadoop]$ cat hdfs-site.xml

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/cluster1</value>
  <description>NFS directory shared by the NameNodes.</description>
</property>
[hadoop@product201 hadoop]$ scp hdfs-site.xml product202:/app/hadoop/hadoop220/etc/hadoop/.
[hadoop@product201 hadoop]$ scp hdfs-site.xml product203:/app/hadoop/hadoop220/etc/hadoop/.
[hadoop@product201 hadoop]$ scp hdfs-site.xml product204:/app/hadoop/hadoop220/etc/hadoop/.
[hadoop@product201 hadoop]$ cd ../..
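The three scp commands above can be collapsed into a loop. The sketch below only prints the commands rather than running them (drop the leading echo to execute); the hostnames and paths are the ones used in this series:

```shell
# Dry-run sketch: list the copy commands for distributing hdfs-site.xml
# to the other nodes. Remove "echo" to actually run scp against the cluster.
for node in product202 product203 product204; do
    echo scp hdfs-site.xml "$node:/app/hadoop/hadoop220/etc/hadoop/"
done
```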
4: Running Hadoop
For the Hadoop HA startup flow, see the earlier experiments in this HDFS HA series.
a: Start ZooKeeper
[hadoop@product202 hadoop220]$ /app/hadoop/zookeeper345/bin/zkServer.sh start
[hadoop@product203 hadoop220]$ /app/hadoop/zookeeper345/bin/zkServer.sh start
[hadoop@product204 hadoop220]$ /app/hadoop/zookeeper345/bin/zkServer.sh start
b: Format the NameNode and register the ZooKeeper lock
[hadoop@product201 hadoop220]$ bin/hdfs namenode -format
[hadoop@product201 hadoop220]$ bin/hdfs zkfc -formatZK
c: Start nn1
[hadoop@product201 hadoop220]$ sbin/hadoop-daemon.sh start zkfc
[hadoop@product201 hadoop220]$ sbin/hadoop-daemon.sh start namenode
d: Start nn2 and synchronize nn1's metadata onto nn2
[hadoop@product202 hadoop220]$ sbin/hadoop-daemon.sh start zkfc
[hadoop@product202 hadoop220]$ bin/hdfs namenode -bootstrapStandby
[hadoop@product202 hadoop220]$ sbin/hadoop-daemon.sh start namenode
e: Start the DataNodes
[hadoop@product201 hadoop220]$ sbin/hadoop-daemons.sh start datanode
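After step e, each NameNode's state can be queried with bin/hdfs haadmin -getServiceState nn1 (and nn2); exactly one should report active. A small sketch of that sanity check follows; the count_active helper is ours, and nn1/nn2 are the service IDs assumed by this series:

```shell
# count_active: hypothetical helper that counts how many of the given
# NameNode states are "active". On a healthy HA pair the answer is 1.
count_active() {
    active=0
    for state in "$@"; do
        if [ "$state" = "active" ]; then
            active=$((active + 1))
        fi
    done
    echo "$active"
}

# On a live cluster the states would come from haadmin, e.g.:
#   s1=$(bin/hdfs haadmin -getServiceState nn1)
#   s2=$(bin/hdfs haadmin -getServiceState nn2)
count_active active standby   # prints 1
```

A result of 0 means no NameNode took the active role (check the zkfc logs); 2 means a split-brain, which the fencing method from section 1c is meant to prevent.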

TIPS: The related Hadoop configuration files (HA + NFS + ZooKeeper) are available for download.
