Starting Hadoop HA, HBase, ZooKeeper, and Spark

Source: Internet
Author: User

Note: My public key file is not under /root/hxsyl but under /home/hxsyl/.ssh. To locate it:

find / -name id_rsa


1. Start ZooKeeper
Run zkServer.sh start on each machine, or run ./zkServer.sh start in the $ZOOKEEPER_HOME/bin directory. You can then use the jps command to see the ZooKeeper process, QuorumPeerMain.
The ZooKeeper status can be viewed with the zkServer.sh status command. Normally only one machine is the leader; the others are followers.
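The leader/follower check can be scripted. Below is a minimal sketch; the `zk_role` helper is a name of my own, and the "Mode:" strings mirror the typical output of `zkServer.sh status`:

```shell
# Classify a node's role from its `zkServer.sh status` output.
# zk_role is a hypothetical helper; the "Mode:" patterns mirror
# the usual status output of zkServer.sh.
zk_role() {
  case "$1" in
    *"Mode: leader"*)   echo leader ;;
    *"Mode: follower"*) echo follower ;;
    *)                  echo unknown ;;
  esac
}

# On a live cluster you would feed it real output, e.g.:
#   zk_role "$(zkServer.sh status 2>&1)"
zk_role "Mode: leader"     # prints: leader
zk_role "Mode: follower"   # prints: follower
```

Running this against every node should report exactly one leader.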
2. On the master node, run:

hdfs zkfc -formatZK

Note: the final ZK must be capitalized, otherwise:

16/11/30 20:31:45 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: -formatzk

  



This initializes the HA state in ZooKeeper based on the value of ha.zookeeper.quorum in the $HADOOP_HOME/etc/hadoop/core-site.xml file.
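For reference, the relevant property looks like the fragment below; the host names are placeholders of my own, not taken from this cluster:

```xml
<!-- core-site.xml: ZooKeeper quorum read by `hdfs zkfc -formatZK`.
     Replace the placeholder host names with your own ZooKeeper nodes. -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```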


Make sure here whether automatic HA failover is enabled. At this point the DFSZKFailoverController process has not started; only after ZKFC is started (step 10 below) will one NameNode become active and the other standby.
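If you do want automatic failover instead of a manual transition, the switch lives in hdfs-site.xml. This is a reference fragment; the property name is the standard Hadoop one:

```xml
<!-- hdfs-site.xml: enable automatic failover so ZKFC elects the
     active NameNode instead of requiring a manual haadmin transition. -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```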


3. Start the JournalNodes
Run sbin/hadoop-daemon.sh start journalnode on each JournalNode machine, or on the master run:

sbin/hadoop-daemons.sh start journalnode

Note: with the second method, hadoop-daemons.sh does not start the master node's JournalNode; it must be started separately, and likewise stopped separately.
4. On [nn1], format and start the NameNode:

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

5. On [nn2], synchronize nn1's metadata:

bin/hdfs namenode -bootstrapStandby

6. Start [nn2]:

sbin/hadoop-daemon.sh start namenode

After the four steps above, nn1 and nn2 are both in standby state.

7. Switch [nn1] to active

Automatic failover is not configured correctly here, so the switch is forced manually (note that forcing it manually risks split-brain problems):

bin/hdfs haadmin -transitionToActive nn1

8. Start DataNode
9. Start yarn
sbin/start-yarn.sh

One more ResourceManager process appears on master1; NodeManager processes appear on slave1, slave2, and slave3.

10. Start ZKFC

sbin/hadoop-daemons.sh start zkfc

Note: as with starting the JournalNodes above, hadoop-daemons.sh does not start ZKFC on the master node; it needs to be started separately.

11. Start the JobHistory server

sbin/mr-jobhistory-daemon.sh start historyserver

This was originally configured on the standby node; I changed it to CentOSMaster.

12. Start HBase

bin/start-hbase.sh

Shutting down the Hadoop cluster: on [nn1], run sbin/stop-dfs.sh. Note that this does not stop YARN (started above), the history server, or the master node's JournalNode.
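The full shutdown order (the reverse of startup) can be sketched as follows. This is my own consolidation of the steps above, with echo standing in for actual execution on the cluster:

```shell
#!/bin/bash
# Hedged sketch: shutdown order, reversing the startup steps above.
# stop-dfs.sh does not stop YARN, the history server, or the master
# node's JournalNode, so those are listed explicitly.
STOP_STEPS=(
  "bin/stop-hbase.sh"                                # HBase first
  "sbin/stop-yarn.sh"                                # YARN
  "sbin/mr-jobhistory-daemon.sh stop historyserver"  # history server
  "sbin/stop-dfs.sh"                                 # HDFS daemons
  "sbin/hadoop-daemon.sh stop journalnode"           # master's JournalNode
  "bin/zkServer.sh stop"                             # ZooKeeper last
)
for step in "${STOP_STEPS[@]}"; do
  echo "would run: $step"   # replace echo with real execution per node
done
```

ZooKeeper goes down last because HDFS HA (ZKFC) depends on it while the NameNodes are still running.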


Enter Spark's sbin directory:

./start-all.sh
./start-history-server.sh

Then run bin/spark-shell to verify.

13. Shutdown
a. ZooKeeper

bin/zkServer.sh stop



