ZooKeeper distributed cluster deployment and problems


ZooKeeper provides high-performance coordination services for distributed applications and is used in many common cluster deployments: most commonly under HBase, but also in Solr clusters and for automatic HA failover in Hadoop 2. This article walks through deploying a ZooKeeper cluster for an HBase cluster and explains the problems encountered during deployment.

By default, HBase manages starting and stopping ZooKeeper itself. To change this behavior, edit hbase-env.sh and change export HBASE_MANAGES_ZK=true to export HBASE_MANAGES_ZK=false, then start the ZooKeeper cluster before starting HBase. Copy zoo_sample.cfg to zoo.cfg in ${ZOOKEEPER_HOME}/conf and set dataDir to the directory where ZooKeeper should keep its data (the default is /tmp/zookeeper). Then add the cluster's servers in the format server.X=hostname:peer-port:election-port, where X is the value stored in that server's myid file. For a three-node ZooKeeper cluster, for example, the server entries might be (a fuller zoo.cfg sketch follows these lines):

server.1=centos-1:2888:3888
server.2=centos-2:2888:3888
server.3=centos-3:2888:3888
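
For context, here is a minimal zoo.cfg for such a three-node ensemble. The tickTime, initLimit, syncLimit, and clientPort values are the stock defaults from zoo_sample.cfg; /hdata/zookeeper is an assumed data directory:

    # base time unit in milliseconds
    tickTime=2000
    # ticks a follower may take to connect and sync to the leader
    initLimit=10
    # ticks a follower may fall behind before being dropped
    syncLimit=5
    # where snapshots, transaction logs, and the myid file live
    dataDir=/hdata/zookeeper
    # port that clients connect to
    clientPort=2181
    server.1=centos-1:2888:3888
    server.2=centos-2:2888:3888
    server.3=centos-3:2888:3888
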
For the cluster configuration, a myid file must be created under ${dataDir} on each node before the cluster starts. The file's content is the X in that node's server.X entry: in the example above, the myid on centos-1 must contain 1, and the myid files on centos-2 and centos-3 must contain 2 and 3 respectively. If a myid file is missing when the cluster starts, startup fails with the following error:
2015-07-03 15:37:40,877 [myid:] - ERROR [main:QuorumPeerMain@85] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /home/search/zookeeper-3.4.6/bin/../conf/zoo.cfg
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:123)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /hdata/zookeeper/myid file is missing
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:350)
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:119)
        ... 2 more
Invalid config, exiting abnormally
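
A minimal way to create these files, assuming dataDir=/hdata/zookeeper as in the error above, is to run the matching command on each host:

    # on centos-1 (create the data directory first if needed)
    mkdir -p /hdata/zookeeper && echo 1 > /hdata/zookeeper/myid
    # on centos-2
    mkdir -p /hdata/zookeeper && echo 2 > /hdata/zookeeper/myid
    # on centos-3
    mkdir -p /hdata/zookeeper && echo 3 > /hdata/zookeeper/myid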

Once every node's configuration is in place, note that ZooKeeper ships no script like start-hbase.sh or start-dfs.sh that starts all nodes at once, so ${ZOOKEEPER_HOME}/bin/zkServer.sh start must be executed on each server separately (a small ssh loop that automates this is sketched after the status output below). After startup completes, ${ZOOKEEPER_HOME}/bin/zkServer.sh status reports the node's state, for example:

JMX enabled by default
Using config: /application/search/zookeeper/bin/../conf/zoo.cfg
Mode: follower
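
As mentioned above, a small loop can start every node in one go; a sketch, assuming passwordless ssh and ZOOKEEPER_HOME set locally to the same path used on every host:

    for host in centos-1 centos-2 centos-3; do
        ssh "$host" "${ZOOKEEPER_HOME}/bin/zkServer.sh start"
    done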

After the ZooKeeper cluster starts successfully, set the hbase.zookeeper.quorum parameter in hbase-site.xml so that its value lists all ZooKeeper nodes, separated by commas, such as centos-1,centos-2,centos-3.
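
In hbase-site.xml that property looks like this (hbase.zookeeper.quorum is the standard HBase configuration key; the hostnames are the ones from this example):

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>centos-1,centos-2,centos-3</value>
    </property>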

For a single-node ZooKeeper, you do not need any server-related entries in zoo.cfg, that is, no server.X lines, and therefore no myid file either. There is also a known issue to watch for with single-node ZooKeeper; it is a bug in ZooKeeper itself, fixed in versions 3.4.7, 3.5.2, and 3.6.0 (see ZOOKEEPER-832). The exception looks like this:

2015-07-22 13:00:23,286 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@213] - Accepted socket connection from /10.10.32.223:15489
2015-07-22 13:00:23,286 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@811] - Refusing session request for client /10.10.32.223:15489 as it has seen zxid 0x210d711 our last zxid is 0x26ca client must try another server
2015-07-22 13:00:23,287 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1000] - Closed socket connection for client /10.10.32.223:15489 (no session established for client)

This problem occurs when the ZooKeeper ${dataDir} is pointed at a new directory or files in it are deleted: the server loses its transaction history, so a client that has already seen a newer zxid than the server's latest one is refused.
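
As noted above, a standalone node needs only the basic settings; a minimal single-node zoo.cfg sketch (the data directory is an assumption carried over from the earlier examples):

    tickTime=2000
    dataDir=/hdata/zookeeper
    clientPort=2181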

Log output was another issue encountered while deploying ZooKeeper. With no log-related changes, ZooKeeper writes everything to a single zookeeper.out file; after running for a long time the file grows very large and becomes inconvenient to inspect, so zkEnv.sh and log4j.properties need to be modified. In zkEnv.sh, make the following changes to specify the directory for saving logs as well as the log level and output destination:

if ["x${zoo_log_dir}" = "x"]
then
    #ZOO_LOG_DIR = "." #默认值为当前目录
    zoo_log_dir= "/hdata/log/zookeeper/"
Fi
if ["x${zoo_log4j_prop}" = "x"]
then
    #ZOO_LOG4J_PROP = "Info,console"
    zoo_log4j_prop= "INFO, Rollingfile "
fi

Then modify the log4j.properties file; the commented-out lines below show the original values, and the lines that follow them are the replacements:

#zookeeper.root.logger=INFO, CONSOLE
# log level and output destination
zookeeper.root.logger=INFO, ROLLINGFILE
#log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
# the appender used for the log file
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
# Max log file size of 10MB -- leave this disabled; DailyRollingFileAppender rolls by date, not size
#log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
#log4j.appender.ROLLINGFILE.MaxBackupIndex=10
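
DailyRollingFileAppender decides when to roll based on its DatePattern, which defaults to '.'yyyy-MM-dd, i.e. one file per day. To make the daily rollover explicit rather than relying on the default, a line like this can be added (illustrative):

    log4j.appender.ROLLINGFILE.DatePattern='.'yyyy-MM-dd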

After these modifications, ZooKeeper writes its log to ${ZOO_LOG_DIR}/zookeeper.log, starts a new log file every day, and renames the previous day's file to something like zookeeper.log.2015-07-22. Even after the changes above, an (empty) zookeeper.out file is still created; users who do not want that file can edit zkServer.sh and replace the first line below with the second, after which no zookeeper.out appears in the log directory.

Nohup "$JAVA" "-dzookeeper.log.dir=${zoo_log_dir}" "-dzookeeper.root.logger=${zoo_log4j_prop}" \
-CP "$ CLASSPATH "$JVMFLAGS $ZOOMAIN" $ZOOCFG ">" $_zoo_daemon_out "2>&1 </dev/null &

nohup $JAVA"-dzooke Eper.log.dir=${zoo_log_dir} ""-dzookeeper.root.logger=${zoo_log4j_prop} "\
    -cp" $CLASSPATH "$JVMFLAGS $ZOOMAIN" $ZOOCFG ">/dev/null 2>&1 </dev/null &

This article briefly described the deployment and installation of a ZooKeeper cluster, analyzed the problems encountered during installation, and finally explained how to modify ZooKeeper's default log configuration so that logs are written to a specified directory and file.
