Zookeeper cluster installation, configuration, high-availability testing


Dubbo registry cluster: ZooKeeper 3.4.6

Dubbo recommends using ZooKeeper as the registry for services.

As long as more than half of the nodes in a ZooKeeper cluster are healthy, the cluster as a whole remains available to the outside. Because of this majority rule, an odd number of nodes (2n+1: 3, 5, 7, ...) is the appropriate cluster size.
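The majority rule above can be checked with a quick sketch (the ensemble sizes here are just examples): an ensemble of n nodes tolerates floor((n-1)/2) failures, which is why 3 and 4 nodes tolerate the same single failure and odd sizes are preferred.

```shell
# An ensemble of n nodes stays available while a majority is healthy,
# so it tolerates (n - 1) / 2 (integer division) failed nodes.
for n in 3 4 5 7; do
  echo "ensemble of $n node(s) tolerates $(( (n - 1) / 2 )) failure(s)"
done
```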

ZooKeeper architecture diagram with Dubbo service cluster

Server 1: 192.168.1.81, ports: 2181, 2881, 3881

Server 2: 192.168.1.82, ports: 2182, 2882, 3882

Server 3: 192.168.1.83, ports: 2183, 2883, 3883

1. Modify the operating system's /etc/hosts file on each server, adding the IP-to-hostname mappings:

# Zookeeper Cluster servers

192.168.1.81 edu-zk-01

192.168.1.82 edu-zk-02

192.168.1.83 edu-zk-03

2. Download or upload zookeeper-3.4.6.tar.gz to the /home/wusc/zookeeper directory:

$ cd /home/wusc/zookeeper

$ wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

3. Unzip the ZooKeeper installation package and rename the resulting directory by node number:

$ tar -zxvf zookeeper-3.4.6.tar.gz

Server 1:

$ mv zookeeper-3.4.6 node-01

Server 2:

$ mv zookeeper-3.4.6 node-02

Server 3:

$ mv zookeeper-3.4.6 node-03

4. Create the following directories under each ZooKeeper node directory:

$ cd /home/wusc/zookeeper/node-0X (X is the node number: 1, 2, or 3; the same applies below)

$ mkdir data

$ mkdir logs

5. Copy the zoo_sample.cfg file in the zookeeper/node-0X/conf directory, naming the copy zoo.cfg:

$ cp zoo_sample.cfg zoo.cfg

6. Modify the zoo.cfg configuration file:

The zookeeper/node-01 configuration (/home/wusc/zookeeper/node-01/conf/zoo.cfg) is as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/wusc/zookeeper/node-01/data
dataLogDir=/home/wusc/zookeeper/node-01/logs
clientPort=2181
server.1=edu-zk-01:2881:3881
server.2=edu-zk-02:2882:3882
server.3=edu-zk-03:2883:3883

The zookeeper/node-02 configuration (/home/wusc/zookeeper/node-02/conf/zoo.cfg) is as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/wusc/zookeeper/node-02/data
dataLogDir=/home/wusc/zookeeper/node-02/logs
clientPort=2182
server.1=edu-zk-01:2881:3881
server.2=edu-zk-02:2882:3882
server.3=edu-zk-03:2883:3883

The zookeeper/node-03 configuration (/home/wusc/zookeeper/node-03/conf/zoo.cfg) is as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/wusc/zookeeper/node-03/data
dataLogDir=/home/wusc/zookeeper/node-03/logs
clientPort=2183
server.1=edu-zk-01:2881:3881
server.2=edu-zk-02:2882:3882
server.3=edu-zk-03:2883:3883

Parameter description:

tickTime=2000

tickTime is the basic time unit, in milliseconds, used for heartbeats between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.

initLimit=10

initLimit sets how long a follower may take to connect to and sync with the leader when it starts up (the "client" here is a follower server within the ensemble connecting to the leader, not an application client). It is expressed as a number of tickTime intervals: if the leader has not received the follower's response after 10 heartbeats (10 tickTimes), the connection fails. The total timeout is 10 * 2000 ms = 20 seconds.

syncLimit=5

syncLimit bounds the time allowed for a request/response exchange between the leader and a follower, again as a number of tickTime intervals; here the total allowed time is 5 * 2000 ms = 10 seconds.
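The two timeouts follow directly from the values above; a quick sketch of the arithmetic:

```shell
# Timeouts derived from the zoo.cfg values in this section.
tickTime=2000   # milliseconds
initLimit=10    # in units of tickTime
syncLimit=5     # in units of tickTime
echo "follower init timeout: $(( tickTime * initLimit / 1000 )) seconds"
echo "leader/follower sync timeout: $(( tickTime * syncLimit / 1000 )) seconds"
```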

dataDir=/home/wusc/zookeeper/node-01/data

dataDir is, as the name implies, the directory where ZooKeeper stores its snapshot data; by default the transaction log is also written here, unless dataLogDir points it elsewhere as in this setup.

clientPort=2181

clientPort is the port that application clients use to connect to the ZooKeeper server; ZooKeeper listens on this port for client requests.

server.A=B:C:D

server.1=edu-zk-01:2881:3881

server.2=edu-zk-02:2882:3882

server.3=edu-zk-03:2883:3883

A is a number identifying the server; it must match the value in that server's myid file.

B is the server's IP address (or a hostname mapped to that IP).

C is the port the server uses to exchange information with the cluster leader.

D is the port used for leader election when the current leader fails.

Note: If you are configuring a pseudo-cluster (multiple instances on one machine), the ZooKeeper instances cannot share communication ports, so assign each instance a different pair of port numbers.
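Since the three zoo.cfg files differ only in the node number, clientPort, and directory paths, they can be generated in one loop. A sketch, using a temporary directory as a stand-in for /home/wusc/zookeeper (adjust BASE to the real path on each server):

```shell
# Generate zoo.cfg for nodes 1-3; BASE stands in for /home/wusc/zookeeper.
BASE=$(mktemp -d)
for i in 1 2 3; do
  mkdir -p "$BASE/node-0$i/conf" "$BASE/node-0$i/data" "$BASE/node-0$i/logs"
  cat > "$BASE/node-0$i/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/node-0$i/data
dataLogDir=$BASE/node-0$i/logs
clientPort=218$i
server.1=edu-zk-01:2881:3881
server.2=edu-zk-02:2882:3882
server.3=edu-zk-03:2883:3883
EOF
done
```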

7. Create the myid file under dataDir (/home/wusc/zookeeper/node-0X/data).

Edit the myid file on each machine and enter that node's number: on node-01 the myid file contains 1, on node-02 it contains 2, and on node-03 it contains 3:

$ vi /home/wusc/zookeeper/node-01/data/myid ## value is 1

$ vi /home/wusc/zookeeper/node-02/data/myid ## value is 2

$ vi /home/wusc/zookeeper/node-03/data/myid ## value is 3
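Instead of editing with vi, each myid can be written with a single echo. A sketch using a temporary directory as a stand-in for the node's data directory (on the real servers, redirect into /home/wusc/zookeeper/node-0X/data/myid with the matching number):

```shell
# Write the node id; DATA stands in for /home/wusc/zookeeper/node-01/data.
DATA=$(mktemp -d)
echo 1 > "$DATA/myid"   # use 2 on node-02 and 3 on node-03
cat "$DATA/myid"
```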

8. Open ports 218X, 288X, and 388X in the firewall.

Switch to the root user and execute the following commands:

# chkconfig iptables on

# service iptables start

Edit /etc/sysconfig/iptables:

# vi /etc/sysconfig/iptables

For example, add the following 3 lines on server 01:

## zookeeper

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 2881 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3881 -j ACCEPT

Restart the firewall:

# service iptables restart

View the firewall status:

# service iptables status

9. Start and test ZooKeeper (start as the wusc user, not as root):

(1) As the wusc user, run the start script in each node's /home/wusc/zookeeper/node-0X/bin directory:

$ /home/wusc/zookeeper/node-01/bin/zkServer.sh start

$ /home/wusc/zookeeper/node-02/bin/zkServer.sh start

$ /home/wusc/zookeeper/node-03/bin/zkServer.sh start

(2) Run the jps command to view the process:

$ jps

1456 QuorumPeerMain

QuorumPeerMain is the ZooKeeper process; its presence shows the node started normally.

(3) View the node status:

$ /home/wusc/zookeeper/node-01/bin/zkServer.sh status

(4) View the ZooKeeper service output:

The service log is written to /home/wusc/zookeeper/node-0X/bin/zookeeper.out:

$ tail -500f zookeeper.out

10. Stop the ZooKeeper process:

$ zkServer.sh stop

11. Configure ZooKeeper to start on boot as the wusc user:

Edit the /etc/rc.local file on node-01, node-02, and node-03 respectively, adding the matching line:

su - wusc -c '/home/wusc/zookeeper/node-01/bin/zkServer.sh start'

su - wusc -c '/home/wusc/zookeeper/node-02/bin/zkServer.sh start'

su - wusc -c '/home/wusc/zookeeper/node-03/bin/zkServer.sh start'

Part II: Installing the Dubbo admin console (covered in the basics article; the focus here is on pointing the console at the registry cluster):

The Dubbo admin console manages the service providers and consumers registered in the ZooKeeper registry, but a console outage has no impact on running Dubbo services. The console therefore does not need to be highly available, and a single-node deployment is sufficient.

IP: 192.168.1.81

Deployment container: Tomcat 7

Port: 8080

1. Download (or upload) the latest Tomcat 7 (apache-tomcat-7.0.57.tar.gz) to /home/wusc/

2. Decompress:

$ tar -zxvf apache-tomcat-7.0.57.tar.gz

$ mv apache-tomcat-7.0.57 dubbo-admin-tomcat

3. Remove all files from the /home/wusc/dubbo-admin-tomcat/webapps directory:

$ cd /home/wusc/dubbo-admin-tomcat/webapps

$ rm -rf *

4. Upload the Dubbo admin console program dubbo-admin-2.5.3.war

to /home/wusc/dubbo-admin-tomcat/webapps

5. Unzip it into a directory named ROOT:

$ unzip dubbo-admin-2.5.3.war -d ROOT

Move dubbo-admin-2.5.3.war to the /home/wusc/tools directory as a backup:

$ mv dubbo-admin-2.5.3.war /home/wusc/tools

6. Configure dubbo.properties:

$ vi ROOT/WEB-INF/dubbo.properties

dubbo.registry.address=zookeeper://192.168.1.81:2181?backup=192.168.1.82:2182,192.168.1.83:2183

dubbo.admin.root.password=wusc.123

dubbo.admin.guest.password=wusc.123

(Change these passwords before going to production.)

7. Open port 8080 in the firewall. As the root user, modify /etc/sysconfig/iptables:

# vi /etc/sysconfig/iptables

Add:

## dubbo-admin-tomcat:8080

-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

Restart the firewall:

# service iptables restart

8. Start Tomcat 7:

$ /home/wusc/dubbo-admin-tomcat/bin/startup.sh

9. Browse http://192.168.1.81:8080/

10. Configure Tomcat to start on boot so the Dubbo admin console deploys automatically:

Edit the /etc/rc.local file on the host and add:

su - wusc -c '/home/wusc/dubbo-admin-tomcat/bin/startup.sh'

11. Verify that the application connects to the registry cluster.

12. Test the high availability of the registry cluster (for example, stop one ZooKeeper node and confirm that providers and consumers still work through the remaining two).

This post is from the course documentation of the Dubbo distributed system architecture video tutorial at www.roncoo.com.

