Original link: ZooKeeper in Action: Cluster Mode on a Single Machine
The previous article introduced the installation and use of ZooKeeper in standalone mode, but ZooKeeper is designed for distributed application scenarios, so it usually runs in cluster mode. Since we are short of machines, today we will deploy three ZooKeeper instances on a single machine to form a cluster. Unzip the ZooKeeper installation package into three directories under /opt, one per instance: /opt/zookeeper1, /opt/zookeeper2, and /opt/zookeeper3.
1. First edit the conf/zoo.cfg file in each ZooKeeper directory. The three configuration files are as follows:
$ cat /opt/zookeeper1/conf/zoo.cfg
tickTime=2000
dataDir=/opt/zookeeper1/data
clientPort=2181
initLimit=10
syncLimit=5
server.1=127.0.0.1:2881:3881
server.2=127.0.0.1:2882:3882
server.3=127.0.0.1:2883:3883
$ cat /opt/zookeeper2/conf/zoo.cfg
tickTime=2000
dataDir=/opt/zookeeper2/data
clientPort=2182
initLimit=10
syncLimit=5
server.1=127.0.0.1:2881:3881
server.2=127.0.0.1:2882:3882
server.3=127.0.0.1:2883:3883
$ cat /opt/zookeeper3/conf/zoo.cfg
tickTime=2000
dataDir=/opt/zookeeper3/data
clientPort=2183
initLimit=10
syncLimit=5
server.1=127.0.0.1:2881:3881
server.2=127.0.0.1:2882:3882
server.3=127.0.0.1:2883:3883
There are a few points to note:
* dataDir: the dataDir of the three instances must be distinct; here each instance's data directory lives under its own installation directory.
* clientPort: the port that ZooKeeper clients use to connect to the server; because this is a cluster built on a single machine, the three instances must each use a different port.
* server.N: defines the IP and ports of each instance in the cluster. Since all instances run on one machine, the IP is 127.0.0.1 for all of them, but the two ports that follow must differ between instances.
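Since the three configuration files differ only in the instance number, they can be generated in one loop. A minimal sketch, using a scratch directory ./zk-sandbox (an assumption, so it runs without root) in place of /opt:

```shell
# Generate the three zoo.cfg files; only dataDir and clientPort vary per instance.
base=./zk-sandbox
for i in 1 2 3; do
  mkdir -p "$base/zookeeper$i/conf"
  cat > "$base/zookeeper$i/conf/zoo.cfg" <<EOF
tickTime=2000
dataDir=$base/zookeeper$i/data
clientPort=$((2180 + i))
initLimit=10
syncLimit=5
server.1=127.0.0.1:2881:3881
server.2=127.0.0.1:2882:3882
server.3=127.0.0.1:2883:3883
EOF
done
```

Replace ./zk-sandbox with /opt to reproduce the exact layout described above.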
2. Create the data directory and instance ID files
mkdir /opt/zookeeper1/data
mkdir /opt/zookeeper2/data
mkdir /opt/zookeeper3/data
echo 1 > /opt/zookeeper1/data/myid
echo 2 > /opt/zookeeper2/data/myid
echo 3 > /opt/zookeeper3/data/myid
Note that a myid file must be created in each instance's dataDir; it records that instance's ID, which must match the N in the corresponding server.N line of zoo.cfg.
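The commands above can also be written as one loop, since the instance number, directory name, and myid value all line up. A sketch, again using a scratch directory ./zk-sandbox (an assumption) rather than /opt:

```shell
# Create each instance's data directory and write its ID into myid.
base=./zk-sandbox
for i in 1 2 3; do
  mkdir -p "$base/zookeeper$i/data"
  echo "$i" > "$base/zookeeper$i/data/myid"
done
```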
3. Start Zookeeper Service
Enter each instance's bin directory and run "./zkServer.sh start" to start that ZooKeeper service.
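All three instances can be started the same way, so a loop works here too. Shown as a dry run (each command is printed instead of executed, so the sketch runs anywhere); drop the echo to actually start the services:

```shell
# Print the start command for each of the three instances.
for i in 1 2 3; do
  echo "/opt/zookeeper$i/bin/zkServer.sh start"
done
```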
4. Client Connection
Go to any instance's bin directory and run the following commands to connect to each of the three ZooKeeper services:
./zkCli.sh -server 127.0.0.1:2181
./zkCli.sh -server 127.0.0.1:2182
./zkCli.sh -server 127.0.0.1:2183
Create a znode on one of the clients:
create /mykey myvalue
Then view the newly created znode from another client:
get /mykey
5. View Zookeeper Status
After the instances start, ZooKeeper runs a leader election among them. To find out which instance is the leader, run the "./zkServer.sh status" command in each instance's bin directory.
If it is the leader:
$ ./zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper1/bin/../conf/zoo.cfg
Mode: leader
If it is not the leader:
$ ./zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper3/bin/../conf/zoo.cfg
Mode: follower
At this point, you can stop the leader instance and then check the status of the other two: the remaining instances will elect a new leader between themselves.
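The cluster survives losing the leader because ZooKeeper only needs a majority quorum to operate: with n servers the quorum is n/2 + 1 (integer division), so the cluster tolerates n minus that many failures. A quick sketch of the arithmetic:

```shell
# Majority quorum for a ZooKeeper ensemble of n servers.
# With n=3 the quorum is 2, so the cluster survives 1 failed instance,
# which is why the remaining two can still elect a leader.
n=3
quorum=$(( n / 2 + 1 ))
tolerated=$(( n - quorum ))
echo "servers=$n quorum=$quorum tolerated_failures=$tolerated"
```

Note that a 4-server ensemble still only tolerates 1 failure (quorum 3), which is why ZooKeeper clusters are usually sized with an odd number of servers.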