CentOS 7 ZooKeeper single-host/cluster installation and startup
ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and an important component of Hadoop and HBase. It is software that provides consistency services for distributed applications, with functions such as configuration maintenance, naming service, distributed synchronization, and group services. The basic operation process of ZooKeeper is: 1. elect a Leader; 2. synchronize data. Many algorithms can be used for the Leader election, but the criteria to be met are the same: the Leader must have the highest execution ID (similar to root permission), and it becomes Leader once a majority of the machines in the cluster have responded, after which the other nodes follow it. ZooKeeper is also the registry currently recommended by Alibaba's open-source Dubbo distributed service framework.
Install
Next we move on to the ZooKeeper installation stage. Installing ZooKeeper is relatively simple. Here I have prepared three Linux virtual machines to simulate a cluster environment; if you have the resources, you can set up at least three machines yourself. There is actually little difference between a cluster and a single machine, except that the number of nodes in the cluster should be odd (2n + 1, such as 3, 5, 7).
Tools: CentOS 7, zookeeper-3.4.6, Xshell 5
Note: ZooKeeper is written in Java and runs in a Java environment, so you must install Java before you can install ZooKeeper. I will not cover that here.
Install java
IP address of server CentOS7_64_1: 192.168.2.101, hostname: h1
Standalone Mode
Decompress the installation package
Run tar -zxvf zookeeper-3.4.6.tar.gz to unzip the package, as shown below:
Create data and logs Directories
After decompressing, enter the zookeeper-3.4.6 directory and create the data directory and the logs directory. ZooKeeper does not include these two directories by default; you need to create and specify them yourself.
[grid@h1 zookeeper-3.4.6]$ mkdir data
[grid@h1 zookeeper-3.4.6]$ mkdir logs
Create a myid File
Create a myid file under dataDir=/home/grid/zookeeper-3.4.6/data
Edit the myid file and enter the number corresponding to that machine. For example, on the first ZooKeeper node we put 1 in the myid file. If you are only installing and configuring a single node, there is just one server; in the cluster configured later there are more servers, which get 2, 3, 4, and so on.
[grid@h1 data]$ vi myid
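Alternatively, a one-line sketch that writes and verifies the file (assuming you are already in the data directory):
[grid@h1 data]$ echo 1 > myid
[grid@h1 data]$ cat myid
1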
Copy and modify the configuration file
Go to the zookeeper-3.4.6/conf directory and make a copy of zoo_sample.cfg named zoo.cfg.
[grid@h1 conf]$ cp zoo_sample.cfg zoo.cfg
Note: the file must be named zoo.cfg because that is the configuration file ZooKeeper reads by default at startup.
server.1=h1:2888:3888 is interpreted as follows:
1 is the number corresponding to the myid created earlier; it identifies the machine. h1 is the hostname mapping I configured; you can put your own IP address there directly instead, for example server.1=192.168.2.101:2888:3888;
Hosts mapping configuration: vi /etc/hosts, and enter a name corresponding to your own IP address, similar to the hosts file on Windows. This step can be skipped if you put the IP address directly in ZooKeeper's zoo.cfg file.
Port 2888 is the port this server uses to exchange information with the Leader of the cluster; it is the communication port between the ZooKeeper servers;
Port 3888 is the port the ZooKeeper servers use among themselves for leader election.
Other cfg Parameters
tickTime=2000
tickTime is the interval, in milliseconds, at which heartbeats are exchanged between ZooKeeper servers, or between a client and a server; a heartbeat is sent every tickTime.
initLimit=10
The initLimit setting is the maximum number of heartbeat intervals ZooKeeper tolerates while a connection is being initialized. The "client" here is not an application client connecting to the ZooKeeper server, but a Follower server in the ZooKeeper cluster connecting to the Leader. If no response has been received after more than 10 tickTime intervals, the connection attempt is considered failed. The total allowed time is 10 * 2000 = 20 seconds.
syncLimit=5
The syncLimit setting bounds the time allowed for a message, request, and response to travel between the Leader and a Follower; the maximum is 5 * 2000 = 10 seconds.
dataDir=/home/grid/zookeeper-3.4.6/data
dataDir, as the name implies, is the directory where ZooKeeper stores its data; by default ZooKeeper also writes its transaction log files to this directory.
clientPort=2181
clientPort is the port that clients (applications) use to connect to the ZooKeeper server. ZooKeeper listens on this port and accepts client access requests.
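Putting the values above together, the zoo.cfg for this standalone node could look like the following sketch (the dataLogDir line is my addition, pointing at the logs directory created earlier; everything else mirrors the parameters just described):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/grid/zookeeper-3.4.6/data
dataLogDir=/home/grid/zookeeper-3.4.6/logs
clientPort=2181
server.1=h1:2888:3888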
Add environment variable
Modify the .bash_profile file in the user's home directory. This file is hidden by default.
[grid@h1 data]$ vi /home/grid/.bash_profile and add the following content:
export ZOOKEEPER_HOME=/home/grid/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH
Make the configuration file take effect:
[grid@h1 data]$ source /home/grid/.bash_profile
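As a quick sanity check (assuming the two lines above were added correctly), the variable and the startup script should now resolve:
[grid@h1 data]$ echo $ZOOKEEPER_HOME
/home/grid/zookeeper-3.4.6
[grid@h1 data]$ which zkServer.sh
/home/grid/zookeeper-3.4.6/bin/zkServer.sh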
Firewall Configuration
Open the ports to be used in the firewall. Port 22 is generally open by default, which is why we can already connect with a remote tool over port 22. Now we need to open ports 2181, 2888, and 3888. Switch to the root user and execute the following commands:
chkconfig iptables on
service iptables start (start the firewall)
An error was reported when I ran these commands on my machine; it needs to be resolved first.
Solution: run yum install iptables-services to download and install the package.
After the installation, run chkconfig iptables on and service iptables start again.
Next, open the firewall port.
[root@h1 ~]# vi /etc/sysconfig/iptables
Copy the line for port 22 three times and change the port in each copy to one of the three ports to be opened, as shown below:
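The result should look roughly like this (a sketch based on the default CentOS rule for port 22; the existing rules in your file may differ slightly):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2181 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2888 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3888 -j ACCEPT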
Restart Firewall
[root@h1 ~]# service iptables restart
Start zookeeper
Start and test ZooKeeper (start it as the grid user, not as root), executing the command from the bin directory of the ZooKeeper installation.
[grid@h1 bin]$ ./zkServer.sh start
Run the jps command to check the status. QuorumPeerMain is the ZooKeeper process, so it started normally.
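For reference, the output looks roughly like this (the process IDs are only illustrative):
[grid@h1 bin]$ jps
2601 QuorumPeerMain
2650 Jps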
View the output information of the zookeeper service.
/home/grid/zookeeper-3.4.6/bin/zookeeper.out
[grid@h1 bin]$ tail -222f zookeeper.out
Cluster Mode
IP address of server CentOS7_64_1: 192.168.2.101
IP address of server CentOS7_64_2: 192.168.2.102
IP address of server CentOS7_64_3: 192.168.2.103
First, configure the other two machines following the steps above, and make sure ZooKeeper starts successfully on each machine on its own. After that we have three standalone ZooKeeper instances; now we configure them as a cluster.
Hosts Configuration
First, modify the hosts mapping on all three virtual machines: vi /etc/hosts, and add the IP address and hostname alias of each machine on all three machines (the change takes effect immediately after saving), as shown below:
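A sketch of the mapping entries to add on each machine (h1 and h2 match the hostnames that appear in the log messages below; h3 is my assumed alias for the third machine):
192.168.2.101 h1
192.168.2.102 h2
192.168.2.103 h3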
Modify firewall port
Next, configure the ports on the three machines, and open the corresponding ports below in each machine's firewall.
Machine 1 - ports: 2181, 2881, 3881
Machine 2 - ports: 3882
Machine 3 - ports: 3883
Zookeeper Configuration
vi conf/zoo.cfg on each machine, paying attention to the port number parameters for the three machines (see the sketch after these steps).
Modify the myid file in the data folder on each machine to 1, 2, and 3 respectively.
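A sketch of the server entries that zoo.cfg on each of the three machines could contain, following the port plan above (clientPort stays at 2181; the 2882 and 2883 communication ports are my assumption extending the pattern, while the 3881/3882/3883 election ports match the exception messages shown later):
server.1=h1:2881:3881
server.2=h2:2882:3882
server.3=h3:2883:3883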
Start zookeeper
When I started the first node, the following exception occurred, which made me wonder whether some firewall configuration had not taken effect.
Exception information:
Cannot open channel to 2 at election address h2/192.168.2.102:3882 java.net.NoRouteToHostException: No route to host
It seemed that the firewall configuration had not taken effect, so I restarted the firewall on all three machines with service iptables restart.
The following is the exception thrown after the restart.
Exception information:
Cannot open channel to 1 at election address h1/192.168.2.101:3881 java.net.ConnectException: Connection refused
Resolve exceptions
At first I thought it was simply waiting for the other machines to start, but the problem persisted after the other two machines were up, so I spent half an hour looking for the cause. I found no configuration problems, but the hosts file on the three machines contained the following lines, so I decided to delete them and restart.
127.0.0.1 localhost h1 localhost4 localhost4.localdomain4
::1 localhost h1 localhost6 localhost6.localdomain6
Restarting again succeeds
The reason for the earlier error is that in a cluster each node waits for the other machines; a single machine cannot hold an election on its own. Once the second machine starts, everything returns to normal.
View status
Next, run the status command to view the status of each node; a sketch of the output follows the list below.
The first machine started: machine 3 - leader
The second machine started: machine 2 - follower
The third machine started: machine 1 - follower
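For reference, checking the status looks roughly like this (output abridged; which node is the leader depends on the startup order):
[grid@h3 bin]$ ./zkServer.sh status
Mode: leader
[grid@h1 bin]$ ./zkServer.sh status
Mode: follower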
Set the service to start at startup
Configure ZooKeeper to start as the grid user at boot; otherwise it would be very troublesome in a production environment.
Edit the /etc/rc.local file and add export JAVA_HOME=/usr/local/jdk; otherwise the Java service below cannot start.
Starting as a user other than root:
export JAVA_HOME=/usr/local/jdk
su - grid -c '/home/grid/zookeeper-3.4.6/bin/zkServer.sh start'
Starting as the root user:
export JAVA_HOME=/usr/local/jdk
/home/grid/zookeeper-3.4.6/bin/zkServer.sh start
Note: su - grid means switch to the grid user, and -c means run the command that follows.
After adding these lines, check whether /etc/rc.local has execute permission; by default it does not:
chmod +x /etc/rc.d/rc.local
After rebooting, you can confirm that /etc/rc.local was executed and ZooKeeper came up automatically.
High availability: once the leader stops serving, the remaining followers elect a new leader. You can try it and watch the status change.
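A quick way to try it (a sketch, assuming machine 3 is still the leader as in the status output above):
[grid@h3 bin]$ ./zkServer.sh stop
[grid@h1 bin]$ ./zkServer.sh status
[grid@h2 bin]$ ./zkServer.sh status
One of the two remaining nodes should now report Mode: leader.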