When moving data between test ZooKeeper clusters, the problem can usually be solved by copying the ZooKeeper data files directly. Recently, however, we accidentally deleted the data under one path of a cluster. Because that data could no longer be read, it could not simply be copied back at the file level, so this scenario could only be solved in code.
I have used this code-based approach in two scenarios:
1. Data was accidentally deleted and had to be restored by copying it from another ZooKeeper cluster.
2. A test environment was being set up and part of a cluster's node data needed to be imported into a local instance.
ZooKeeper manages its nodes in much the same way as a file system, so this requirement becomes very simple: recursively walk the source path and create each node under the new cluster. A simple implementation is as follows:
import com.metaboy.common.zk.dao.ZkDaoImpl;
import com.netflix.curator.framework.CuratorFramework;
import com.netflix.curator.framework.CuratorFrameworkFactory;
import com.netflix.curator.retry.RetryNTimes;

import java.util.List;

/**
 * @author yuxiong.wangy
 * Date: 14-8-14
 * Time: PM
 */
public class ZookeeperDemo {
    protected static CuratorFramework client_src;
    protected static CuratorFramework client_dst;
    protected static String namespace_src;
    protected static String namespace_dst;
    protected static String zkRoot_src = "/app";

    public static void main(String[] args) throws InterruptedException {
        String zkConnectionStr_src = "*.*:2181,*.*:2181,*.*:2181";
        client_src = getZKClient(zkConnectionStr_src, namespace_src, 60000);
        String zkConnectionStr_dst = "*.*:2181,*.*:2181,*.*:2181";
        client_dst = getZKClient(zkConnectionStr_dst, namespace_dst, 60000);
        copyDataRecursion(zkRoot_src);
    }

    /*
     * Recursively copy data
     */
    public static void copyDataRecursion(String parent) {
        List<String> groups = ZkDaoImpl.getChildren(client_src, parent);
        if (groups.size() > 0) {
            for (String group : groups) {
                String path = parent + "/" + group;
                if (ZkDaoImpl.getData(client_src, path) != null) {
                    ZkDaoImpl.createPersistentFile(client_dst, path, ZkDaoImpl.getData(client_src, path));
                    System.out.println("[" + path + "]:" + ZkDaoImpl.getData(client_src, path));
                } else {
                    ZkDaoImpl.createPersistentFile(client_dst, path);
                    System.out.println("[" + path + "]:");
                }
                if (ZkDaoImpl.getChildren(client_src, path).size() > 0) {
                    copyDataRecursion(path);
                }
            }
        } else {
            ZkDaoImpl.createPersistentFile(client_dst, parent, ZkDaoImpl.getData(client_src, parent));
        }
    }

    public static CuratorFramework getZKClient(String zkConnectionStr, String namespace, int sessionTimeout) throws InterruptedException {
        int connectTimeout = 60000;
        int retry = 3;
        int retryTimeout = 10000;
        CuratorFramework client = CuratorFrameworkFactory.builder().connectString(zkConnectionStr)
                .retryPolicy(new RetryNTimes(retry, retryTimeout)).connectionTimeoutMs(connectTimeout)
                .sessionTimeoutMs(sessionTimeout).namespace(namespace).build();
        client.start();
        client.getZookeeperClient().blockUntilConnectedOrTimedOut();
        return client;
    }
}
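The traversal order of the recursion can be traced without a live cluster by swapping the ZooKeeper calls for a plain in-memory map. The class and method names below are hypothetical stand-ins for illustration, not part of ZkDaoImpl:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the cluster: a map from a node's path to its
// children, so copyDataRecursion's visit order can be observed offline.
public class CopyOrderDemo {

    // Mirrors copyDataRecursion: for each child of `parent`, record the node
    // (where the real code calls createPersistentFile), then descend into it.
    public static List<String> copyOrder(Map<String, List<String>> tree, String parent) {
        List<String> visited = new ArrayList<>();
        for (String child : tree.getOrDefault(parent, Collections.<String>emptyList())) {
            String path = parent + "/" + child;
            visited.add(path);
            visited.addAll(copyOrder(tree, path));
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/app", Arrays.asList("group1", "group2"));
        tree.put("/app/group1", Arrays.asList("node1"));
        // prints [/app/group1, /app/group1/node1, /app/group2]
        System.out.println(copyOrder(tree, "/app"));
    }
}
```

Each node is created before its children are visited, which is exactly the order the destination cluster needs: a child znode cannot be created before its parent exists.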
ZkDaoImpl encapsulates a set of ZK operations by wrapping the native Curator methods. For example:
/**
 * Get file data
 *
 * @param path file path
 * @return file content
 */
public static String getData(CuratorFramework client, String path) {
    try {
        return new String(client.getData().forPath(path));
    } catch (Exception e) {
        throw new ZkException(ZkErrors.GET_DATA_EXCEPTION, "path:" + path, e);
    }
}
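The other ZkDaoImpl helpers used above, getChildren and createPersistentFile, are not shown in the original. A minimal sketch of how they might look, assuming they wrap Curator the same way getData does (the class name ZkDaoSketch is hypothetical, and plain RuntimeException stands in for the author's ZkException):

```java
import com.netflix.curator.framework.CuratorFramework;

import java.util.List;

// Hypothetical sketch of the remaining ZkDaoImpl-style wrappers.
public class ZkDaoSketch {

    /** List the children of a node; wraps Curator's getChildren(). */
    public static List<String> getChildren(CuratorFramework client, String path) {
        try {
            return client.getChildren().forPath(path);
        } catch (Exception e) {
            throw new RuntimeException("getChildren failed, path:" + path, e);
        }
    }

    /** Create a persistent node with data; missing ancestors are filled in. */
    public static void createPersistentFile(CuratorFramework client, String path, String data) {
        try {
            client.create().creatingParentsIfNeeded().forPath(path, data.getBytes());
        } catch (Exception e) {
            throw new RuntimeException("create failed, path:" + path, e);
        }
    }

    /** Create a persistent node without data. */
    public static void createPersistentFile(CuratorFramework client, String path) {
        try {
            client.create().creatingParentsIfNeeded().forPath(path);
        } catch (Exception e) {
            throw new RuntimeException("create failed, path:" + path, e);
        }
    }
}
```

Curator's create() defaults to a persistent znode, so no explicit CreateMode is needed here; creatingParentsIfNeeded() also makes the copy tolerant of paths whose ancestors have not been created yet on the destination.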