directory
There are two manually created directories under the ZooKeeper directory: zkdata and zkdatalog.
4. Configuration File Explanation
A heartbeat is sent every tickTime interval. #initLimit: the maximum number of heartbeat intervals (in tickTime units) that ZooKeeper tolerates during an initial connection (the "client" here is not a user client connecting to ZooKeeper, but a Follower server in the ZooKeeper ser
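To make these settings concrete, here is a minimal zoo.cfg sketch written to a scratch directory; the values and paths are illustrative assumptions, not taken from this article:

```shell
# Write a minimal standalone zoo.cfg; values and paths here are
# illustrative assumptions, not from the article.
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/zoo.cfg" <<'EOF'
# Basic time unit in ms; one heartbeat is sent per tickTime
tickTime=2000
# A follower may take up to initLimit*tickTime to connect to the leader
initLimit=10
# A follower may lag the leader by at most syncLimit*tickTime
syncLimit=5
dataDir=/tmp/zookeeper/data
clientPort=2181
EOF
grep '^tickTime' "$CONF_DIR/zoo.cfg"   # prints tickTime=2000
```

In a real deployment the file lives at the install root's conf/zoo.cfg and dataDir points at persistent storage.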
5. Start the ZooKeeper service
Since ZooKeeper is already included in the Kafka package, it provides a script that launches Kafka (in the kafka_2.10-0.8.2.2/bin directory) and a ZooKeeper configuration file (in the kafka_2.10-0.8.2.2/config directory):
/zookeeper/zookeeper-3.4.9
Note: by default there is no zoo.cfg configuration file, only the sample file zoo_sample.cfg; copy one yourself before use:
cp /usr/local/zookeeper/zookeeper-3.4.9/conf/zoo_sample.cfg /usr/local/zookeeper/
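The copy step just described can be sketched as follows; ZK_HOME is a scratch stand-in for the real install root (such as /usr/local/zookeeper/zookeeper-3.4.9), and a dummy sample file is created only so the sketch runs anywhere:

```shell
# ZK_HOME is a scratch stand-in for the real install root.
ZK_HOME="$(mktemp -d)"
mkdir -p "$ZK_HOME/conf"
echo 'tickTime=2000' > "$ZK_HOME/conf/zoo_sample.cfg"  # stand-in for the shipped sample

# The actual step: ZooKeeper reads conf/zoo.cfg, so the sample
# must be copied into place before the first start.
cp "$ZK_HOME/conf/zoo_sample.cfg" "$ZK_HOME/conf/zoo.cfg"
```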
values. Start ZooKeeper: ./zkServer.sh start
3) zk-2: adjust the profile (other configuration the same as zk-0's):
clientPort=2183  # only the configuration above needs to be modified; leave the rest at default values
Start ZooKeeper: ./zkServer.sh start
Two. Kafka cluster construction
Because the broker configuration file involves the related conventions of ZooKeeper, w
are:
/zyxx_data/zookeeper/data00
/zyxx_data/zookeeper/data01
/zyxx_data/zookeeper/data02
After creating the corresponding directories, create a file named myid in each of the three directories. The file contains only a single number, representing the unique ID of the ZooKeeper node; that is, to ensure that the
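The directory and myid creation above can be sketched as a loop; BASE is a scratch root so the sketch is runnable anywhere, while a real deployment would use the /zyxx_data paths directly:

```shell
# Create the three data directories and write a unique myid into each.
# BASE is a scratch root for the sketch; in a real deployment the
# directories would be /zyxx_data/zookeeper/data00 .. data02.
BASE="$(mktemp -d)"
for i in 0 1 2; do
  dir="$BASE/zyxx_data/zookeeper/data0$i"
  mkdir -p "$dir"
  # myid holds a single number: the node's unique server ID (here i+1)
  echo "$((i + 1))" > "$dir/myid"
done
cat "$BASE"/zyxx_data/zookeeper/data0*/myid   # prints 1, 2, 3 on separate lines
```

Each myid value must match the x in the corresponding server.x line of zoo.cfg.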
To demonstrate the cluster effect, a virtual machine (Windows 7) is prepared, and a single-IP multi-node ZooKeeper cluster is built in the virtual machine (the procedure is the same for nodes on multiple IPs); Kafka is installed on both the native machine (Win 7) and the virtual machine.
Preparation notes:
1. Three ZooKeeper servers, the local installation of one as Serve
Start or stop ZooKeeper and view the status (all nodes):
./zkServer.sh start/stop/status
Note: in the status output, Mode shows the role the server plays in the cluster. The role of each server is not fixed; the leader is produced by ZooKeeper's Fast Leader Election algorithm. The ZooKeeper cluster is now set up; modify the corresponding configuration files according to actual business needs.
3. Build a
ID 1 or 2 or 3: echo "1" > myid
5: Turn off the firewall (best practice is to have ops configure a firewall policy rather than shutting it down):
clush -g kafka "service iptables status"
clush -g kafka "service iptables stop"
6: Start ZooKeeper on all nodes (the other nodes also have zoo.cfg configured and /tmp/zookeeper/myid created): clush -g kafka /opt/
: cd dataDir (/disk0/var/zookeeper/datas); echo 1 > myid. Note: 1 is the ID of the ZooKeeper instance, the x in server.x.
3. Start the server1, server2, server3 ZooKeeper instances in turn. Start command: $ZOOKEEPER_HOME/bin/zkServer.sh start; stop command: $ZOOKEEPER_HOME
=10
syncLimit=5
dataDir=/home/young/zookeeper/data
clientPort=2181
Don't forget to create the new dataDir directory:
mkdir /home/young/zookeeper/data
Create environment variables for ZooKeeper: open the /etc/profile file (vi /etc/profile) and add the following at the very end:
export ZOOKEEPER_HOME=/home/young/zookeeper
export PATH=.:$
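The final export line is truncated in the source; a common completion, which is an assumption here, adds ZooKeeper's bin directory to PATH. The sketch writes the lines to a temp file rather than /etc/profile so it is side-effect free:

```shell
# ZOOKEEPER_HOME matches the text; the PATH line is an assumed
# completion of the truncated export in the source.
PROFILE_SNIPPET="$(mktemp)"
cat > "$PROFILE_SNIPPET" <<'EOF'
export ZOOKEEPER_HOME=/home/young/zookeeper
export PATH=.:$ZOOKEEPER_HOME/bin:$PATH
EOF
. "$PROFILE_SNIPPET"
echo "$ZOOKEEPER_HOME"   # prints /home/young/zookeeper
```

After appending the equivalent lines to /etc/profile, run `source /etc/profile` so the current shell picks them up.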
evaluation. Or try :help.
scala> :quit
C:\Users\zyx>
1.3.4. Thrift
C:\Users\zyx>thrift -version
Thrift version 0.11.0
1.3.5. Zookeeper
1.3.5.1. Configuration
In the D:\Project\ServiceMiddleWare\zookeeper-3.4.10\conf directory, create a zoo.cfg file that reads as follows:
tickTime=2000
dataDir=d:/project/servicemiddleware/zookeeper-3.4.10/data/db
dataLogDir=d:/project/servicemiddleware/
The previous section completed Hive; in this section we will set up ZooKeeper, mainly because Kafka, which comes later, needs to run on it.
ZooKeeper download and install
Download the ZooKeeper 3.4.5 package; it can be downloaded from Baidu Netdisk. Link: http://pan.baidu.com/s/1gePE9O3 Password: UNMT.
After downloading, upload it with Xftp to the Spark1 server; I placed it in /home
Create an empty file kafka.log under /tmp/logs; if there is no logs directory under /tmp, you will need to create the logs directory first.
5.3. Create a shell script to generate log data
Create a kafkaoutput.sh script under the hadoop user directory and give it execute permission; it appends content to /tmp/logs/kafka.log. The specific contents of the kafkaoutput.sh script are as follows:
for ((i=0;i
do echo "kafka_test-$i" >> /tmp/logs/kafka.log;
done
5.4. Start ZooKeeper to start the ZK service
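The loop condition of kafkaoutput.sh is truncated in the source; here is a runnable sketch that assumes 10 iterations (an assumption, not from the article) and writes to a temp file rather than /tmp/logs/kafka.log. Note that bash does not concatenate strings with +, so the counter is interpolated directly into the string:

```shell
#!/bin/bash
# Sketch of kafkaoutput.sh: append numbered test lines to a log file.
# LOG stands in for /tmp/logs/kafka.log; the bound of 10 iterations
# is an assumption, since the source truncates the loop condition.
LOG="$(mktemp)"
for ((i = 0; i < 10; i++)); do
  echo "kafka_test-$i" >> "$LOG"
done
```

Give the real script execute permission with `chmod +x kafkaoutput.sh` before running it as the hadoop user.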