In the previous section we finished installing Hive; in this section we will set up ZooKeeper, mainly because Kafka, which comes later, needs to run on top of it.
ZooKeeper Download and Installation
Download the ZooKeeper 3.4.5 package; it is available on Baidu Netdisk. Link: http://pan.baidu.com/s/1gePE9O3 Password: UNMT.
After downloading, upload it to the spark1 server with Xftp; I placed it in the /home/software directory.
[root@spark1 lib]# cd /home/software/
[root@spark1 software]# tar -zxf zookeeper-3.4.5.tar.gz        // unpack
[root@spark1 software]# mv zookeeper-3.4.5 /usr/lib/zookeeper  // rename and move to the /usr/lib directory
[root@spark1 software]# cd /usr/lib
Next, set the ZooKeeper environment variables.
[root@spark1 lib]# vi ~/.bashrc   // edit environment variables
// add the ZOOKEEPER_HOME variable; don't forget that the PATH variable must be modified as well
export ZOOKEEPER_HOME=/usr/lib/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$ZOOKEEPER_HOME/bin   // append the ZooKeeper path
Save and exit, then run source ~/.bashrc to make the changes take effect.
Once that is done, we move on to editing the ZooKeeper configuration file.
- Rename the zoo_sample.cfg file to zoo.cfg and modify it.
[root@spark1 lib]# cd zookeeper/conf/
[root@spark1 conf]# mv zoo_sample.cfg zoo.cfg
[root@spark1 conf]# vi zoo.cfg
// modify dataDir
dataDir=/usr/lib/zookeeper/data
// add the server list (configure a minimum of three nodes)
server.0=spark1:2888:3888
server.1=spark2:2888:3888
server.2=spark3:2888:3888
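For reference, a complete minimal zoo.cfg for this three-node layout might look like the sketch below. The tickTime/initLimit/syncLimit values and clientPort=2181 are the stock defaults from zoo_sample.cfg, kept here as assumptions; only dataDir and the server.N lines are the edits this section makes.

```properties
# basic time unit in milliseconds; heartbeats are sent every tickTime
tickTime=2000
# how many ticks a follower may take to initially connect and sync with the leader
initLimit=10
# how many ticks a follower may fall behind the leader
syncLimit=5
# where snapshots and the myid file live
dataDir=/usr/lib/zookeeper/data
# port that clients (e.g. Kafka) connect to
clientPort=2181
# server.<myid>=<host>:<follower-port>:<leader-election-port>
server.0=spark1:2888:3888
server.1=spark2:2888:3888
server.2=spark3:2888:3888
```

Port 2888 is used by followers to talk to the leader, and 3888 is used for leader election.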
Save and exit when the changes are done.
Then go to the /usr/lib/zookeeper directory, create the data folder, and set the node ID.
[root@spark1 conf]# cd ..
[root@spark1 zookeeper]# mkdir data
[root@spark1 zookeeper]# cd data
// create a myid file
[root@spark1 data]# vi myid   // add 0
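The myid step can be sketched as the shell snippet below. On the real nodes the directory would be /usr/lib/zookeeper/data; a temporary directory is used here only so the sketch runs anywhere without root.

```shell
#!/bin/sh
# Sketch of the myid setup (assumption: temp dir stands in for
# /usr/lib/zookeeper/data so this is runnable outside the cluster).
DATA_DIR=$(mktemp -d)/data
mkdir -p "$DATA_DIR"

# The id written must match this host's server.N line in zoo.cfg:
# spark1 -> 0, spark2 -> 1, spark3 -> 2
echo 0 > "$DATA_DIR/myid"      # on spark1
cat "$DATA_DIR/myid"           # prints 0
```

On spark2 the echo line would write 1, and on spark3 it would write 2.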
Save and exit.
- Copy the ZooKeeper directory and environment variables to spark2 and spark3, and set their myid files to 1 and 2 respectively.
[root@spark1 data]# cd /usr/lib
// copy to spark2
[root@spark1 lib]# scp -r zookeeper root@spark2:/usr/lib/
[root@spark1 lib]# scp ~/.bashrc root@spark2:~/
// after copying, don't forget to run the source ~/.bashrc command on spark2 to make it take effect
Do the same for spark3 when finished. (Set the myid file to 1 on spark2 and 2 on spark3.)
Start ZooKeeper on each of the three servers and check its status.
[root@spark1 lib]# zookeeper/bin/zkServer.sh start
After all three are started, check the status on each node.
[root@spark1 lib]# zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/lib/zookeeper/bin/../conf/zoo.cfg
Mode: leader
// stop:    zookeeper/bin/zkServer.sh stop
// restart: zookeeper/bin/zkServer.sh restart
One of the nodes shows Mode: leader and the other two show Mode: follower; if so, everything is OK and the ZooKeeper cluster is complete!
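To check the role of each node quickly, the Mode line can be extracted from the zkServer.sh status output. A small sketch, with a hypothetical inlined sample standing in for the real output (in practice it would come from running zookeeper/bin/zkServer.sh status on each host, e.g. over ssh):

```shell
#!/bin/sh
# Extract the role reported by `zkServer.sh status`.
# sample_status inlines hypothetical output so the sketch is self-contained;
# on the cluster you would pipe the real command's output instead.
sample_status() {
  printf 'JMX enabled by default\n'
  printf 'Using config: /usr/lib/zookeeper/bin/../conf/zoo.cfg\n'
  printf 'Mode: leader\n'
}

# take everything after "Mode: " on the Mode line
mode=$(sample_status | awk -F': ' '/^Mode:/ {print $2}')
echo "$mode"    # prints: leader
```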
Scala Installation
We already covered the Scala installation process in the second section; just make sure Scala is installed on spark2 and spark3 as well, so there is not much more to say here.
Kafka Installation
Download the Kafka 0.8.1 package (built for Scala 2.9.2); it is available on Baidu Netdisk. Link: http://pan.baidu.com/s/1gePE9O3 Password: UNMT.
After downloading, upload it to the spark1 server with Xftp; I placed it in the /home/software directory.
[root@spark1 lib]# cd /home/software/
[root@spark1 software]# tar -zxf kafka_2.9.2-0.8.1.tgz
[root@spark1 software]# mv kafka_2.9.2-0.8.1 /usr/lib/kafka   // rename and move to /usr/lib
Modify the server.properties configuration file.
[root@spark1 lib]# vi kafka/config/server.properties
// broker.id must be unique; it starts from 0 by default
broker.id=0
// modify zookeeper.connect
zookeeper.connect=spark1:2181,spark2:2181,spark3:2181
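Putting the two edits together, the relevant part of server.properties on each broker would look like the sketch below. broker.id differs per host; the port and log.dirs lines are the stock 0.8.x defaults, shown here as assumptions for context.

```properties
# unique per broker: 0 on spark1, 1 on spark2, 2 on spark3
broker.id=0
# port the broker listens on (0.8.x default)
port=9092
# where Kafka stores its log segments (default from the stock config)
log.dirs=/tmp/kafka-logs
# the ZooKeeper ensemble built above
zookeeper.connect=spark1:2181,spark2:2181,spark3:2181
```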
SLF4J Installation
Download the SLF4J 1.7.6 package and put it in/
Spark from Beginner to Mastery (10): Environment Setup (Building ZooKeeper and Kafka)