Preparing the server
Four servers, configured in each host's /etc/hosts file, named storm, storm1, storm2, and storm3; the storm host serves as the nimbus, drpc, and admin node.
The Java environment must be configured properly on every server.
Deploying Zookeeper Clusters
Create data and log directories under each Zookeeper installation directory, and under data create a file named myid containing 1, 2, and 3 respectively.
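The per-node step above can be sketched as follows. This demo uses a temporary directory for illustration; on a real node the base directory would be /usr/local/zookeeper, with ID set to 1, 2, or 3 depending on the server:

```shell
# Demo in a temp dir; on storm1/2/3 set ZK_HOME=/usr/local/zookeeper and ID=1/2/3.
ZK_HOME=$(mktemp -d)
ID=1
mkdir -p "$ZK_HOME/data" "$ZK_HOME/log"
echo "$ID" > "$ZK_HOME/data/myid"   # Zookeeper reads its server ID from this file
cat "$ZK_HOME/data/myid"
```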
zoo.cfg configuration:
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/log
server.1=storm1:2888:3888
server.2=storm2:2888:3888
server.3=storm3:2888:3888
# Zookeeper snapshots and transaction logs take up a fair amount of space, so they need regular cleanup; the settings below purge every 24 hours and keep at most 10 snapshots
autopurge.snapRetainCount=10
autopurge.purgeInterval=24
Then start Zookeeper on each node and check that the leader/follower election works correctly; if startup fails, check whether the firewall has ports 2181, 2888, and 3888 open.
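One quick way to check each node's role is the four-letter "stat" command on the client port. This is a sketch that assumes nc is installed and the stat command is enabled on the servers; the loop is left commented out so it only makes sense on a live cluster:

```shell
# check_role queries a node's "stat" four-letter command on the client port
# and extracts the "Mode:" line (leader or follower).
check_role() { echo stat | nc "$1" 2181 | grep '^Mode:'; }
# Uncomment on a live cluster; exactly one node should report "Mode: leader":
# for h in storm1 storm2 storm3; do echo "$h -> $(check_role "$h")"; done
```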
Deploying Storm Clusters
Modify storm.yaml. The following is the nimbus configuration; this node also handles the UI and DRPC duties:
storm.zookeeper.servers:
  - "storm1"
  - "storm2"
  - "storm3"
storm.local.dir: "/usr/local/storm/workspace"
nimbus.host: "storm"
ui.port: 8787
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 100
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
drpc.servers:
  - "storm"
The following is the supervisor configuration:
storm.zookeeper.servers:
  - "storm1"
  - "storm2"
  - "storm3"
storm.local.dir: "/usr/local/storm/workspace"
nimbus.host: "storm"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 100
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
drpc.servers:
  - "storm"
The storm.messaging.netty.* settings switch inter-worker transport to Netty, which is more convenient than the older ZeroMQ transport.
Starting Storm
nimbus:
bin/storm nimbus >/dev/null 2>&1 &
bin/storm ui >/dev/null 2>&1 &
bin/storm drpc >/dev/null 2>&1 &
supervisor:
bin/storm supervisor >/dev/null 2>&1 &
bin/storm logviewer >/dev/null 2>&1 &
Storm's built-in CLI commands are handy here:
storm list (show running topologies)
storm kill xxxx (kill a topology)
Install daemontools on each server.
daemontools normally writes svscanboot into /etc/inittab so it starts at boot, but here svscanboot would not come up automatically and I could not find which part of the configuration was wrong, so I removed it from the boot sequence. Note that svscanboot expects /service as its root directory; when svscan is started from a relative path it does not run the scripts properly.
So instead, write a run script for each service and start supervise manually with nohup supervise /service/* &:
nimbus:
nohup supervise /service/storm &
nohup supervise /service/storm-ui &
nohup supervise /service/storm-drpc &
supervisor:
nohup supervise /service/storm &
nohup supervise /service/storm-log &
nohup supervise /service/zookeeper &
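The run scripts that supervise expects can be generated with a small helper. This is a sketch: the service directory names follow the layout above, but the exact command each run script execs is an assumption, and the demo writes into a temporary directory (on a real node SVC_DIR would be /service):

```shell
# make_run writes a supervise-style run script at $SVC_DIR/<name>/run.
SVC_DIR=$(mktemp -d)     # use /service on a real node
make_run() {
  mkdir -p "$SVC_DIR/$1"
  printf '#!/bin/sh\nexec %s\n' "$2" > "$SVC_DIR/$1/run"
  chmod +x "$SVC_DIR/$1/run"
}
make_run storm      "/usr/local/storm/bin/storm nimbus"
make_run storm-ui   "/usr/local/storm/bin/storm ui"
make_run storm-drpc "/usr/local/storm/bin/storm drpc"
cat "$SVC_DIR/storm/run"
```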
Because servers are limited, the 3 supervisor machines also form the Zookeeper cluster; naturally, Zookeeper must be fully started before starting Storm.
After a server reboot these services do not come back automatically, so the commands above must be run by hand. However, each /service/* directory still contains a supervise subdirectory holding the lock from the previous run, which blocks the new startup, so the supervise folders should be deleted before starting again.
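The cleanup step can be sketched like this, demonstrated in a temporary directory standing in for /service:

```shell
# Simulate leftover supervise state, then clear it before restarting.
SVC_DIR=$(mktemp -d)                     # stands in for /service
mkdir -p "$SVC_DIR/storm/supervise" "$SVC_DIR/storm-ui/supervise"
touch "$SVC_DIR/storm/supervise/lock"    # stale lock from the previous run
rm -rf "$SVC_DIR"/*/supervise            # remove before running supervise again
ls "$SVC_DIR/storm"
```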