Storm Cluster Deployment:
Operating environment: CentOS release 6.3 (Final)
Build the ZooKeeper cluster;
Install the libraries Storm depends on;
Download and unzip a Storm release;
Modify the storm.yaml configuration file;
Start each Storm background process.
IP                               | Host Name
Main control node: 192.168.1.147 | zoo1
Working node 1:    192.168.1.142 | zoo2
Working node 2:    192.168.1.143 | zoo3
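If the zoo1-zoo3 hostnames are not already resolvable on every machine, they can be mapped in /etc/hosts. This is a minimal sketch based on the table above, not a step from the original article:
# /etc/hosts (assumption: no DNS entries exist for these names)
192.168.1.147 zoo1
192.168.1.142 zoo2
192.168.1.143 zoo3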
Building the ZooKeeper cluster was covered in the previous article, so we go straight to installing the libraries Storm depends on.
2.2.1 Installing ZMQ 2.1.7
wget http://download.zeromq.org/zeromq-2.1.7.tar.gz
tar -xzf zeromq-2.1.7.tar.gz
cd zeromq-2.1.7
./configure
make
make install
Precautions:
If the installation reports an error that uuid cannot be found, install the uuid libraries with the following packages:
# yum install uuid* -y
# yum install e2fsprogs* -y
# yum install libuuid* -y
2.2.2 Installing JZMQ
Download JZMQ, then compile and install it:
git clone https://github.com/nathanmarz/jzmq.git
cd jzmq
./autogen.sh
./configure
make
make install
Precautions:
Installing the git client:
# yum install git git-gui -y
An issue that may occur during this step:
git clone https://github.com/nathanmarz/jzmq.git
Cloning into 'jzmq'...
error: SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed while accessing https://github.com/nathanmarz/jzmq.git/info/refs?service=git-upload-pack
fatal: HTTP request failed
Workaround:
env GIT_SSL_NO_VERIFY=true git clone https://github.com/nathanmarz/jzmq.git
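Alternatively, SSL verification can be disabled persistently with git's standard http.sslVerify setting; this is an equivalent workaround, not the one used in the original article:
# disable SSL verification for all git operations (use with care)
git config --global http.sslVerify false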
2.2.3 Installing Java 6
1. Download and install JDK 6;
2. Configure the JAVA_HOME environment variable;
3. Run the java and javac commands to verify that Java is installed correctly.
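A minimal sketch of steps 2 and 3, assuming the JDK was installed under /usr/java/jdk1.6.0_45 (a hypothetical path; substitute your actual install location), added to ~/.bashrc:
# hypothetical JDK install path
export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH
# verify the installation
java -version
javac -version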
2.2.4 Installing Python 2.6.6
1. Download Python 2.6.6:
wget http://www.python.org/ftp/python/2.6.6/Python-2.6.6.tar.bz2
2. Compile and install Python 2.6.6:
tar -jxvf Python-2.6.6.tar.bz2
cd Python-2.6.6
./configure
make
make install
3. Test Python 2.6.6:
$ python -V
Python 2.6.6
Installing unzip
yum install unzip -y
Download and unzip the storm release version
Next, you need to install the Storm release on the Nimbus and supervisor machines.
1. Download a Storm release; Storm 0.8.1 is recommended, available from the downloads page of the nathanmarz/storm project on GitHub:
https://github.com/nathanmarz/storm/downloads
2. Unzip it to the installation directory:
unzip storm-0.8.1.zip
Modifying the storm.yaml configuration file
The Storm release contains a file, conf/storm.yaml, under the extracted directory that is used to configure Storm. The default configuration can be viewed in defaults.yaml; options set in conf/storm.yaml override those defaults. The following options must be configured in conf/storm.yaml:
Master node configuration:
storm.zookeeper.servers:
    - "192.168.1.147"
    - "192.168.1.142"
    - "192.168.1.143"
storm.local.dir: "/home/admin/storm/workdir"
Configuration explanation:
storm.zookeeper.servers: the addresses of the ZooKeeper cluster used by the Storm cluster.
storm.local.dir: a local disk directory where the Nimbus and Supervisor processes store a small amount of state (jars, confs, and so on); the directory must be created in advance and given sufficient access permissions.
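For example, assuming the daemons run as the admin user (an assumption; the original does not say which account runs Storm), the directory can be prepared with:
# create the local state directory and give the Storm user ownership
mkdir -p /home/admin/storm/workdir
chown -R admin:admin /home/admin/storm/workdir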
Worker nodes 1 and 2 are configured as follows:
storm.zookeeper.servers:
    - "192.168.1.147"
    - "192.168.1.142"
    - "192.168.1.143"
storm.local.dir: "/home/admin/storm/workdir"
nimbus.host: "192.168.1.147"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
Configuration explanation:
storm.zookeeper.servers: the addresses of the ZooKeeper cluster used by the Storm cluster.
storm.local.dir: "/home/admin/storm/workdir": a local disk directory where the Nimbus and Supervisor processes store a small amount of state (jars, confs, and so on); the directory must be created in advance and given sufficient access permissions.
nimbus.host: the address of the Nimbus machine in the Storm cluster; each Supervisor worker node needs to know which machine is Nimbus in order to download topology jars, confs, and other files.
supervisor.slots.ports: for each Supervisor worker node, this option defines how many workers the node can run. Each worker occupies a separate port for receiving messages, and this option lists the ports available to workers. By default, 4 workers can run on each node, on ports 6700, 6701, 6702, and 6703.
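To change the number of workers per node, list a different set of ports. A minimal sketch (not from the original article) limiting each node to two workers:
supervisor.slots.ports:
    - 6700
    - 6701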
Starting each Storm background process:
The Storm daemons are started as follows:
Nimbus: on the Storm master node, run bin/storm nimbus >/dev/null 2>&1 & to start the Nimbus daemon in the background;
Supervisor: on each Storm worker node, run bin/storm supervisor >/dev/null 2>&1 & to start the Supervisor daemon in the background;
UI: on the Storm master node, run bin/storm ui >/dev/null 2>&1 & to start the UI daemon in the background. After it starts, open http://{nimbus host}:8080 (here, http://192.168.1.147:8080) to view worker resource usage in the cluster, the running status of topologies, and other information.
Precautions:
After the Storm daemons are started, each process writes its log files under the logs/ subdirectory of the Storm installation directory.
In testing, the Storm UI had to be deployed on the same machine as Storm Nimbus; otherwise the UI does not work, because the UI process checks whether a Nimbus link exists on the local machine.
For ease of use, bin/storm can be added to the system PATH, as sketched below.
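A minimal sketch of that PATH addition, assuming the release was extracted to /home/admin/storm-0.8.1 (a hypothetical path; adjust to your layout), added to ~/.bashrc:
# put the storm launcher on the PATH (hypothetical install path)
export PATH=/home/admin/storm-0.8.1/bin:$PATH
# optionally follow the Nimbus log to confirm the daemon started
tail -f /home/admin/storm-0.8.1/logs/nimbus.log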
Now that the Storm cluster has been deployed and configured, you can submit topologies to the cluster to run.
Submit a task to a cluster
1) Start a Storm topology:
storm jar allmycode.jar org.me.MyTopology arg1 arg2 arg3
Here allmycode.jar is the jar package containing the topology implementation code, the main method of org.me.MyTopology is the entry point of the topology, and arg1, arg2, and arg3 are arguments passed to org.me.MyTopology when it executes.
2) Stop a Storm topology:
storm kill {toponame}
where {toponame} is the topology name specified when the topology was submitted to the Storm cluster.
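For example, assuming a topology was submitted under the name mytopo (a hypothetical name), the -w flag sets how many seconds Storm waits between deactivating the topology and shutting down its workers:
# kill the topology named mytopo after a 10-second grace period
storm kill mytopo -w 10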
This article is from the "david0512" blog; please keep this source: http://gjr0512.blog.51cto.com/6518687/1590746