First, the environment
- One CentOS 6.5 machine
- MongoDB 3.0
- kafka_2.11-0.8.2.1
- Storm-0.9.5
- Zookeeper-3.4.6
- Java 1.7 (later switched to Java 1.8, because the jar packaged on the Mac was compiled with 1.8 and would not run under 1.7; a quick version check is sketched after this list)
- Other environment details are omitted for now
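Because of the 1.7/1.8 mismatch above, it is worth confirming that the build machine and the server use the same JDK. A trivial sketch, nothing here is specific to this setup:

java -version
javac -version
# both should report 1.8 if the topology jar is compiled with JDK 1.8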
Second, starting the services
- Start Zookeeper
Verify that the configuration is correct; the configuration details themselves are easy to find elsewhere, so they are not repeated here.
[xxx@xxx zookeeper-3.4.6]# pwd
/data0/xxx/zookeeper-3.4.6
[xxx@xxx zookeeper-3.4.6]# bin/zkServer.sh start
ZooKeeper is started here from its root directory. The main point is that running the start script generates its log file in the current working directory: whichever directory you start from is where the log file ends up. Starting from the root directory is simply a way to keep the log file there.
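To confirm ZooKeeper actually came up, the bundled status command can be used; the second probe assumes the default client port 2181:

[xxx@xxx zookeeper-3.4.6]# bin/zkServer.sh status
# should report Mode: standalone (or leader/follower in an ensemble)
[xxx@xxx zookeeper-3.4.6]# echo ruok | nc localhost 2181
# a healthy server answers "imok"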
- Start Kafka
/data0/xxx/kafka_2.11-0.8.2.1/bin/kafka-server-start.sh /data0/xxx/kafka_2.11-0.8.2.1/config/server-0.properties > /data0/xxx/kafka_2.11-0.8.2.1/logs/server-0.log 2>&1 &
/data0/xxx/kafka_2.11-0.8.2.1/bin/kafka-server-start.sh /data0/xxx/kafka_2.11-0.8.2.1/config/server-1.properties > /data0/xxx/kafka_2.11-0.8.2.1/logs/server-1.log 2>&1 &
/data0/xxx/kafka_2.11-0.8.2.1/bin/kafka-server-start.sh /data0/xxx/kafka_2.11-0.8.2.1/config/server-2.properties > /data0/xxx/kafka_2.11-0.8.2.1/logs/server-2.log 2>&1 &
Here I run three Kafka broker processes on a single machine as a pseudo-cluster.
Example configuration for broker 0:
broker.id=0
port=9092
host.name=172.16.0.100
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data0/xxx/kafka_2.11-0.8.2.1/log_0
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
server-0.properties
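server-1.properties and server-2.properties are identical except for the per-broker values. The exact ports and log directories below are assumptions for illustration, not taken from the original setup:

# server-1.properties (only the differing lines)
broker.id=1
port=9093
log.dirs=/data0/xxx/kafka_2.11-0.8.2.1/log_1

# server-2.properties (only the differing lines)
broker.id=2
port=9094
log.dirs=/data0/xxx/kafka_2.11-0.8.2.1/log_2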
- Start Storm
/data0/xxx/storm-0.9.5/bin/storm nimbus > /data0/xxx/storm-0.9.5/log/nimbus.log 2>&1 &
/data0/xxx/storm-0.9.5/bin/storm supervisor > /data0/xxx/storm-0.9.5/log/supervisor.log 2>&1 &
/data0/xxx/storm-0.9.5/bin/storm ui > /data0/xxx/storm-0.9.5/log/ui.log 2>&1 &
/data0/xxx/storm-0.9.5/bin/storm logviewer > /data0/xxx/storm-0.9.5/log/logviewer.log 2>&1 &
This starts Nimbus, the Supervisor, the UI, and the Logviewer, respectively.
Example Storm configuration file:
storm.zookeeper.servers:
  - "172.16.0.100"
nimbus.host: "172.16.0.100"
storm.local.dir: "/data0/xxx/storm-0.9.5/workdir"
storm.messaging.netty.max_retries: 30
storm.messaging.netty.min_wait_ms: 100
storm.messaging.netty.max_wait_ms: 1500
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
supervisor.worker.start.timeout.secs: 60
storm.yaml
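A quick way to confirm the daemons are up is to list the Storm JVM processes and hit the UI. The UI port 8080 is Storm's default and is an assumption here, since it is not overridden in the storm.yaml above:

jps
# expect to see nimbus, supervisor, core (the UI), and logviewer among the Java processes
curl -s http://172.16.0.100:8080 | head
# the Storm UI should return an HTML page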
Third, initialization settings
Create the topics in Kafka, configure the data source to push data into Kafka, and so on. There are also MongoDB initialization settings, such as creating the database and building indexes; a sketch of these commands follows.
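The commands below use the standard Kafka 0.8.2 topic tool and the mongo shell; the topic name, partition and replication counts, and the MongoDB database/collection/field names are illustrative assumptions, not values from the original setup:

# create a topic on the 3-broker pseudo-cluster (name and counts are examples)
/data0/xxx/kafka_2.11-0.8.2.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 3 --partitions 3 --topic logs

# list topics to confirm
/data0/xxx/kafka_2.11-0.8.2.1/bin/kafka-topics.sh --list --zookeeper localhost:2181

# MongoDB: build an index on the collection the topology will write to (names are examples)
mongo 172.16.0.100:27017/logdb --eval 'db.events.createIndex({ts: 1})'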
Fourth, the Storm program
A Java Storm program that integrates Kafka and MongoDB: sample code and deployment.
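A minimal sketch of how such a topology would be packaged and submitted to this cluster; the jar name, main class, and topology name are placeholders, not the ones used in the original project:

# build the topology jar on the development machine
# (compile with the same JDK as the cluster; see the Java note in the environment section)
mvn clean package

# submit it to the cluster (jar, class, and topology names are hypothetical)
/data0/xxx/storm-0.9.5/bin/storm jar kafka-mongo-topology.jar com.example.KafkaMongoTopology kafka-mongo-topology

# kill it later if needed
/data0/xxx/storm-0.9.5/bin/storm kill kafka-mongo-topology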