Install a Kafka cluster on CentOS
Installation preparation:
Version
Kafka version: kafka_2.11-0.9.0.0
ZooKeeper version: zookeeper-3.4.7
ZooKeeper cluster: bjrenrui0001 bjrenrui0002 bjrenrui0003
For how to build a ZooKeeper cluster, see Installing a ZooKeeper Cluster on CentOS.
Physical Environment
The installation uses three hosts:
192.168.100.200 bjrenrui0001 (runs 3 brokers)
192.168.100.201 bjrenrui0002 (runs 2 brokers)
192.168.100.202 bjrenrui0003 (runs 2 brokers)
Building this cluster involves three main steps: single-node single-broker, single-node multi-broker, and multi-node multi-broker.
Single-node single Broker
This section uses bjrenrui0001 as an example to create a Broker.
Download Kafka:
Download path: http://kafka.apache.org/downloads.html
cd /mq/
wget http://mirrors.hust.edu.cn/apache/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
copyfiles.sh kafka_2.11-0.9.0.0.tgz bjyfnbserver/mq/
tar zxvf kafka_2.11-0.9.0.0.tgz -C /mq/
ln -s /mq/kafka_2.11-0.9.0.0 /mq/kafka
mkdir /mq/kafka/logs
Configuration
Modify config/server.properties:
vi /mq/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
Start the Kafka service:
cd /mq/kafka; sh bin/kafka-server-start.sh -daemon config/server.properties
or
sh /mq/kafka/bin/kafka-server-start.sh -daemon /mq/kafka/config/server.properties
netstat -ntlp | grep -E '2181|9092'
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::9092       :::*      LISTEN      26903/java
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
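You can also confirm that the broker has registered itself in ZooKeeper. A quick check using the zookeeper-shell.sh tool shipped with Kafka (a sketch; the exact output may differ, but the returned list should contain broker id 1):
sh /mq/kafka/bin/zookeeper-shell.sh bjrenrui0001:2181 ls /brokers/ids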
Create a Topic:
sh /mq/kafka/bin/kafka-topics.sh --create --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --replication-factor 1 --partitions 1 --topic test
View topics:
sh /mq/kafka/bin/kafka-topics.sh --list --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
Message sent by producer:
$ sh /mq/kafka/bin/kafka-console-producer.sh --broker-list bjrenrui0001:9092 --topic test
First
Message
Consumer receives messages:
$ sh bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test --from-beginning
First
Message
If you only want the latest data, just remove the --from-beginning parameter.
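For example, to read only messages produced after the consumer starts:
$ sh bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test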
Multiple brokers on a single node
Copy the Kafka directory from the previous section to kafka_2 and kafka_3:
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_2
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_3
ln -s /mq/kafka_2.11-0.9.0.0_2 /mq/kafka_2
ln -s /mq/kafka_2.11-0.9.0.0_3 /mq/kafka_3
Modify the broker.id, listeners/port, host.name, and log.dirs attributes in kafka_2/config/server.properties and kafka_3/config/server.properties so that each broker is unique.
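If you prefer to script these edits instead of editing by hand, here is a minimal sketch using sed (assuming GNU sed and that the copied files already contain the values set for broker 1 above; adjust the values for the kafka_3 copy accordingly). The full files should end up looking like the listings below:
sed -i 's|^broker.id=.*|broker.id=2|' /mq/kafka_2/config/server.properties
sed -i 's|^listeners=.*|listeners=PLAINTEXT://:9093|' /mq/kafka_2/config/server.properties
sed -i 's|^port=.*|port=9093|' /mq/kafka_2/config/server.properties
sed -i 's|^log.dirs=.*|log.dirs=/mq/kafka_2/logs/kafka-logs|' /mq/kafka_2/config/server.properties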
vi /mq/kafka_2/config/server.properties
broker.id=2
listeners=PLAINTEXT://:9093
port=9093
host.name=bjrenrui0001
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_2/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
vi /mq/kafka_3/config/server.properties
broker.id=3
listeners=PLAINTEXT://:9094
port=9094
host.name=bjrenrui0001
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_3/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
Start
Start the other two brokers:
sh /mq/kafka_2/bin/kafka-server-start.sh -daemon /mq/kafka_2/config/server.properties
sh /mq/kafka_3/bin/kafka-server-start.sh -daemon /mq/kafka_3/config/server.properties
Check the ports:
[dreamjobs@bjrenrui0001 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
tcp6       0      0 :::9092       :::*      LISTEN      26903/java
tcp6       0      0 :::9093       :::*      LISTEN      28672/java
tcp6       0      0 :::9094       :::*      LISTEN      28734/java
Create a topic with replication factor 3:
sh /mq/kafka/bin/kafka-topics.sh --create --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
View the Topic status:
$ sh /mq/kafka/bin/kafka-topics.sh --describe --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic my-replicated-topic
Topic:my-replicated-topic    PartitionCount:1    ReplicationFactor:3    Configs:
    Topic: my-replicated-topic    Partition: 0    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
From the output above, we can see that the topic has one partition, the replication factor is 3, and broker 3 is the leader.
Explanation:
"Leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
"Replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
"Isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
Let's take a look at the previously created test topic. We can see that there is no replication.
$ sh /mq/kafka/bin/kafka-topics.sh --describe --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
Multiple brokers on multiple nodes
On bjrenrui0002 and bjrenrui0003, extract the downloaded package into the kafka_4 and kafka_5 directories (on bjrenrui0002) and the kafka_6 and kafka_7 directories (on bjrenrui0003), then copy the server.properties configuration file from bjrenrui0001 into each of them; a sketch of these steps follows.
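Shown here for bjrenrui0002, assuming the tarball has already been copied to /mq/ on that host (repeat on bjrenrui0003 with kafka_6 and kafka_7):
tar zxvf /mq/kafka_2.11-0.9.0.0.tgz -C /mq/
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_4
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_5
ln -s /mq/kafka_2.11-0.9.0.0_4 /mq/kafka_4
ln -s /mq/kafka_2.11-0.9.0.0_5 /mq/kafka_5
mkdir /mq/kafka_4/logs /mq/kafka_5/logs
scp bjrenrui0001:/mq/kafka/config/server.properties /mq/kafka_4/config/
scp bjrenrui0001:/mq/kafka/config/server.properties /mq/kafka_5/config/
Then edit each copy as shown below: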
vi /mq/kafka_4/config/server.properties
broker.id=4
listeners=PLAINTEXT://:9095
port=9095
host.name=bjrenrui0002
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_4/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
vi /mq/kafka_5/config/server.properties
broker.id=5
listeners=PLAINTEXT://:9096
port=9096
host.name=bjrenrui0002
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_5/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
vi /mq/kafka_6/config/server.properties
broker.id=6
listeners=PLAINTEXT://:9097
port=9097
host.name=bjrenrui0003
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_6/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
vi /mq/kafka_7/config/server.properties
broker.id=7
listeners=PLAINTEXT://:9098
port=9098
host.name=bjrenrui0003
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_7/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
Start the service:
sh /mq/kafka/bin/kafka-server-start.sh -daemon /mq/kafka/config/server.properties
sh /mq/kafka_2/bin/kafka-server-start.sh -daemon /mq/kafka_2/config/server.properties
sh /mq/kafka_3/bin/kafka-server-start.sh -daemon /mq/kafka_3/config/server.properties
sh /mq/kafka_4/bin/kafka-server-start.sh -daemon /mq/kafka_4/config/server.properties
sh /mq/kafka_5/bin/kafka-server-start.sh -daemon /mq/kafka_5/config/server.properties
sh /mq/kafka_6/bin/kafka-server-start.sh -daemon /mq/kafka_6/config/server.properties
sh /mq/kafka_7/bin/kafka-server-start.sh -daemon /mq/kafka_7/config/server.properties
Check:
$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
Stop the service:
sh /mq/kafka/bin/kafka-server-stop.sh
If you use this script to stop the broker service, every broker process on the node will be stopped, so exercise caution when multiple brokers run on a single host!
ps ax | grep -i 'kafka\.kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
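To stop a single broker instead, the same pattern can be narrowed to one instance by matching its configuration file path, for example for the /mq/kafka_2 broker:
ps ax | grep java | grep 'kafka_2/config/server.properties' | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM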
So far, seven brokers on three physical machines have been started:
[dreamjobs@bjrenrui0001 bin]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
tcp6       0      0 :::9092       :::*      LISTEN      33212/java
tcp6       0      0 :::9093       :::*      LISTEN      32997/java
tcp6       0      0 :::9094       :::*      LISTEN      33064/java
[dreamjobs@bjrenrui0002 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      6899/java
tcp6       0      0 :::9095       :::*      LISTEN      33251/java
tcp6       0      0 :::9096       :::*      LISTEN      33279/java
[dreamjobs@bjrenrui0003 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:2181      0.0.0.0:*      LISTEN      14562/java
tcp        0      0 0.0.0.0:9097      0.0.0.0:*      LISTEN      23246/java
tcp        0      0 0.0.0.0:9098      0.0.0.0:*      LISTEN      23270/java
Message sent by producer:
$ sh /mq/kafka/bin/kafka-console-producer.sh --broker-list bjrenrui0001:9092 --topic my-replicated-topic
Consumer receives messages:
$ sh /mq/kafka_4/bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic my-replicated-topic --from-beginning
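To confirm that brokers on different hosts really belong to the same cluster, you can also point the console producer at a broker on another node; any broker can serve as the initial contact point (using bjrenrui0002:9095 as an example):
$ sh /mq/kafka/bin/kafka-console-producer.sh --broker-list bjrenrui0002:9095 --topic my-replicated-topic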