Installing the Kafka cluster on CentOS


Installation Preparation:
Version
Kafka version: kafka_2.11-0.9.0.0
Zookeeper version: zookeeper-3.4.7
Zookeeper cluster: bjrenrui0001 bjrenrui0002 bjrenrui0003
For building the ZooKeeper cluster, see: Installing a ZooKeeper Cluster on CentOS

Physical environment
Install on three physical machines:
192.168.100.200 bjrenrui0001 (runs 3 brokers)
192.168.100.201 bjrenrui0002 (runs 2 brokers)
192.168.100.202 bjrenrui0003 (runs 2 brokers)
Building the cluster is divided into three steps: single-node single broker, single-node multi-broker, and multi-node multi-broker.

Single node single broker
This section uses creating a broker on bjrenrui0001 as the example.
Download Kafka:
Download path: http://kafka.apache.org/downloads.html
cd /mq/
wget http://mirrors.hust.edu.cn/apache/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
copyfiles.sh kafka_2.11-0.9.0.0.tgz bjyfnbserver/mq/
tar zxvf kafka_2.11-0.9.0.0.tgz -C /mq/
ln -s /mq/kafka_2.11-0.9.0.0 /mq/kafka
mkdir /mq/kafka/logs

Configuration
Modify config/server.properties:
vi /mq/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

To start the Kafka service:
cd /mq/kafka; sh bin/kafka-server-start.sh -daemon config/server.properties
or
sh /mq/kafka/bin/kafka-server-start.sh -daemon /mq/kafka/config/server.properties

netstat -ntlp | grep -E '2181|9092'
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::9092       :::*      LISTEN      26903/java
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
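Optionally, you can confirm that the broker has registered itself in ZooKeeper by querying the /brokers/ids node. A minimal check, assuming the zookeeper-shell.sh script bundled with this Kafka release accepts commands piped on stdin:
echo "ls /brokers/ids" | sh /mq/kafka/bin/zookeeper-shell.sh bjrenrui0001:2181
# the output should end with the configured broker id, e.g. [1]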

Create topic:
sh /mq/kafka/bin/kafka-topics.sh --create --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --replication-factor 1 --partitions 1 --topic test

View topic:
sh /mq/kafka/bin/kafka-topics.sh --list --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181

Producer sends messages:
$ sh /mq/kafka/bin/kafka-console-producer.sh --broker-list bjrenrui0001:9092 --topic test
First
Message

Consumer receives messages:
$ sh bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test --from-beginning
First
Message
If you only want new messages, run the command without the --from-beginning parameter.
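For example, the same consumer command without the flag only prints messages produced after it starts:
$ sh bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test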

Single-node multiple broker
Make two copies of the folder from the previous section, as kafka_2 and kafka_3:
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_2
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_3
ln -s /mq/kafka_2.11-0.9.0.0_2 /mq/kafka_2
ln -s /mq/kafka_2.11-0.9.0.0_3 /mq/kafka_3

Modify broker.id and the port/listeners properties in kafka_2/config/server.properties and kafka_3/config/server.properties so that each broker is unique (a sed shortcut is sketched after the two files):
vi /mq/kafka_2/config/server.properties
broker.id=2
listeners=PLAINTEXT://:9093
port=9093
host.name=bjrenrui0001
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_2/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

vi /mq/kafka_3/config/server.properties
broker.id=3
listeners=PLAINTEXT://:9094
port=9094
host.name=bjrenrui0001
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_3/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000
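As an alternative to editing each file by hand, a sed sketch along these lines can apply the per-broker overrides (it assumes the copies still carry the broker-1 values from the first section; host.name still has to be appended manually if you want it):
sed -i -e 's/^broker.id=.*/broker.id=2/' -e 's/9092/9093/g' -e 's|/mq/kafka/logs|/mq/kafka_2/logs|' /mq/kafka_2/config/server.properties
sed -i -e 's/^broker.id=.*/broker.id=3/' -e 's/9092/9094/g' -e 's|/mq/kafka/logs|/mq/kafka_3/logs|' /mq/kafka_3/config/server.properties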

Start
Start the other two brokers:
sh /mq/kafka_2/bin/kafka-server-start.sh -daemon /mq/kafka_2/config/server.properties
sh /mq/kafka_3/bin/kafka-server-start.sh -daemon /mq/kafka_3/config/server.properties

Check Port:
[dreamjobs@bjrenrui0001 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
tcp6       0      0 :::9092       :::*      LISTEN      26903/java
tcp6       0      0 :::9093       :::*      LISTEN      28672/java
tcp6       0      0 :::9094       :::*      LISTEN      28734/java

Create a topic with a replication factor of 3:
sh /mq/kafka/bin/kafka-topics.sh --create --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

View the status of the topic:
$ sh /mq/kafka/bin/kafka-topics.sh --describe --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1   ReplicationFactor:3   Configs:
    Topic: my-replicated-topic   Partition: 0   Leader: 3   Replicas: 3,1,2   Isr: 3,1,2

As can be seen above, the topic has one partition and a replication factor of 3, and broker 3 is the leader.
The explanations are as follows:
"Leader" is the node responsible-reads and writes for the given partition. Each node would be is the leader for a randomly selected portion of the partitions.
"Replicas" is the list of nodes, replicate the log for this partition regardless of whether they was the leader or Eve N if they is currently alive.
"ISR" is the set of "In-sync" replicas. This is the subset of the replicas list, which is currently alive and caught-up to the leader.

Looking at the test topic created earlier, you can see that it has no replication:
$ sh /mq/kafka/bin/kafka-topics.sh --describe --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic test
Topic:test   PartitionCount:1   ReplicationFactor:1   Configs:
    Topic: test   Partition: 0   Leader: 1   Replicas: 1   Isr: 1

Multi-node multi-broker
Extract the downloaded archive into kafka_4 and kafka_5 on bjrenrui0002 and into kafka_6 and kafka_7 on bjrenrui0003, then copy the server.properties file from bjrenrui0001 into each of these four folders and adjust it, as sketched below.
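A minimal sketch of that copy step, assuming passwordless scp between the hosts and the same /mq layout (the original used the site-specific copyfiles.sh helper instead); the commands are shown for bjrenrui0002, and bjrenrui0003 is handled the same way for kafka_6 and kafka_7:
scp /mq/kafka_2.11-0.9.0.0.tgz bjrenrui0002:/mq/
# then, on bjrenrui0002 (the _5 suffix for the second copy is just an illustrative choice):
tar zxvf /mq/kafka_2.11-0.9.0.0.tgz -C /mq/
cp -r /mq/kafka_2.11-0.9.0.0 /mq/kafka_2.11-0.9.0.0_5
ln -s /mq/kafka_2.11-0.9.0.0 /mq/kafka_4
ln -s /mq/kafka_2.11-0.9.0.0_5 /mq/kafka_5
scp bjrenrui0001:/mq/kafka/config/server.properties /mq/kafka_4/config/
scp bjrenrui0001:/mq/kafka/config/server.properties /mq/kafka_5/config/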
vi /mq/kafka_4/config/server.properties
broker.id=4
listeners=PLAINTEXT://:9095
port=9095
host.name=bjrenrui0002
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_4/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

vi /mq/kafka_5/config/server.properties
broker.id=5
listeners=PLAINTEXT://:9096
port=9096
host.name=bjrenrui0002
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_5/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

vi /mq/kafka_6/config/server.properties
broker.id=6
listeners=PLAINTEXT://:9097
port=9097
host.name=bjrenrui0003
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_6/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

vi /mq/kafka_7/config/server.properties
broker.id=7
listeners=PLAINTEXT://:9098
port=9098
host.name=bjrenrui0003
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mq/kafka_7/logs/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181
zookeeper.connection.timeout.ms=6000

Start the service (the first three brokers run on bjrenrui0001, kafka_4 and kafka_5 on bjrenrui0002, kafka_6 and kafka_7 on bjrenrui0003):
sh /mq/kafka/bin/kafka-server-start.sh -daemon /mq/kafka/config/server.properties
sh /mq/kafka_2/bin/kafka-server-start.sh -daemon /mq/kafka_2/config/server.properties
sh /mq/kafka_3/bin/kafka-server-start.sh -daemon /mq/kafka_3/config/server.properties

sh /mq/kafka_4/bin/kafka-server-start.sh -daemon /mq/kafka_4/config/server.properties
sh /mq/kafka_5/bin/kafka-server-start.sh -daemon /mq/kafka_5/config/server.properties

sh /mq/kafka_6/bin/kafka-server-start.sh -daemon /mq/kafka_6/config/server.properties
sh /mq/kafka_7/bin/kafka-server-start.sh -daemon /mq/kafka_7/config/server.properties

Check:
$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3

Stop service:
sh /mq/kafka/bin/kafka-server-stop.sh
Note that this script (which is essentially the kill command below) stops every broker running on the node, not just one, so use it with caution:
ps ax | grep -i 'kafka.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
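If you only need to stop one of the brokers on a node, a hedged alternative is to match its specific config file rather than the kafka.Kafka class (a sketch, using the kafka_2 broker as an example):
ps ax | grep -i 'kafka_2/config/server.properties' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM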

So far, 7 brokers on three physical machines have been started:
[dreamjobs@bjrenrui0001 bin]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      24532/java
tcp6       0      0 :::9092       :::*      LISTEN      33212/java
tcp6       0      0 :::9093       :::*      LISTEN      32997/java
tcp6       0      0 :::9094       :::*      LISTEN      33064/java

[dreamjobs@bjrenrui0002 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::2181       :::*      LISTEN      6899/java
tcp6       0      0 :::9095       :::*      LISTEN      33251/java
tcp6       0      0 :::9096       :::*      LISTEN      33279/java

[dreamjobs@bjrenrui0003 config]$ netstat -ntlp | grep -E '2181|909[2-9]' | sort -k3
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:2181    0.0.0.0:*      LISTEN      14562/java
tcp        0      0 0.0.0.0:9097    0.0.0.0:*      LISTEN      23246/java
tcp        0      0 0.0.0.0:9098    0.0.0.0:*      LISTEN      23270/java

Producer sends messages:
$ sh /mq/kafka/bin/kafka-console-producer.sh --broker-list bjrenrui0001:9092 --topic my-replicated-topic

Consumer receives messages:
$ sh /mq/kafka_4/bin/kafka-console-consumer.sh --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic my-replicated-topic --from-beginning
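To see the replication at work, you can optionally kill the leader broker (broker 3 in the describe output above) and re-describe the topic; leadership should move to one of the other replicas. A sketch, reusing the commands from this tutorial:
ps ax | grep -i 'kafka_3/config/server.properties' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
# the topic should now report a different leader, with broker 3 dropping out of the ISR
sh /mq/kafka/bin/kafka-topics.sh --describe --zookeeper bjrenrui0001:2181,bjrenrui0002:2181,bjrenrui0003:2181 --topic my-replicated-topic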
