Kafka Learning (1): Configuration and Simple Command Usage
I. Introduction to related concepts in Kafka
Kafka is a distributed message middleware implemented in Scala. The related concepts are as follows:
- The content transmitted in Kafka is called a message. Messages are grouped by topic, and the relationship between a topic and its messages is one-to-many.
- The process that publishes messages is called a producer; it generates <topic, message> pairs and sends them into the Kafka cluster.
- The process that subscribes to a topic and consumes the corresponding messages is called a consumer.
- Nodes in a Kafka cluster are called brokers.
For an illustrative diagram, see: http://kafka.apache.org/documentation.html#introduction
II. Configuration of key parameters in Kafka
1. Broker (cluster node) Configuration
- broker.id: a unique integer id for the broker.
- log.dirs: the directory where Kafka stores its data; the default path is /tmp/kafka-logs.
- port: the port that consumers connect to.
- zookeeper.connect: the ZooKeeper connection string, in the format hostname1:port1,hostname2:port2,hostname3:port3.
- num.partitions: a topic can be split into multiple partitions. Messages within a single partition are ordered, but ordering across partitions is not guaranteed.
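For illustration, these broker settings appear in config/server.properties roughly as follows (the values shown are example defaults, not prescribed by the text above):

broker.id=0
port=9092
log.dirs=/tmp/kafka-logs
num.partitions=1
zookeeper.connect=localhost:2181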
2. Consumer Configuration
- group.id: a string identifying the consumer group that this consumer process belongs to.
- zookeeper.connect: hostname1:port1,hostname2:port2/chroot/path (the optional /chroot/path puts all Kafka data under a unified path). ZooKeeper stores the basic information about Kafka's consumers and brokers (including topics and partitions).
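To show how these consumer settings are used, here is a minimal sketch against the old (0.8-era) Scala high-level consumer API; the group name "test-group" and the topic "test" are example values chosen for illustration:

import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

object ConsumerExample {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("zookeeper.connect", "localhost:2181") // ZooKeeper holds consumer/broker metadata
    props.put("group.id", "test-group")              // example consumer group name
    props.put("auto.offset.reset", "smallest")       // start from the earliest available offset

    val connector = Consumer.create(new ConsumerConfig(props))
    // ask for one stream (thread) for the "test" topic
    val streams = connector.createMessageStreams(Map("test" -> 1))
    // this loop blocks and prints each message as it arrives
    for (msg <- streams("test").head) {
      println(new String(msg.message()))
    }
  }
}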
3. Producer Configuration
- metadata.broker.list: host1:port1,host2:port2
- request.required.acks: 0 means the request returns as soon as the message is sent, without waiting for acknowledgement (data may be lost if the server crashes); 1 means wait until the leader acknowledges the write as successful; -1 means wait for all in-sync replicas, so no messages are lost.
- producer.type: determines whether messages are sent synchronously (sync) or asynchronously (async); the default is sync.
- serializer.class: the serialization class for messages; the default kafka.serializer.DefaultEncoder handles byte[].
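Similarly, a minimal producer sketch using the old (0.8-era) Scala producer API and the settings above; the topic name and broker address are example values:

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object ProducerExample {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092")             // brokers used to discover cluster metadata
    props.put("request.required.acks", "1")                         // wait for the leader's acknowledgement
    props.put("producer.type", "sync")                              // send synchronously
    props.put("serializer.class", "kafka.serializer.StringEncoder") // encode String message values

    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new KeyedMessage[String, String]("test", "This is a message"))
    producer.close()
  }
}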
III. Simple Kafka commands
Step 1: Start the server
Start ZooKeeper first:
> bin/zookeeper-server-start.sh config/zookeeper.properties
(When starting on a remote machine, append & so the process runs in the background and keeps running after you disconnect from the remote session; see the example after the two start commands.)
Then start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
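For example, one common way to keep both servers running after logging out of a remote session (just an illustration; output then goes to nohup.out):
> nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
> nohup bin/kafka-server-start.sh config/server.properties &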
Step 2: Create a topic
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
View topics
> bin/kafka-topics.sh --list --zookeeper localhost:2181
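You can also inspect a topic's partition and replica assignment with the --describe option of the same tool:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test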
Step 3: Send some messages
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
Step 4: Start a client (consumer)
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
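If everything is running correctly, the consumer should print the messages typed into the producer console earlier:
This is a message
This is another message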