Kafka topic offset requirements
Brief: during development we often need to modify a consumer instance's offset for a particular Kafka topic. How do we modify it, and why is it feasible? It is actually quite easy; sometimes we just need to look at the problem from another angle. If I implement…
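As one concrete way to act on the brief above: recent Kafka releases (0.11 and later) ship a `kafka-consumer-groups.sh` tool that can rewind a group's offsets directly. This is a hedged sketch, not necessarily the method the article itself goes on to describe; the group, topic, and broker address are placeholders.

```shell
# Sketch only: needs a running Kafka 0.11+ cluster; "my-group", "my-topic"
# and localhost:9092 are placeholder names, not taken from the article.

# Preview what the new offsets would be (no changes applied):
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic --reset-offsets --to-earliest --dry-run

# Apply the reset; the group must have no active consumers while this runs:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic --reset-offsets --to-earliest --execute
```

The same tool accepts other targets such as `--to-offset`, `--shift-by`, and `--to-datetime`, which is what makes offset modification "easy" from the operator's side.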
Flume and Kafka example (Kafka as a Flume sink: output to a Kafka topic). Preparation:
$ sudo mkdir -p /flume/web_spooldir
$ sudo chmod a+w -R /flume
Edit a Flume configuration file:
$ cat /home/tester/flafka/spooldir_kafka.conf
# Name the components in this agent
agent1.sources = weblogsrc
agent1.sinks = kafka-sink
agent1.channels = memchannel
# Configure the source
agent1.s…
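The configuration above is cut off mid-line. A plausible completion of such a spooldir-to-Kafka agent is sketched below; everything beyond the lines quoted above is an assumption based on Flume's documented source and sink types (the KafkaSink property names follow Flume 1.7+; older releases used `brokerList`/`topic` instead).

```properties
# Hypothetical completion; component names follow the snippet above.
agent1.sources = weblogsrc
agent1.sinks = kafka-sink
agent1.channels = memchannel

# Configure the source: watch the spool directory prepared earlier
agent1.sources.weblogsrc.type = spooldir
agent1.sources.weblogsrc.spoolDir = /flume/web_spooldir
agent1.sources.weblogsrc.channels = memchannel

# Configure the sink: write events to a Kafka topic
agent1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.kafka.bootstrap.servers = localhost:9092
agent1.sinks.kafka-sink.kafka.topic = weblogs
agent1.sinks.kafka-sink.channel = memchannel

# Configure the channel: in-memory buffering between source and sink
agent1.channels.memchannel.type = memory
agent1.channels.memchannel.capacity = 10000
```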
Kafka producer writing data to Kafka throws an exception: Got error produce response with correlation id ... on topic-partition ... Error: NETWORK_EXCEPTION. 1. Problem description: 2017-09-13 15:11:30.656 o.a.k.c.p.i.Sender [WARN] Got error produce response with correlation id 25 on topic-partition test2-rtb-camp-pc-hz-5, retr…
partition: 1  leader: 0  replicas: 0  isr: 0
I only have one broker, whose id is 0.
leader: 0 is the id of the broker that acts as leader for this partition.
replicas: 0 is the list of broker ids that hold replicas of this partition.
isr: 0 is the list of broker ids that are currently in sync and available.
Query topic details
[root@shb01 bin]# kafka-topics.sh --describe --zookeeper shb01:2181
When a topic is modified, the number of partitions can only be increased, never reduced.
[root@shb01 bin]#
replicas: 0  isr: 0
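The increase-only rule can be illustrated with the alter command in the same environment; a sketch, where the topic name "test" is a placeholder:

```shell
# Sketch only: needs the shb01 ZooKeeper from the snippet; "test" is a
# placeholder topic name.
kafka-topics.sh --alter --zookeeper shb01:2181 --topic test --partitions 3
# Re-running with --partitions 2 would be rejected: Kafka only allows the
# partition count of a topic to grow, never to shrink.
```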
PartitionCount: the number of partitions the topic has.
ReplicationFactor: the topic's replication factor, that is, the number of replicas.
Partition: the partition number, incrementing from 0.
Leader: the broker.id of the broker currently serving as leader for this partition.
Replicas: the broker.ids on which the partition's replicas sit; the first entry in the list is the preferred leader.
ISR: the list of broker.ids currently in sync and available in the Kafka cluster.
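For reading the fields above side by side, a hypothetical single-broker `--describe` output (topic name and values are illustrative, not taken from the article):

```
Topic:test   PartitionCount:1   ReplicationFactor:1   Configs:
    Topic: test   Partition: 0   Leader: 0   Replicas: 0   Isr: 0
```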
Modify Topic
Node 1 runs as the leader, and now we kill it:
ps aux | grep server-1.properties
7564 ttys002  0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java ...
kill -9 7564
The other node is elected leader, and node 1 no longer appears in the in-sync replica list:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic: my-r…
The content of this section:
Create a Kafka topic
View the list of all topics
View the details of a specified topic
Produce data to a topic from the console
Consume data from the console
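The five operations in this list map onto the stock Kafka scripts roughly as follows. This is a sketch for older, ZooKeeper-based releases; hosts and the topic name "test" are placeholders, and Kafka 2.2+ replaces `--zookeeper` with `--bootstrap-server` on `kafka-topics.sh`.

```shell
# Sketch only: needs a running ZooKeeper/broker; "test" is a placeholder.

# Create a topic
kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test

# List all topics
kafka-topics.sh --list --zookeeper localhost:2181

# View details of one topic
kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

# Produce data from the console (type messages, one per line)
kafka-console-producer.sh --broker-list localhost:9092 --topic test

# Consume data from the console, starting from the beginning of the log
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test --from-beginning
```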
message in O(1) time, independent of file size, so deleting files here has no bearing on Kafka performance; the choice of deletion policy depends only on disk capacity and the specific requirements. In addition, Kafka retains some metadata for each consumer group: the position (offset) of the messages currently consumed…
Kafka officially provides two scripts to manage topics, covering topic creation and deletion: kafka-topics.sh is responsible for creating and deleting topics, while the kafka-configs.sh script is responsible for…
How Kafka reads the contents of the offsets topic (__consumer_offsets)
As we all know, since ZooKeeper is not suited to frequent, high-volume write operations, newer versions of Kafka recommend keeping consumers' offset information in a topic inside Kafka itself, __consumer_offsets…
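A common way to inspect that internal topic is the console consumer with the offsets message formatter. A sketch follows; note that the formatter's class name is version-dependent (this one matches the Kafka 0.11/1.x line), and the broker address is a placeholder.

```shell
# Sketch only: needs a running broker. Prints [group,topic,partition]::offset
# records decoded from the internal __consumer_offsets topic.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic __consumer_offsets --from-beginning \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
```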
One of the most important features of Kafka topics is that they let consumers specify the subset of messages they want to consume. At one extreme, putting all your data in a single topic may be a bad idea, because consumers cannot choose the events they are interested in; they would have to consume all the messages. At the other extreme, having millions of different topics is also not a good idea,…
Generally a Kafka consumer can subscribe to several topics; likewise, the same program may need to send messages to several different Kafka topics, for example exceptions to an exception topic and normal records to a normal topic. In that case you need to instantiate several topic handles and send to each. Using rdkafka…
Structure: Nginx -> Flume -> Kafka -> Flume -> Kafka (because a cross-datacenter issue is involved, another Flume hop was added between the two Kafka clusters, which is a pain). Phenomenon: at the second layer, writing to a Kafka topic and reading from Kafka…
If you use Kafka to distribute messages, exceptions or other errors during data processing may cause data loss or inconsistency. In that case you may want to run the data through Kafka again. We know that Kafka by default keeps data on disk for 7 days, so you just need to…
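One way to "run the data through again" within that 7-day retention window is to rewind the consumer group to a timestamp with `kafka-consumer-groups.sh` (available in Kafka 0.11+). A sketch; the group, topic, and datetime are placeholders:

```shell
# Sketch only: needs a running 0.11+ cluster and an inactive consumer group.
# Rewinds the group so the retained messages are re-delivered and reprocessed.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-datetime 2017-09-06T00:00:00.000 --execute
```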