The storage structure of Kafka in ZooKeeper


Reference site: http://kafka.apache.org/documentation.html#impl_zookeeper

1. ZooKeeper Client Commands

Once the ZooKeeper service is confirmed to be up, connect with the client using the command bin/zkCli.sh -server 127.0.0.1:2181.

The basic operations are as follows (a short sample session is shown after this list):

  1. Display the root directory and files: ls / (use the ls command to view the contents currently stored in ZooKeeper)

  2. Display the root directory and files with node status: ls2 / (view the current node's data, including the update count and other stat information)

  3. Create a file and set its initial content: create /zk "test" (creates a new znode "/zk" with the string "test" associated with it)

  4. Get the contents of a file: get /zk (confirm that the znode contains the string we created)

  5. Modify the contents of a file: set /zk "zkbak" (replaces the string associated with /zk)

  6. Delete a file: delete /zk (removes the znode we just created)

  7. Exit the client: quit

  8. Show available commands: help
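
A minimal sample session exercising these commands, assuming a fresh /zk node as above (prompt numbering and the stat output printed by get and set vary with the ZooKeeper version, and the node list returned by ls depends on what is registered):

    bin/zkCli.sh -server 127.0.0.1:2181

    [zk: 127.0.0.1:2181(CONNECTED) 0] ls /
    [zookeeper, brokers, consumers, config, controller_epoch]
    [zk: 127.0.0.1:2181(CONNECTED) 1] create /zk "test"
    Created /zk
    [zk: 127.0.0.1:2181(CONNECTED) 2] get /zk
    test
    (followed by stat fields such as cZxid, mtime and dataLength)
    [zk: 127.0.0.1:2181(CONNECTED) 3] set /zk "zkbak"
    [zk: 127.0.0.1:2181(CONNECTED) 4] delete /zk
    [zk: 127.0.0.1:2181(CONNECTED) 5] quit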

2. Topic Registration Information

/brokers/topics/[topic]:

Stores the partition allocation information for a topic.

schema:
{
    "version": "version number, currently fixed at 1",
    "partitions": {
        "partition id": [brokerId list of the in-sync replica group],
        "partition id": [brokerId list of the in-sync replica group],
        ...
    }
}

example:
{
    "version": 1,
    "partitions": {"0": [0, 1, 2]}
}
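
This assignment can be read back directly from a zkCli session; for a topic named replicatedtopic (the topic name used in the later examples) with the assignment shown above, the output would resemble:

    get /brokers/topics/replicatedtopic
    {"version":1,"partitions":{"0":[0,1,2]}}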

3. Partition State Information

/brokers/topics/[topic]/partitions/[0...N], where [0...N] is the partition index number

/brokers/topics/[topic]/partitions/[partitionid]/state

schema:
{
    "controller_epoch": number of central controller elections in the Kafka cluster,
    "leader": brokerId elected as leader of this partition,
    "version": version number, defaults to 1,
    "leader_epoch": number of leader elections for this partition,
    "isr": [brokerId list of the in-sync replica group]
}

example:
{
    "controller_epoch": 1,
    "leader": 0,
    "version": 1,
    "leader_epoch": 0,
    "isr": [0, 1, 2]
}
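
From a zkCli session, the state of partition 0 of the hypothetical replicatedtopic topic could be read like this (the values simply mirror the example above and will differ per cluster):

    get /brokers/topics/replicatedtopic/partitions/0/state
    {"controller_epoch":1,"leader":0,"version":1,"leader_epoch":0,"isr":[0,1,2]}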

 

4. Broker Registration Information

/brokers/ids/[0...N]

Each broker must specify a numeric id in its configuration file (globally unique, it must not repeat); the corresponding registration node is an ephemeral (temporary) znode.

schema:
{
    "jmx_port": JMX port number,
    "timestamp": timestamp of the Kafka broker's initial startup,
    "host": host name or IP address,
    "version": version number, defaults to 1,
    "port": service port of the Kafka broker, set by the port parameter in server.properties
}

example:
{
    "jmx_port": 1,
    "timestamp": "1452068227537",
    "host": "h1",
    "version": 1,
    "port": 9092
}
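
Assuming brokers 0, 1 and 2 are registered and broker 0 matches the example above, a zkCli session would show something like:

    ls /brokers/ids
    [0, 1, 2]
    get /brokers/ids/0
    {"jmx_port":1,"timestamp":"1452068227537","host":"h1","version":1,"port":9092}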

5. Controller Epoch

/controller_epoch, int (epoch)

This value is an integer. It is set to 1 when the first broker in the Kafka cluster starts for the first time. Whenever a broker in the cluster changes or goes down, a new central controller is re-elected, and every controller change increments the controller_epoch value by 1.
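
The current epoch can be read directly from a zkCli session; on a cluster whose controller has not changed since the first startup, the value would simply be:

    get /controller_epoch
    1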

6. Controller Registration Information

/controller, int (brokerId of the controller). Stores information about the Kafka broker on which the central controller resides.

schema:
{
    "version": version number, defaults to 1,
    "brokerid": unique broker id within the Kafka cluster,
    "timestamp": timestamp of the most recent central controller change
}

example:
{
    "version": 1,
    "brokerid": 0,
    "timestamp": "1452068227409"
}
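
Read from a zkCli session, with broker 0 acting as controller as in the example above, the node would look like:

    get /controller
    {"version":1,"brokerid":0,"timestamp":"1452068227409"}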

7. Consumer Registration Information

Each consumer has a unique id (the consumerId can be specified in the configuration file or generated by the system), which is used to identify the consumer.

/consumers/[groupid]/ids/[consumeridstring]

This is an ephemeral (temporary) znode. Its name follows the consumerIdString generation rule shown below, and its value represents the topic + partitions list that this consumer currently consumes.

consumerId generation rule:

    var consumerUuid: String = null
    if (config.consumerId != null) {
        consumerUuid = config.consumerId
    } else {
        val uuid = UUID.randomUUID()
        consumerUuid = "%s-%d-%s".format(
            InetAddress.getLocalHost.getHostName, System.currentTimeMillis,
            uuid.getMostSignificantBits.toHexString.substring(0, 8))
    }
    val consumerIdString = config.groupId + "_" + consumerUuid

schema:
{
    "version": version number, defaults to 1,
    "subscription": {              // list of subscribed topics
        "topic name": number of consumer threads for this topic
    },
    "pattern": "static",
    "timestamp": "timestamp when the consumer started"
}

example:
{
    "version": 1,
    "subscription": {
        "replicatedtopic": 1
    },
    "pattern": "white_list",
    "timestamp": "1452134230082"
}
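
For a hypothetical consumer group named group1 subscribed to replicatedtopic, a zkCli session might show an id node whose name follows the generation rule above (the id string here is made up for illustration):

    ls /consumers/group1/ids
    [group1_h1-1452134230000-1a2b3c4d]
    get /consumers/group1/ids/group1_h1-1452134230000-1a2b3c4d
    {"version":1,"subscription":{"replicatedtopic":1},"pattern":"white_list","timestamp":"1452134230082"}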

8. Consumer Owner

/consumers/[groupid]/owners/[topic]/[partitionid], consumerIdString + threadId index number

When a consumer starts, the following actions are triggered:

a) First, it performs "consumer id registration";

b) Then, under the "consumer id registration" node, it registers a watch to listen for other consumers in the current group "exiting" and "joining". Whenever the list of child nodes under this znode path changes, load balancing is triggered for the consumers in the group (for example, if one consumer fails, the other consumers take over its partitions);

c) Under the "broker id registration" node, it registers a watch to monitor broker liveness. If the broker list changes, rebalancing is triggered for the consumers of all groups.
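
After a rebalance completes, the resulting ownership can be inspected under the owners path; continuing the hypothetical group1/replicatedtopic example, the value is the consumerIdString followed by the thread index:

    get /consumers/group1/owners/replicatedtopic/0
    group1_h1-1452134230000-1a2b3c4d-0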

9. Consumer Offset

/consumers/[groupid]/offsets/[topic]/[partitionid], long (offset)

Used to track the largest offset consumed so far in each partition by the consumer group.

This znode is a persistent node. Note that the offset is keyed by group_id, so when a consumer in the consumer group fails and a rebalance is triggered, the other consumers can continue consuming from the recorded offset.
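
The stored value is a plain long; continuing the hypothetical example, it can be read like this (the offset shown is arbitrary):

    get /consumers/group1/offsets/replicatedtopic/0
    42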

10. Topic Configuration

/config/topics/[topic_name]
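
No schema is listed for this node; as a sketch, in Kafka 0.8.x/0.9.x it is believed to hold a small JSON object with a version field and the topic's overridden configuration, readable the same way (the exact layout may differ by Kafka version):

    get /config/topics/replicatedtopic
    {"version":1,"config":{}}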

