Deploying Kafka Clusters on Kubernetes

Source: Internet
Author: User
Tags: zookeeper

The main references are https://stackoverflow.com/questions/44651219/kafka-deployment-on-minikube and https://github.com/ramhiser/kafka-kubernetes. Both of these projects deploy single-node Kafka; here I try to extend the single-node setup into a multi-node Kafka cluster.

One, Single-Node Kafka

To build a Kafka cluster, start with a single node.

1. Create the ZooKeeper Service and Deployment files zookeeper-svc.yaml and zookeeper-deployment.yaml, and create them with kubectl create -f (the commands are sketched after the two files below):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  ports:
  - name: zookeeper-port
    port: 2181
    targetPort: 2181
  selector:
    app: zookeeper

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
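For reference, a minimal sketch of the creation and verification commands, assuming the two files above are saved under the names given:

kubectl create -f zookeeper-svc.yaml
kubectl create -f zookeeper-deployment.yaml
# wait for the pod to reach Running, then confirm the Service has picked up an endpoint
kubectl get pods -l app=zookeeper
kubectl get endpoints zookeeper-service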

2. Once the pod is running and the Service's endpoint has been configured successfully, you can continue by creating the Kafka files kafka-svc.yaml and kafka-deployment.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-port
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka
  template:
    metadata:
      labels:
        name: kafka
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[kafka-service ClusterIP]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: [zookeeper-service ClusterIP]:2181
        - name: KAFKA_BROKER_ID
          value: "1"

The ClusterIP values can be looked up with kubectl get svc. The value of KAFKA_ZOOKEEPER_CONNECT can also be set to zookeeper-service:2181 instead.
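As a sketch, the order of operations might look like this (the ClusterIP shown by kubectl get svc is what goes into the placeholders above):

# create the Service first so its ClusterIP can be filled into kafka-deployment.yaml
kubectl create -f kafka-svc.yaml
kubectl get svc zookeeper-service kafka-service
# fill in the [ClusterIP] placeholders in kafka-deployment.yaml, then create it
kubectl create -f kafka-deployment.yaml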

3. Once everything is created, the service needs to be tested. The method from blog post 78309050 is used as a reference.

Before that, because Kafka is running inside a container, you first need to execute the following command to enter it:

kubectl exec -it [Kafka pod name] /bin/bash

After entering the container, note that the Kafka command-line tools live in the /opt/kafka/bin directory; change into it with cd:

cd /opt/kafka/bin

The following steps are similar to those described in the blog post above. For single-node Kafka, the same node has to act as both producer and consumer. Execute the following command:

kafka-console-producer.sh --broker-list [Kafka service ClusterIP]:9092 --topic test

If everything works properly, a > prompt appears and waits for messages; this terminal is now the producer.

Then open another Linux terminal and execute the same commands to enter the container; this terminal acts as the consumer. Note that the way of creating a consumer described in the blog post above has changed in newer Kafka versions, and the following command is needed instead:

kafka-console-consumer.sh --bootstrap-server [Kafka service ClusterIP]:9092 --topic test --from-beginning

After that, type messages in the producer terminal and check whether the consumer receives them. If it does, the deployment is working correctly.
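If the test topic does not already exist (whether it is created automatically depends on the broker's auto.create.topics.enable setting), it can be created explicitly from the same directory; a sketch:

kafka-topics.sh --create --zookeeper [ZooKeeper service ClusterIP]:2181 --replication-factor 1 --partitions 1 --topic test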

Finally, you can also execute the following command to list all topics:

kafka-topics.sh --list --zookeeper [ZooKeeper service ClusterIP]:2181

Note that some commands use the Kafka port (9092) while others use the ZooKeeper port (2181); be careful to distinguish between them.

Two, Multi-Node Kafka Cluster

Once the single-node service runs successfully, you can try adding Kafka nodes to build a cluster. My Kubernetes cluster consists of 3 nodes, so the Kafka cluster I build also contains 3 nodes, each running on a different machine.

1. Build the ZooKeeper Cluster

Create the ZooKeeper YAML files zookeeper-svc2.yaml and zookeeper-deployment2.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo2
  labels:
    app: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zoo3
  labels:
    app: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-3

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-1
      name: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
        name: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
      name: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
        name: zookeeper-2
    spec:
      containers:
      - name: zoo2
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "2"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-3
      name: zookeeper-3
  template:
    metadata:
      labels:
        app: zookeeper-3
        name: zookeeper-3
    spec:
      containers:
      - name: zoo3
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "3"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3

This creates 3 Deployments and 3 Services, one of each per instance, so that the three ZooKeeper instances can each be addressed through its own Service.
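As before, a short sketch of the creation commands, assuming the file names above:

kubectl create -f zookeeper-svc2.yaml
kubectl create -f zookeeper-deployment2.yaml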

Once created, use kubectl logs to check the logs of the three ZooKeeper pods and make sure no errors occur. If a line like the following appears in the logs of all three pods, the ZooKeeper cluster has been built successfully:


LEADER ELECTION TOOK - 2613
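A minimal sketch of that log check (the pod name suffixes are generated by Kubernetes, so look them up first and substitute the real names):

kubectl get pods
kubectl logs [zookeeper-1 pod name] | grep -i "LEADER ELECTION"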

2. Build the Kafka Cluster

Likewise, create 3 Deployments and 3 Services by writing kafka-svc2.yaml and kafka-deployment2.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-1
    targetPort: 9092
    nodePort: 30901
    protocol: TCP
  selector:
    app: kafka-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-2
    targetPort: 9092
    nodePort: 30902
    protocol: TCP
  selector:
    app: kafka-service-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-3
    targetPort: 9092
    nodePort: 30903
    protocol: TCP
  selector:
    app: kafka-service-3

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-1
  template:
    metadata:
      labels:
        name: kafka-service-1
        app: kafka-service-1
    spec:
      containers:
      - name: kafka-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [kafka-service-1 ClusterIP]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:2:1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-2
  template:
    metadata:
      labels:
        name: kafka-service-2
        app: kafka-service-2
    spec:
      containers:
      - name: kafka-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [kafka-service-2 ClusterIP]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-3
  template:
    metadata:
      labels:
        name: kafka-service-3
        app: kafka-service-3
    spec:
      containers:
      - name: kafka-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [kafka-service-3 ClusterIP]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "3"

Note that deployment 1 also creates a new topic through the KAFKA_CREATE_TOPICS environment variable (mytopic, with 2 partitions and 1 replica).
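One way to confirm that the topic was created across the cluster is to describe it from inside any of the Kafka pods; a sketch (the output lists partition leaders and replicas by broker ID):

kubectl exec -it [any Kafka pod name] /bin/bash
cd /opt/kafka/bin
kafka-topics.sh --describe --zookeeper zoo1:2181,zoo2:2181,zoo3:2181 --topic mytopic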

3. Testing

The test procedure is basically the same as in the single-node case and is not repeated here. The difference is that this time different nodes can act as producers and consumers.
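For example (a sketch reusing the ClusterIPs of the three Kafka Services), a producer can be started in a pod on one node and a consumer in a pod on another node:

# terminal 1: inside a Kafka pod on one node
kafka-console-producer.sh --broker-list [kafka-service-1 ClusterIP]:9092 --topic mytopic
# terminal 2: inside a Kafka pod on another node
kafka-console-consumer.sh --bootstrap-server [kafka-service-2 ClusterIP]:9092 --topic mytopic --from-beginning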

At this point, the Kafka cluster on Kubernetes is complete!
