Deploying a Kafka Cluster on Kubernetes


This deployment mainly draws on https://stackoverflow.com/questions/44651219/kafka-deployment-on-minikube and https://github.com/ramhiser/kafka-kubernetes. Both projects deploy a single-node Kafka, however, so here I extend the single-node setup into a multi-node Kafka cluster.

I. Single-Node Kafka

To build a Kafka cluster, it still pays to start from a single node.

1. Create the Zookeeper service files zookeeper-svc.yaml and zookeeper-deployment.yaml, then create both with kubectl create -f:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
spec:
  ports:
  - name: zookeeper-port
    port: 2181
    targetPort: 2181
  selector:
    app: zookeeper

 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
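Assuming the file names above, the two resources can be created and checked with commands along these lines (a sketch; the label and service names are the ones defined in the YAML):

```shell
kubectl create -f zookeeper-svc.yaml
kubectl create -f zookeeper-deployment.yaml
# wait for the pod to reach Running and for the service to get an endpoint
kubectl get pods -l app=zookeeper
kubectl get endpoints zookeeper-service
```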

2. Once the pod is running and the service's endpoint has been populated, you can go on to create kafka-svc.yaml and kafka-deployment.yaml for Kafka:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-port
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka

 

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka
  template:
    metadata:
      labels:
        name: kafka
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[clusterIP of the kafka service]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: [clusterIP of the zookeeper service]:2181
        - name: KAFKA_BROKER_ID
          value: "1"

Look up the clusterIP values with kubectl get svc. The value of KAFKA_ZOOKEEPER_CONNECT can also be set to zookeeper-service:2181 instead.
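For reference, the clusterIP lookup might look like this (a sketch; the service names are those defined in the YAML files above):

```shell
kubectl get svc
# or pull out just the clusterIP of a single service:
kubectl get svc kafka-service -o jsonpath='{.spec.clusterIP}'
kubectl get svc zookeeper-service -o jsonpath='{.spec.clusterIP}'
```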

3. After creation, the service needs to be tested. I referenced the method from blog post 78309050.

Before testing, since Kafka runs inside a container, first enter the container with:

kubectl exec -it [Kafka pod name] -- /bin/bash

Inside the container, the Kafka command-line tools live under /opt/kafka/bin; cd into that directory:

cd /opt/kafka/bin

The remaining steps are similar to those in the blog post mentioned above. For single-node Kafka, the same node acts as both producer and consumer. Run:

kafka-console-producer.sh --broker-list [clusterIP of the kafka service]:9092 --topic test

If everything is working, a > prompt appears below, waiting for message input; this terminal has become the producer.

Open another Linux terminal and run the same commands to enter the container; this terminal will serve as the consumer. Note that the consumer invocation described in the blog post above has changed in newer Kafka versions; run the following instead:

kafka-console-consumer.sh --bootstrap-server [clusterIP of the kafka service]:9092 --topic test --from-beginning

Then type messages into the producer and check whether the consumer receives them. If it does, the setup works.

Finally, you can also list all topics with:

kafka-topics.sh --list --zookeeper [clusterIP of the zookeeper service]:2181

Note that some commands take Kafka's port and others take Zookeeper's port; be careful to use the right one.

II. Multi-Node Kafka Cluster

Once the single-node service runs successfully, you can add Kafka nodes to form a cluster. My Kubernetes cluster has 3 nodes, so the Kafka cluster I build also has 3 nodes, one running on each machine.

1. Build the Zookeeper Cluster

Create the Zookeeper files zookeeper-svc2.yaml and zookeeper-deployment2.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo2
  labels:
    app: zookeeper-2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-2
---
apiVersion: v1
kind: Service
metadata:
  name: zoo3
  labels:
    app: zookeeper-3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-3

 

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-1
      name: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
        name: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-2
      name: zookeeper-2
  template:
    metadata:
      labels:
        app: zookeeper-2
        name: zookeeper-2
    spec:
      containers:
      - name: zoo2
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "2"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-3
      name: zookeeper-3
  template:
    metadata:
      labels:
        app: zookeeper-3
        name: zookeeper-3
    spec:
      containers:
      - name: zoo3
        image: digitalwonderland/zookeeper
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "3"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
        - name: ZOOKEEPER_SERVER_2
          value: zoo2
        - name: ZOOKEEPER_SERVER_3
          value: zoo3

This creates 3 deployments and 3 services in one-to-one correspondence, so all three instances can serve clients.
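Assuming the file names above, the Zookeeper cluster can be brought up and inspected roughly as follows; the -o wide output should show the three pods spread across the nodes:

```shell
kubectl create -f zookeeper-svc2.yaml
kubectl create -f zookeeper-deployment2.yaml
# check that the three pods come up, and on which nodes
kubectl get pods -o wide
```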

After creation, check the logs of the three Zookeeper pods with kubectl logs and make sure no errors occurred. If all 3 pods' logs contain a line like the following, the Zookeeper cluster has come up successfully:

2016-10-06 14:04:05,904 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:...] - LEADING - LEADER ELECTION TOOK - 2613

 

2. Build the Kafka Cluster

Likewise create 3 deployments and 3 services, writing kafka-svc2.yaml and kafka-deployment2.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service-1
  labels:
    app: kafka-service-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-1
    targetPort: 9092
    nodePort: 30901
    protocol: TCP
  selector:
    app: kafka-service-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-2
  labels:
    app: kafka-service-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-2
    targetPort: 9092
    nodePort: 30902
    protocol: TCP
  selector:
    app: kafka-service-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service-3
  labels:
    app: kafka-service-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-service-3
    targetPort: 9092
    nodePort: 30903
    protocol: TCP
  selector:
    app: kafka-service-3

 

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-1
  template:
    metadata:
      labels:
        name: kafka-service-1
        app: kafka-service-1
    spec:
      containers:
      - name: kafka-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-1]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: mytopic:2:1
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-2
  template:
    metadata:
      labels:
        name: kafka-service-2
        app: kafka-service-2
    spec:
      containers:
      - name: kafka-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-2]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-service-3
  template:
    metadata:
      labels:
        name: kafka-service-3
        app: kafka-service-3
    spec:
      containers:
      - name: kafka-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: [clusterIP of kafka-service-3]
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181,zoo2:2181,zoo3:2181
        - name: KAFKA_BROKER_ID
          value: "3"

Deployment 1 additionally creates a new topic (KAFKA_CREATE_TOPICS: mytopic with 2 partitions and 1 replica).
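Assuming the file names above, the Kafka cluster can be created and the auto-created topic verified roughly like this (kafka-topics.sh is run inside any Kafka container, under /opt/kafka/bin):

```shell
kubectl create -f kafka-svc2.yaml
kubectl create -f kafka-deployment2.yaml
# inside a Kafka container: describe the topic created by deployment 1
kafka-topics.sh --describe --zookeeper zoo1:2181 --topic mytopic
```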

3. Testing

Testing is essentially the same as in the single-node case, so I won't repeat it here. The difference is that different nodes can now act as producer and consumer.
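As a sketch of such a cross-node test: run the producer inside one broker's container and the consumer inside another's, pointing each at a different service (the clusterIPs come from kubectl get svc):

```shell
# terminal 1, inside the kafka-1 container:
kafka-console-producer.sh --broker-list [clusterIP of kafka-service-1]:9092 --topic mytopic

# terminal 2, inside the kafka-2 container:
kafka-console-consumer.sh --bootstrap-server [clusterIP of kafka-service-2]:9092 --topic mytopic --from-beginning
```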

 

And with that, the Kafka cluster on Kubernetes is complete!
