Kafka on Docker, Part Two of the Trilogy: Building the Local Environment


In the previous chapter, "Kafka on Docker, Part One of the Trilogy: Quickly Experiencing Kafka", we got a fast taste of Kafka's message publishing and subscription features, but all we saw of the environment was the execution of a few commands and scripts. In this chapter we learn how to write those scripts ourselves and build a local Kafka environment.

In this practice we will build Docker images ourselves; the materials used can be obtained here: git@github.com:zq2599/docker_kafka.git

The whole environment involves multiple containers, so let's first list them all and then sort out the relationships among them:

Kafka Server provides the message service;
The message producer's role is to produce messages for the specified topic;
The message consumer's role is to subscribe to the specified topic and consume its messages.

Zookeeper

Zookeeper is used in standalone mode and needs no customization, so the official image daocloud.io/library/zookeeper:3.3.6 can be used directly.
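
If the image is not already on your machine, you can pull it in advance (docker-compose will also pull it automatically on first start) and check that it arrived:

docker pull daocloud.io/library/zookeeper:3.3.6
docker images | grep zookeeper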

Kafka Server

Searching for Kafka on hub.docker.com turns up no officially marked image, so we build one ourselves. Before writing the Dockerfile, prepare two materials: the Kafka installation package and the shell script that starts Kafka.

The Kafka installation package is version 2.9.2-0.8.1, available in git@github.com:zq2599/docker_kafka.git; please clone the repository to get it.
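
For example, cloning over SSH (assuming an SSH key is configured for your GitHub account):

git clone git@github.com:zq2599/docker_kafka.git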

The shell script that starts the Kafka server is as follows; it is very simple, it just executes the server-start script in Kafka's bin directory:

#!/bin/bash
$WORK_PATH/$KAFKA_PACKAGE_NAME/bin/kafka-server-start.sh $WORK_PATH/$KAFKA_PACKAGE_NAME/config/server.properties

Next we can write the Dockerfile, as follows:

# Docker image of Kafka
# VERSION 0.0.1
# author: bolingcavalry

# Use tomcat as the base image to avoid having to set up the Java environment
FROM daocloud.io/library/tomcat:7.0.77-jre8

# Author
MAINTAINER bolingcavalry <zq2599@gmail.com>

# Define the working directory
ENV WORK_PATH /usr/local/work

# Define the Kafka folder name
ENV KAFKA_PACKAGE_NAME kafka_2.9.2-0.8.1

# Create the working directory
RUN mkdir -p $WORK_PATH

# Copy the shell script that starts the server into the working directory
COPY ./start_server.sh $WORK_PATH/

# Copy the Kafka compressed file into the working directory
COPY ./$KAFKA_PACKAGE_NAME.tgz $WORK_PATH/

# Extract the archive
RUN tar -xvf $WORK_PATH/$KAFKA_PACKAGE_NAME.tgz -C $WORK_PATH/

# Delete the compressed file
RUN rm $WORK_PATH/$KAFKA_PACKAGE_NAME.tgz

# Run sed to modify the config file, changing the zookeeper connection address to the zookeeper container's link alias
RUN sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=zkhost:2181/g' $WORK_PATH/$KAFKA_PACKAGE_NAME/config/server.properties

# Grant execute permission to the shell script
RUN chmod a+x $WORK_PATH/start_server.sh

As the file shows, the operations are not complex: copy and extract the Kafka installation package, copy the startup shell script, and change the zookeeper address in the configuration file to the zookeeper container's link alias.
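
To make the effect of that sed command concrete, here is what the relevant line of config/server.properties looks like before and after the image build (a sketch; the rest of the file is unchanged):

# before (as shipped with Kafka):
zookeeper.connect=localhost:2181

# after the sed command in the Dockerfile:
zookeeper.connect=zkhost:2181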

Once the Dockerfile is written, place kafka_2.9.2-0.8.1.tgz and start_server.sh in the same directory as the Dockerfile, then execute in that directory:

docker build -t bolingcavalry/kafka:0.0.1 .
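
When the build finishes, you can verify that the image exists, for example:

docker images bolingcavalry/kafka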

After the image build succeeds, create a new directory and write the docker-compose.yml script in it, as follows:

version: '2'
services:
  zk_server:
    image: daocloud.io/library/zookeeper:3.3.6
    restart: always
  kafka_server:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
    command: /bin/sh -c '/usr/local/work/start_server.sh'
    restart: always
  message_producer:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
      - kafka_server:kafkahost
    restart: always
  message_consumer:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
    restart: always

Four containers are configured in docker-compose.yml:
1. The zookeeper container uses the official image;
2. The other three are created from the newly built bolingcavalry/kafka image;
3. kafka_server executes the start_server.sh script at startup, which brings up the Kafka service;
4. message_producer and message_consumer merely have the Kafka environment installed, so that messages can be sent or subscribed to from the command line; these containers do not start a server themselves;
5. kafka_server, message_producer, and message_consumer all connect to the zookeeper container via the links parameter, and message_producer additionally connects to kafka_server, because the Kafka server's address is needed when sending messages;

Now open a terminal and execute docker-compose up -d in the directory containing docker-compose.yml to start all the containers;
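
To confirm everything is up, you can list the containers and inspect the host entries that the links parameter injects (a quick sanity check; the actual container names usually carry a project prefix derived from the directory name, so check docker-compose ps first):

docker-compose ps

# the link aliases show up in /etc/hosts inside the linked containers,
# e.g. in the producer container (replace the placeholder with the real name):
docker exec <producer_container_name> cat /etc/hosts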

At this point the local environment has been built successfully, and we can experience Kafka's publish and subscribe services from the command line; for the specific commands, refer to the previous chapter, "Kafka on Docker, Part One of the Trilogy: Quickly Experiencing Kafka".
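
As a rough reminder, with Kafka 0.8.1's standard console scripts the commands look something like this (a sketch only: the topic name test and the container-name placeholders are ours, and the previous chapter remains the authoritative reference):

# open a shell in the producer container and send messages:
docker exec -it <producer_container_name> /bin/bash
/usr/local/work/kafka_2.9.2-0.8.1/bin/kafka-console-producer.sh --broker-list kafkahost:9092 --topic test

# open a shell in the consumer container and subscribe:
docker exec -it <consumer_container_name> /bin/bash
/usr/local/work/kafka_2.9.2-0.8.1/bin/kafka-console-consumer.sh --zookeeper zkhost:2181 --topic test --from-beginning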

That is the whole process of building a local Kafka environment; in the next chapter we will develop a Java application to experience Kafka's message publish and subscribe services.
