Configuring Kafka with SASL Authentication and Authorization



First, release notes. This example uses zookeeper-3.4.10 and kafka_2.11-. There is no particular requirement on the ZooKeeper version; Kafka must be version 0.9 or later.

Second, configuring SASL for ZooKeeper. The configuration is the same whether ZooKeeper runs as a cluster or a single node. The specific steps are as follows:

1, configure zoo.cfg by adding the following lines:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

2, write the JAAS file. This file defines the usernames and passwords that clients must use to connect to the ZooKeeper server.

The JAAS configuration section for ZooKeeper defaults to Server:

Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_kafka="kafka-sec"
    user_producer="prod-sec";
};
I named this configuration file zk_server_jaas.conf and placed it in the deployment directory /usr/zookeeper/zookeeper-3.4.10/conf/. The file specifies the authentication class (org.apache.kafka.common.security.plain.PlainLoginModule). Notice that this class lives in the Kafka namespace, which means the Kafka plugin jars must be added to ZooKeeper, so the next step is essential. The file defines two users, kafka and producer (user_ entries can define any number of users; the value is the corresponding user's password). Users defined through user_ entries are the credentials handed out to producer and consumer programs for authentication. The remaining two attributes, username and password, define the identity used for authentication between ZooKeeper nodes themselves.
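The user_ naming convention can be illustrated with a short sketch. This is a hypothetical helper, not part of Kafka's or ZooKeeper's API; it only shows how entries like user_producer map to a (producer, prod-sec) credential pair while username/password stay separate:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of how user_<name> JAAS options map to client credentials.
public class JaasUserOptions {

    // Extract every "user_xxx" option into a username -> password map,
    // ignoring the username/password pair used between server nodes.
    static Map<String, String> extractUsers(Map<String, String> jaasOptions) {
        Map<String, String> users = new HashMap<>();
        for (Map.Entry<String, String> e : jaasOptions.entrySet()) {
            if (e.getKey().startsWith("user_")) {
                users.put(e.getKey().substring("user_".length()), e.getValue());
            }
        }
        return users;
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        options.put("username", "admin");          // identity between ZooKeeper nodes
        options.put("password", "admin-secret");
        options.put("user_kafka", "kafka-sec");    // client credential: user kafka
        options.put("user_producer", "prod-sec");  // client credential: user producer

        Map<String, String> users = extractUsers(options);
        System.out.println(users.get("kafka"));    // kafka-sec
        System.out.println(users.get("producer")); // prod-sec
    }
}
```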

3, add the Kafka authentication plugin to ZooKeeper. ZooKeeper's authentication mechanism is pluggable, and the plugin can support JAAS. Since Kafka connects to ZooKeeper, we use Kafka's own authentication plugin directly. The plugin class is part of kafka-clients (a Maven artifact), so kafka-clients and a few of its dependency jars must be added to ZooKeeper's startup classpath. Your versions may differ from mine, and that is fine as long as the jar names match; they can all be found in Kafka's lib directory, so there is no need to download anything from the Internet.

My approach is more direct: create a directory named for_sasl under the ZooKeeper deployment root, copy all of the required jars into it, and then modify bin/zkEnv.sh, the script responsible for loading the environment variables and input parameters used to start ZooKeeper.

for i in "$ZOOBINDIR"/../for_sasl/*.jar; do
    CLASSPATH="$i:$CLASSPATH"
done
SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZOOCFGDIR/zk_server_jaas.conf"

The logic is simple: append every jar in the for_sasl directory to the CLASSPATH variable, then set a JVM parameter in the SERVER_JVMFLAGS variable. Both variables are passed to the JVM when ZooKeeper starts; see the script source for details.

4, configure the other nodes. Repeat steps 1 through 3 on the remaining ZooKeeper nodes.

5, start all nodes. Start the quorum process on every ZooKeeper node with bin/zkServer.sh start, and check the ZooKeeper logs to confirm that all nodes run stably. Then try bin/zkCli.sh to connect to each node; all connections should succeed.

Third, configuring SASL for the Kafka cluster. Once the ZooKeeper configuration above is working, Kafka can be configured. All Kafka nodes are peers, so the following steps are the same on every node.

1, create a JAAS configuration file. This defines the usernames and passwords that clients need to connect to the broker, as well as the credentials brokers use to communicate with each other. The configuration is defined in the KafkaServer section, as follows:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-sec"
    user_admin="admin-sec"
    user_producer="prod-sec"
    user_consumer="cons-sec";
};

Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafka"
    password="kafka-sec";
};
The configuration file is named kafka_server_jaas.conf and placed under /usr/kafka/kafka_2.11-. The Client section mainly configures the username and password the broker uses to connect to ZooKeeper. First, about KafkaServer: user_ entries define users for client programs (producers and consumers) to authenticate with; you can define any number of them, and later configuration can attach different ACLs to different users, which is beyond the scope of this article. This reflects my current understanding of the configuration. The example above defines three users, admin, producer, and consumer, each followed by its password (for example, user_producer defines the user producer with password prod-sec). One user is then chosen for communication between the brokers inside Kafka; here I chose the admin user, whose password is admin-sec. The Client section is easier to understand: it is the broker's connection to ZooKeeper, so just pick a user from the ZooKeeper JAAS file above and fill in its username and password.

2, configure server.properties:

listeners=SASL_PLAINTEXT://vubuntuez1:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true

Note the listeners entry: replace the hostname part (here, vubuntuez1) with the current node's hostname; the other nodes are configured the same way. Also note that allow.everyone.if.no.acl.found defaults to false. If it is not set to true, producers and consumers cannot use Kafka unless an ACL is configured for every authenticated user, which is too much trouble, so I set it directly to true: when no ACL configuration is found, all access operations are allowed.
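The effect of allow.everyone.if.no.acl.found can be sketched as a simplified model of the authorizer's decision. This is an illustration of the semantics described above, not the actual SimpleAclAuthorizer code:

```java
import java.util.Map;
import java.util.Set;

// Simplified model of the broker's authorization decision.
public class AclDecision {

    // acls maps a topic name to the set of principals allowed to access it;
    // a topic with no entry corresponds to "no ACL found".
    static boolean isAllowed(Map<String, Set<String>> acls, String topic,
                             String principal, boolean allowEveryoneIfNoAclFound) {
        Set<String> allowed = acls.get(topic);
        if (allowed == null) {
            // No ACL configured for this topic: the global flag decides.
            return allowEveryoneIfNoAclFound;
        }
        return allowed.contains(principal);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> acls = Map.of("secure-topic", Set.of("admin"));
        // secure-topic has an ACL, so only admin passes.
        System.out.println(isAllowed(acls, "secure-topic", "producer", true));  // false
        // open-topic has no ACL, so the flag decides.
        System.out.println(isAllowed(acls, "open-topic", "producer", true));    // true
        System.out.println(isAllowed(acls, "open-topic", "producer", false));   // false
    }
}
```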

3, configure the KAFKA_OPTS environment variable. As with a pseudo-distributed node, the KAFKA_OPTS variable mainly passes the JVM property java.security.auth.login.config, which specifies the JAAS configuration file; kafka-run-class.sh is responsible for forwarding KAFKA_OPTS to the JVM. The broker start command and this variable can therefore be put in the same script:

#!/bin/bash
ROOT=`dirname $0`
export KAFKA_OPTS="-Djava.security.auth.login.config=$ROOT/config/kafka_server_jaas.conf"
$ROOT/bin/kafka-server-start.sh -daemon $ROOT/config/server.properties

4, configure the other nodes. Configure the remaining Kafka broker nodes, paying attention to the listeners entry in server.properties as mentioned above.

5, start all Kafka nodes. Afterwards, inspect the logs to see whether every node runs stably without throwing exceptions, then continue with the operations below. Note that once Kafka authentication is configured, Kafka's own console-producer and console-consumer can no longer be used as before.

Fourth, writing a simple producer. Apart from some parameter values (such as passwords), the process of modifying producer and consumer code for SASL is the same as for a pseudo-distributed node. From version 0.9 onwards the producer API works with security and can be used; before 0.9 the producer API apparently does not support it — see the official documentation for specifics. For Kafka 0.10 and later I will not rewrite the producer code from scratch, but instead focus on the additional configuration operations required.

1, add configuration in the Java program. The Properties instance (call it props) passed when initializing KafkaProducer needs the following entries:

props.setProperty("security.protocol", "SASL_PLAINTEXT");
props.setProperty("sasl.mechanism", "PLAIN");

2, write the JAAS configuration file. Here we use kafka_client_jaas.conf, the file that holds the client's username and password for the Java program, for example /conf/kafka_client_jaas.conf. Add the corresponding file when starting the producer or the consumer.

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="producer"
    password="prod-sec";
};
The configuration section is KafkaClient by default; the name can be changed — check the official documentation. Here we use the producer user. In recent Kafka versions, producers and consumers connect directly to the broker, so there is no need to configure ZooKeeper.

3, specify the JAAS configuration file. Likewise, pass the JVM property java.security.auth.login.config. The Kafka client components call the JDK's System.getProperty() to read the JAAS configuration file path, so we can simply call System.setProperty() to point this property at the kafka_client_jaas.conf file created above.

For example: System.setProperty("java.security.auth.login.config", fsPath + "\\conf\\kafka_client_jaas.conf"); // set the property to the configuration file path

4, start the producer. It is recommended to set up a log4j configuration file with the log level at DEBUG, so you can see whether data is being written normally.
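A minimal log4j.properties along these lines enables that DEBUG output on the console (the layout pattern is illustrative; adjust it to taste):

```properties
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c - %m%n
```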

5, examples. In this case the producer uses the producer profile and the consumer uses the consumer profile. Producer sample code:

package com.howtoprogram.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTest {

    public static void main(String[] args) {
        String fsPath = System.getProperty("user.dir");
        // Point the JVM at the JAAS configuration file before creating the producer
        System.setProperty("java.security.auth.login.config", fsPath + "\\conf\\prod_client_jaas.conf");
        System.out.println("=================== Configuration file: " + fsPath + "\\conf\\prod_client_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", ""); // fill in your broker list
        props.put("acks", "1");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "PLAIN");

        Producer<String, String> producer = null;
        try {
            producer = new KafkaProducer<>(props);
            for (int i = 0; i < 100; i++) {
                String msg = "Message " + i;
                producer.send(new ProducerRecord<String, String>("HelloKafkaTopic", msg));
                System.out.println("Sent: " + msg);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (producer != null) {
                producer.close();
            }
        }
    }
}

Consumer Sample Code:

package com.howtoprogram.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerTest {

    public static void main(String[] args) {
        String fsPath = System.getProperty("user.dir");
        // Point the JVM at the JAAS configuration file before creating the consumer
        System.setProperty("java.security.auth.login.config", fsPath + "\\conf\\cons_client_jaas.conf");
        System.out.println("=================== Configuration file: " + fsPath + "\\conf\\cons_client_jaas.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", ""); // fill in your broker list
        props.put("group.id", "group-1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "earliest");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "PLAIN");

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
        kafkaConsumer.subscribe(Arrays.asList("HelloKafkaTopic"));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Partition: " + record.partition()
                        + " Offset: " + record.offset()
                        + " Value: " + record.value()
                        + " ThreadID: " + Thread.currentThread().getId());
            }
        }
    }
}

Five: problems you may encounter. If the console shows no errors when the program starts but there is no response for a long time: first check the Linux firewall (run service iptables stop; chkconfig iptables off), then check that the client's JAAS file name and the username and password configuration are correct.

The necessity of Kafka security configuration: Kafka is a high-throughput, distributed, subscription-based messaging system designed by LinkedIn and written in Scala. Its scalability, reliability, asynchronous communication, and high throughput have made it widely used, and more and more open-source distributed processing systems support integration with it. In particular, Spark Streaming as the back-end stream engine with Kafka as the front-end message system is becoming one of the mainstream architectures for stream processing. However, security breaches and data leaks are increasingly common, and security has become a factor that system selection has to consider. Kafka's lack of a security mechanism has created serious risks for its deployment in data-sensitive industries. This section focuses on Kafka: first its overall architecture and key concepts, then an analysis of the security problems in that architecture, and finally the security work Transwarp has done on Kafka and how to use it.

Kafka architecture and security. First, a few basic concepts:

Topic: Kafka divides incoming messages into categories; each category is called a topic and is identified by a unique topic name.
Producer: a process that publishes messages to a topic.
Consumer: a process that subscribes to messages from a topic.
Broker: a Kafka cluster contains one or more servers, each called a broker.

A typical Kafka cluster consists of a set of producers publishing messages, a set of brokers managing topics, and a set of consumers subscribing to messages. A topic can have multiple partitions, with each partition stored on exactly one broker. A producer can assign messages to specific partitions according to some strategy, such as simple round-robin across partitions or choosing a partition by the hash value of a particular field. Brokers use ZooKeeper to record all brokers in the cluster, elect partition leaders, record the offsets of consumers' consumed messages, and rebalance when a consumer group changes. The broker is passive in receiving and sending messages: the producer actively pushes messages, and the consumer actively pulls them.

Analyzing this framework reveals the following serious security issues:
1. Any host on the network can join the Kafka cluster by starting a broker process, receive messages from producers, tamper with them, and send them on to consumers.
2. Any host on the network can start a malicious producer or consumer, connect to a broker, send illegal messages, or pull private message data.
3. The broker does not support connecting to a Kerberos-enabled ZooKeeper cluster and sets no permissions on the data it stores in ZooKeeper, so any user can directly access the ZooKeeper cluster and modify or delete that data.
4. Topics in Kafka do not support access control lists, so any consumer (or producer) connected to the cluster can read messages from (or send messages to) any topic.

As Kafka is used more widely, especially in areas with high data privacy requirements (such as video surveillance of road traffic), these issues are a time bomb: once the intranet is breached or an insider turns malicious, all private data (such as vehicle travel records) can easily be stolen without even compromising the servers the brokers run on.

Kafka security design. Based on the analysis above, Transwarp enhanced Kafka's security in two ways:

Authentication: two mechanisms were designed and implemented, based on Kerberos and on IP. The former is strong authentication and more secure than the latter; the latter suits network environments with trusted IP addresses and is easier to deploy than the former.

Authorization: a topic-level permission model was designed and implemented. Topic permissions are divided into READ (pull data from the topic), WRITE (produce data to the topic), CREATE (create the topic), and DELETE (delete the topic).

The Kerberos-based identity mechanism works as follows. At startup, the broker must authenticate to the KDC (Kerberos server) using the identity and key file in its configuration; only after passing authentication can it join the Kafka cluster, otherwise it exits with an error. A producer (or consumer) establishes a secure socket connection to a broker through these steps:
1. At startup, the producer authenticates to the KDC and obtains a TGT (ticket-granting ticket), otherwise it exits with an error.
2. The producer uses the TGT to request the Kafka service from the KDC; the KDC validates the TGT and returns a SessionKey and a ServiceTicket to the producer.
3. The producer uses the SessionKey and ServiceTicket to connect to the broker. The broker decrypts the ServiceTicket with its own key, obtains the SessionKey used to communicate with the producer, and then uses the SessionKey to verify the producer's identity, establishing the connection if verification passes and refusing it otherwise.

ZooKeeper must have Kerberos authentication enabled to ensure that broker and consumer connections are secure. Topic ACLs are stored in ZooKeeper at node paths of the form /acl/<topic>/<user>, where the node data is a subset of R(ead), W(rite), C(reate), and D(elete). For example, if the data of /acl/transaction/jack is RW, then user jack can read and write the topic transaction. In addition, kafka is a privileged user: only the kafka user can grant or revoke permissions. The ACL-related ZooKeeper nodes therefore give full rights to kafka and no rights to any other user.
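The R/W/C/D encoding described above can be modeled with a short sketch. The types here are hypothetical illustrations of the permission model; the actual Transwarp implementation is not shown in this article:

```java
import java.util.EnumSet;
import java.util.Set;

// Model of the permission set stored in a ZooKeeper ACL node such as /acl/transaction/jack.
public class TopicAcl {

    enum Permission { READ, WRITE, CREATE, DELETE }

    // Parse node data like "RW" into a set of permissions.
    static Set<Permission> parse(String nodeData) {
        Set<Permission> perms = EnumSet.noneOf(Permission.class);
        for (char c : nodeData.toCharArray()) {
            switch (c) {
                case 'R': perms.add(Permission.READ); break;
                case 'W': perms.add(Permission.WRITE); break;
                case 'C': perms.add(Permission.CREATE); break;
                case 'D': perms.add(Permission.DELETE); break;
                default: throw new IllegalArgumentException("Unknown permission: " + c);
            }
        }
        return perms;
    }

    public static void main(String[] args) {
        // /acl/transaction/jack with data "RW": jack can read and write topic "transaction".
        Set<Permission> jack = parse("RW");
        System.out.println(jack.contains(Permission.READ));   // true
        System.out.println(jack.contains(Permission.DELETE)); // false
    }
}
```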
