Kafka provides many configuration parameters for the broker, producer, and consumer. Understanding these parameters is essential for using Kafka well. This article lists some of the important ones; see the official documentation for the complete configuration reference.
1. Download and unpack the binaries, then modify the value of broker.id:

wget http://apache.fayea.com/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz
tar -xzf kafka_2.10-0.10.0.0.tgz
cd kafka_2.10-0.10.0.0
# In config/server.properties, set broker.id to 1, i.e. broker.id=1

2. Start the ZooKeeper server and the Kafka server:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
First, Kafka installation (Kafka_2.9.2-0.8.1.1.zip)
1. Download and unzip the installation package
tar -xvf kafka_2.9.2-0.8.1.1.tgz, or: unzip kafka_2.9.2-0.8.1.1.zip
2. Modify the configuration file config/server.properties:
broker.id=0
host.name=xxx.xxx.xxx.xxx
zookeeper.connect=xxx.xxx.xxx.xxx (multiple ZooKeeper addresses can be configured, separated by commas)
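Collected together, the three properties above form a minimal server.properties sketch; the IP addresses below are placeholders, not values from the original article:

```properties
# Minimal server.properties sketch (placeholder addresses)
broker.id=0
host.name=192.168.1.100
# Multiple ZooKeeper nodes are comma-separated
zookeeper.connect=192.168.1.100:2181,192.168.1.101:2181
```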
3. Modify the configuration

bin/kafka-topics.sh --create --topic kafkatopic --replication-factor 1 --partitions 1 --zookeeper localhost:2181

5) Start the Kafka producer:

Ademacbook-pro:bin apple$ sh kafka-console-producer.sh --broker-list localhost:9092 --sync --topic kafkatopic

Note: to keep it running in the background, use:

sh kafka-console-producer.sh --broker-list localhost:9092 --sync --topic kafkatopic

6) Open
Each Kafka broker's configuration file server.properties must, at minimum, configure the following properties:
broker.id=0
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=2
log.retention.ho
# Name of the source
agent.sources = kafkaSource
# Name of the channels; suggested to be named according to type
agent.channels = memoryChannel
# Name of the sink; suggested to be named according to the target
agent.sinks = hdfsSink
# Specify the channel used by the source
agent.sources.kafkaSource.channels = memoryChannel
# Specify the channel the sink reads from; note that the property here is "channel", not "channels"
agent.sinks.hdfsSink.channel = memoryChannel
# -------- kafkaSource related
) the conf directory, then copy zoo_sample.cfg to zoo.cfg.
4) In zoo.cfg, modify dataDir=D:\zookeeper-3.3.6\zookeeper-3.3.6\data (adjust according to your extraction path).
3. Start ZooKeeper: go to the bin directory and execute zkServer.cmd. Open a command window in the bin directory (Shift + right mouse button), type zkServer.cmd, and press Enter.
4. Kafka configuration: extract the package to D:\kafka_2.11-0.11.0.1, go to the config directory, and edit
While tuning Kafka you will constantly adjust parameters in the configuration file, and sometimes hit errors such as java.lang.NumberFormatException. For example, for parameters like socket.receive.buffer.bytes and socket.send.buffer.bytes, setting a value of 5G triggers this error, because 5G converted to bytes is 5368709120, which exceeds the maximum value of a 32-bit Java int (2147483647).
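The failure mode described above can be reproduced in plain Java: parsing a value above Integer.MAX_VALUE as an int throws exactly this exception. This is a standalone illustration, not Kafka's actual parsing code:

```java
// Illustrates why a 5G buffer setting fails: 5 GiB in bytes does not fit
// in a Java int, so parsing it as an int throws NumberFormatException.
public class BufferSizeParse {
    public static void main(String[] args) {
        long fiveGiB = 5L * 1024 * 1024 * 1024; // 5368709120 bytes
        System.out.println(fiveGiB > Integer.MAX_VALUE); // true

        try {
            // What happens when an int-typed setting receives this value
            Integer.parseInt("5368709120");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

The practical takeaway: keep int-typed buffer settings at or below 2147483647 bytes (roughly 2 GiB).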
Apache Flume 1.6 supports Kafka as a sink out of the box.
[FLUME-2242]-FLUME Sink and Source for Apache Kafka
The official example is very considerate: you can run it directly, and study the detailed configuration later.
a1.channels = channel1
a1.sources = src-1
a1.sinks = k1
a1.sources.src-1.type = spooldir
a1.sources.src-1.channels = channel1
a1.sourc
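For context, the truncated example above can be completed into a full spooldir-to-Kafka agent sketch. This assumes the sink k1 is the Flume 1.6 Kafka sink mentioned earlier; the spool directory, broker list, and topic name are placeholders:

```properties
# Sketch: spooldir source -> memory channel -> Kafka sink (Flume 1.6)
a1.channels = channel1
a1.sources = src-1
a1.sinks = k1

a1.sources.src-1.type = spooldir
a1.sources.src-1.channels = channel1
a1.sources.src-1.spoolDir = /var/log/flume-spool

a1.channels.channel1.type = memory

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.topic = flume-topic
a1.sinks.k1.channel = channel1
```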
# log.retention.bytes
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true
# Directory where Kafka data is stored; multiple directories are separated by commas
log.dirs=/tmp/kafka-logs
# Broker server service port
port=9092
# This parameter forces a new segment to be rolled after the given time, even if the segment has not reached the size set by log.segment.bytes; it can also be overridden per topic at creation time
log.roll.hours=24
# Whether the controller is allowed to shut down the broker; if set to true, all leaders on this broker will be closed and transferred to other brokers
controlle
The broker's configuration file is located at config/server.properties in the Kafka directory.

Broker basic configuration:
broker.id: the broker ID, which must be a unique integer. It can be a custom number such as 0, 1, 2, 3, or the last octet of the broker's IP address, such as 23, 24, 25; the latter scheme is recommended.
auto.leader.rebalance.enable: whether leader partitions are automatically rebalanced; if enabled,
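The "last octet of the IP address" scheme for choosing broker.id can be sketched as a small helper. The method name is hypothetical, purely for illustration:

```java
// Hypothetical helper: derive a broker.id from the last octet of a host's
// IPv4 address, as suggested above (e.g. 192.168.1.23 -> broker.id=23).
public class BrokerIdFromIp {
    static int brokerIdFromIp(String ipv4) {
        String[] parts = ipv4.split("\\.");
        // The last dotted component is the final octet
        return Integer.parseInt(parts[parts.length - 1]);
    }

    public static void main(String[] args) {
        System.out.println(brokerIdFromIp("192.168.1.23")); // prints 23
    }
}
```

This keeps broker IDs unique within a subnet and makes it easy to map an ID back to a machine.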
bootstrap.servers: list of broker servers in the cluster, formatted as host1:port1,host2:port2
key.deserializer: class implementing the deserialization interface for keys
value.deserializer: class implementing the deserialization interface for values
group.id: the consumer group ID
consumer.timeout.ms: consumer connection timeout; the default is 5000 milliseconds
zookeeper.connect: ZooKeeper server addresses, formatted as host1:port1,host2:port2
zookeeper.connection.timeout.ms: ZooKeeper server timeout; the default is 6000 milliseconds
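Using the new-consumer parameters from the list above, a consumer configuration might look like the following sketch; the hostnames and group name are placeholders, and the deserializer classes are the standard String ones from the Kafka clients library:

```properties
# Consumer configuration sketch (placeholder hosts and group name)
bootstrap.servers=host1:9092,host2:9092
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
group.id=my-consumer-group
```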
broker.id=0

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
#port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=master

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
#
installation directory, as follows. Note that Git Bash cannot be used here, because Git reports a syntax error when executing the .bat files; switch to the Windows cmd command line instead.
3.1 Modifying the ZooKeeper and Kafka configuration files
1) Modify the server.properties file in the config directory, changing log.dirs=/d/sam.lin/software/kafka/kafka_2.9.1-0.8.2.1/
for storage to improve parallel processing capacity.
Replica: a partition consists of one or more replicas; replicas are used as partition backups.
4. Installation steps
(1) Download the kafka_2.10-0.9.0.0.tgz package and put it in the /usr/local directory:
tar zxvf kafka_2.10-0.9.0.0.tgz
ln -sv kafka_2.10-0.9.0.0 kafka
(2) Configure the Java runtime environment; Kafka needs ZooKeeper to start, and
Kafka API (Java version)
Apache Kafka includes new Java clients that will replace the existing Scala clients, though the Scala clients will remain for a while for compatibility. The new clients are available as separate jars with minimal dependencies, while the old Scala client w