Original address: https://www.cnblogs.com/lilixin/p/5775877.html
Kafka installation and use
Download address: https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.1.1/kafka_2.10-0.8.1.1.tgz

Installing and starting Kafka

Step 1: Install Kafka
$ tar -xzf kafka_2.10-0.8.1.1.tgz
Step 2: Configure server.properties
Configure ZooKeeper (this assumes ZooKeeper is already installed; if not, search online for installation instructions).
Go to the root directory of the Kafka installation and edit the configuration:
vim config/server.properties
Set the property zookeeper.connect=ip:2181,ip2:2181
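For example, filled in with the three ZooKeeper nodes that appear in the server configurations later in this article, the property would read:

```properties
# Comma-separated list of ZooKeeper host:port pairs (values from this article)
zookeeper.connect=192.168.1.213:2181,192.168.1.216:2181,192.168.1.217:2181
```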
Step 3: server.properties configuration notes
The three most important Kafka configurations are: broker.id, log.dirs, and zookeeper.connect.
Descriptions of the Kafka server-side config/server.properties parameters are as follows:
(Reference configuration Note address: http://blog.csdn.net/lizhitao/article/details/25667831)
Actual configuration in use: the Kafka configuration file on server 211:

broker.id=1
port=9092
host.name=192.168.1.211
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=2
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=60000
log.cleaner.enable=false
zookeeper.connect=192.168.1.213:2181,192.168.1.216:2181,192.168.1.217:2181
zookeeper.connection.timeout.ms=1000000

Actual configuration in use: the Kafka configuration file on server 210:

broker.id=2
port=9092
host.name=192.168.1.210
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=2
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=60000
log.cleaner.enable=false
zookeeper.connect=192.168.1.213:2181,192.168.1.216:2181,192.168.1.217:2181
zookeeper.connection.timeout.ms=1000000
Step 4: Start Kafka
(Start ZooKeeper first if it is not already running: $ bin/zookeeper-server-start.sh config/zookeeper.properties &)
$ cd kafka_2.10-0.8.1.1
$ bin/kafka-server-start.sh -daemon config/server.properties &
(For the experiments in this article, start at least two brokers, e.g.:
$ bin/kafka-server-start.sh -daemon config/server-1.properties &)

Step 5: Create a topic
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Step 6: Verify that the topic was created successfully
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
(localhost here is the ZooKeeper address)
Describe a topic:
bin/kafka-topics.sh --describe --zookeeper 192.168.1.8:2181 --topic test
Startup error: Unrecognized VM option '+UseCompressedOops'
Open bin/kafka-run-class.sh and find:

if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi

Remove -XX:+UseCompressedOops from this line.
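The edit can also be scripted with sed. A minimal sketch, performed here on a scratch copy of (part of) the options line rather than a real installation, so the file path and the shortened option string are illustrative:

```shell
# Scratch copy of part of the KAFKA_JVM_PERFORMANCE_OPTS value, so the
# substitution can be demonstrated without touching a real kafka-run-class.sh.
printf '%s\n' '-server -XX:+UseCompressedOops -XX:+UseParNewGC' > /tmp/jvm-opts.txt

# Delete the option (and its leading space) that the JVM rejects.
sed -i 's/ -XX:+UseCompressedOops//' /tmp/jvm-opts.txt

cat /tmp/jvm-opts.txt   # -server -XX:+UseParNewGC
```

Running the same substitution against bin/kafka-run-class.sh itself achieves the fix described above.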
Startup error: Could not reserve enough space for object heap
Cause and resolution:
bin/kafka-server-start.sh sets the heap with KAFKA_HEAP_OPTS="-Xmx1G -Xms1G". Change the sizes here to 256M; the error occurred because the test machine has only 1 GB of RAM in total.
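The same kind of scripted edit works for the heap settings. Again a sketch on a scratch copy of the line rather than the real bin/kafka-server-start.sh:

```shell
# Scratch copy of the heap line from kafka-server-start.sh.
printf '%s\n' 'export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"' > /tmp/heap-line.sh

# Shrink both the maximum (-Xmx) and initial (-Xms) heap to 256M,
# suitable for a test machine with only 1 GB of RAM in total.
sed -i 's/1G/256M/g' /tmp/heap-line.sh

cat /tmp/heap-line.sh   # export KAFKA_HEAP_OPTS="-Xmx256M -Xms256M"
```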
Step 7: Send a message
To verify, send some messages. Start a producer in console mode:
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
(Change localhost here to the machine's actual IP, otherwise an error occurs; the reason is unclear.)
Message example:
Step 8: Start a consumer
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Deleting a topic (use with caution): this only deletes the metadata in ZooKeeper; the message files must be deleted manually.
bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic test --zookeeper 192.168.197.170:2181,192.168.197.171:2181
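Each partition of a topic lives in its own &lt;topic&gt;-&lt;partition&gt; directory under log.dirs (/tmp/kafka-logs in the configurations above), and those directories are what must be removed by hand on every broker. A sketch using dummy directories to stand in for real partition data:

```shell
# Simulate the partition directories that topic "test" would leave
# under log.dirs (num.partitions=2 in the configurations above).
mkdir -p /tmp/kafka-logs/test-0 /tmp/kafka-logs/test-1

# Manual cleanup step: remove every partition directory of the topic.
rm -rf /tmp/kafka-logs/test-*

ls /tmp/kafka-logs   # the test-* directories are gone
```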