An introduction to Kafka, with PHP-based Kafka installation and testing

This article introduces Kafka and walks through installing and testing Kafka with PHP. It goes into some detail; I hope it helps anyone who needs it.

Brief introduction

Kafka is a high-throughput, distributed publish-subscribe messaging system.

Kafka roles you must know

Producer: the message producer.
Consumer: the message consumer.
Topic: messages are recorded by category; Kafka maintains feeds of messages, and each category of messages is called a topic.
Broker: Kafka runs as a cluster of one or more servers, each of which is called a broker. Consumers subscribe to one or more topics and pull data from the brokers to consume the published messages.

Classic model

1. The number of partitions under a topic should not be less than the number of consumers in a group; in other words, a consumer group should not have more consumers than the topic has partitions, because the extra consumers sit idle and are wasted.
2. A partition under a topic can be consumed simultaneously by consumers from different consumer groups.
3. A partition under a topic can be consumed by only one consumer within the same consumer group.
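The three rules above can be sketched with a small, purely illustrative PHP function (`assignPartitions` is hypothetical, not part of any Kafka API): within one group, each partition goes to exactly one consumer, and consumers beyond the partition count receive nothing.

```php
<?php
// Illustrative round-robin assignment of partitions to the consumers
// of a single group: each partition gets exactly one consumer, and
// consumers beyond the partition count stay idle (empty assignment).
function assignPartitions(int $partitions, array $consumers): array
{
    $assignment = array_fill_keys($consumers, []);
    for ($p = 0; $p < $partitions; $p++) {
        // Partition $p goes to exactly one consumer in the group
        $consumer = $consumers[$p % count($consumers)];
        $assignment[$consumer][] = $p;
    }
    return $assignment;
}

// 2 partitions, 3 consumers in one group: consumer "c3" sits idle
$result = assignPartitions(2, ['c1', 'c2', 'c3']);
print_r($result);
```

With more consumers than partitions the extra ones get empty assignments, which is exactly why rule 1 calls them wasted.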

Common parameter Description

request.required.acks

The Kafka producer has three ACK mechanisms; when initializing the producer you choose one by setting request.required.acks to a different value in the producer configuration.

0: the producer does not wait for acknowledgment from the broker before sending the next (batch of) messages. This option gives the lowest latency but the weakest durability guarantee: some data is lost when a server fails. For example, if the leader dies, the producer does not know it, and messages already sent to that broker are never received.

1: the producer sends the next message only after the leader has received the data and acknowledged it. This option provides better durability, since the client waits for the server to confirm that the request succeeded; only messages written to a leader that dies before replication are lost.

-1: the producer does not consider a send complete until the follower replicas have acknowledged receipt of the data.
This option provides the best durability: no messages are lost as long as at least one in-sync replica remains alive.

Across the three mechanisms, performance decreases in turn (producer throughput drops) while data robustness increases.
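As a toy summary of this tradeoff (plain PHP, not real Kafka code; `ackProperties` is a hypothetical helper): each acks value trades how long the producer waits against whether a message it considers "sent" can still be lost.

```php
<?php
// Toy model of the three request.required.acks mechanisms described
// above: what the producer waits for, and whether an acknowledged
// message can still be lost on broker failure.
function ackProperties(int $acks): array
{
    switch ($acks) {
        case 0:  // fire-and-forget: lowest latency
            return ['waitsForLeader' => false, 'waitsForReplicas' => false, 'ackedMsgCanBeLost' => true];
        case 1:  // leader acknowledgment only
            return ['waitsForLeader' => true, 'waitsForReplicas' => false, 'ackedMsgCanBeLost' => true];
        case -1: // all in-sync replicas must acknowledge
            return ['waitsForLeader' => true, 'waitsForReplicas' => true, 'ackedMsgCanBeLost' => false];
        default:
            throw new InvalidArgumentException("unknown acks value: $acks");
    }
}

print_r(ackProperties(-1));
```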

auto.offset.reset

1. earliest: automatically resets the offset to the earliest offset.
2. latest: automatically resets the offset to the latest offset (the default).
3. none: throws an exception to the consumer if the consumer group has no previously committed offset.
4. Any other value: throws an exception to the consumer (invalid parameter).
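These rules can be sketched as a small, purely illustrative PHP function (`resolveStartOffset` is hypothetical; real consumers apply this logic inside the client library): the reset policy only matters when the group has no committed offset.

```php
<?php
// Illustrative auto.offset.reset decision: given a consumer group's
// stored offset (null if none was ever committed) and the earliest /
// latest offsets present in the partition, decide where to start.
function resolveStartOffset(?int $stored, int $earliest, int $latest, string $policy): int
{
    if ($stored !== null) {
        return $stored; // a committed offset always wins; the policy is ignored
    }
    switch ($policy) {
        case 'earliest':
            return $earliest;
        case 'latest':
            return $latest;
        case 'none':
            throw new RuntimeException('no previous offset for this consumer group');
        default:
            throw new InvalidArgumentException('invalid auto.offset.reset value');
    }
}
```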

Kafka Installation and simple testing

Install Kafka (no installation step needed, just unpack)

```shell
# Official download address: http://kafka.apache.org/downloads
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.1/kafka_2.12-1.1.1.tgz
tar -xzf kafka_2.12-1.1.1.tgz
cd kafka_2.12-1.1.1
```

Start Kafka Server

```shell
# Zookeeper must be started first
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
```

Test with the Kafka console clients

```shell
# Create a topic "test" with 2 partitions
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic test
Created topic "test".

# List all topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
test

# Show topic details
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test    PartitionCount:2    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: test    Partition: 1    Leader: 0    Replicas: 0    Isr: 0

# Start a console producer (type messages after the > prompt)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>i am a new msg!
>i am a good msg?

# Start a console consumer (waits for messages)
# Note the --from-beginning flag: with it, the consumer rereads the topic
# from the start every time; try running with and without it to see the difference
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
i am a new msg!
i am a good msg?
```

Installing the PHP extension for Kafka

```shell
git clone https://github.com/arnaud-lb/php-rdkafka.git
cd php-rdkafka
phpize
./configure
make all -j 5
sudo make install

# Then enable the extension in php.ini:
vim [php]/php.ini
# add the line:
extension=rdkafka.so
```

PHP Code Practice

Producers

```php
<?php
$conf = new RdKafka\Conf();
$conf->setDrMsgCb(function ($kafka, $message) {
    file_put_contents("./dr_cb.log", var_export($message, true) . PHP_EOL, FILE_APPEND);
});
$conf->setErrorCb(function ($kafka, $err, $reason) {
    file_put_contents("./err_cb.log", sprintf("Kafka error: %s (reason: %s)", rd_kafka_err2str($err), $reason) . PHP_EOL, FILE_APPEND);
});

$rk = new RdKafka\Producer($conf);
$rk->setLogLevel(LOG_DEBUG);
$rk->addBrokers("127.0.0.1");

$cf = new RdKafka\TopicConf();
$cf->set('request.required.acks', 0);
$topic = $rk->newTopic("test", $cf);

$option = 'qkl';
for ($i = 0; $i < 20; $i++) {
    // RD_KAFKA_PARTITION_UA lets librdkafka pick the partition automatically
    // $option is the optional message key
    $topic->produce(RD_KAFKA_PARTITION_UA, 0, "qkl. $i", $option);
}

// Poll until the outbound queue has drained
$len = $rk->getOutQLen();
while ($len > 0) {
    $len = $rk->getOutQLen();
    var_dump($len);
    $rk->poll(50);
}
```

Run the producer

```shell
php producer.php
# Output: the outbound queue length is dumped until it drains to zero
# (exact values depend on timing), e.g.:
int(20)
int(20)
int(0)

# The consumer shell started above should now print the messages:
qkl. 0
qkl. 1
...
qkl. 19
```

Consumer

```php
<?php
$conf = new RdKafka\Conf();
$conf->setDrMsgCb(function ($kafka, $message) {
    file_put_contents("./c_dr_cb.log", var_export($message, true), FILE_APPEND);
});
$conf->setErrorCb(function ($kafka, $err, $reason) {
    file_put_contents("./err_cb.log", sprintf("Kafka error: %s (reason: %s)", rd_kafka_err2str($err), $reason) . PHP_EOL, FILE_APPEND);
});

// Set the consumer group
$conf->set('group.id', 'myConsumerGroup');

$rk = new RdKafka\Consumer($conf);
$rk->addBrokers("127.0.0.1");

$topicConf = new RdKafka\TopicConf();
$topicConf->set('request.required.acks', 1);
// Auto-commit offsets every auto.commit.interval.ms; enabling it is not recommended here
//$topicConf->set('auto.commit.enable', 1);
$topicConf->set('auto.commit.enable', 0);
$topicConf->set('auto.commit.interval.ms', 100);

// Store offsets in a local file ...
//$topicConf->set('offset.store.method', 'file');
//$topicConf->set('offset.store.path', __DIR__);
// ... or store offsets on the broker
$topicConf->set('offset.store.method', 'broker');

// smallest: roughly, consume from the beginning (equivalent to "earliest" above)
// largest: roughly, consume from the latest offset (equivalent to "latest" above)
//$topicConf->set('auto.offset.reset', 'smallest');

$topic = $rk->newTopic("test", $topicConf);

// Parameter 1: the partition to consume (partition 0 here); choose one start offset:
// RD_KAFKA_OFFSET_BEGINNING: consume from the beginning
// RD_KAFKA_OFFSET_STORED:    resume from the last stored offset
// RD_KAFKA_OFFSET_END:       consume only new messages
$topic->consumeStart(0, RD_KAFKA_OFFSET_BEGINNING);
//$topic->consumeStart(0, RD_KAFKA_OFFSET_END);
//$topic->consumeStart(0, RD_KAFKA_OFFSET_STORED);

while (true) {
    // Parameter 1: the partition to consume (partition 0)
    // Parameter 2: how long to block waiting for a message, in ms
    $message = $topic->consume(0, 12 * 1000);
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            var_dump($message);
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:
            echo "No more messages; will wait for more\n";
            break;
        case RD_KAFKA_RESP_ERR__TIMED_OUT:
            echo "Timed out\n";
            break;
        default:
            throw new \Exception($message->errstr(), $message->err);
    }
}
```

View server metadata (topic/partition/broker)

```php
<?php
$conf = new RdKafka\Conf();
$conf->setDrMsgCb(function ($kafka, $message) {
    file_put_contents("./xx.log", var_export($message, true), FILE_APPEND);
});
$conf->setErrorCb(function ($kafka, $err, $reason) {
    printf("Kafka error: %s (reason: %s)\n", rd_kafka_err2str($err), $reason);
});
$conf->set('group.id', 'myConsumerGroup');

$rk = new RdKafka\Consumer($conf);
$rk->addBrokers("127.0.0.1");

$allInfo = $rk->metadata(true, null, 60e3);
$topics = $allInfo->getTopics();

echo rd_kafka_offset_tail(100); // the argument was lost in the original text; 100 is a placeholder
echo "--";
echo count($topics);
echo "--";

foreach ($topics as $topic) {
    $topicName = $topic->getTopic();
    if ($topicName == "__consumer_offsets") {
        continue;
    }
    $partitions = $topic->getPartitions();
    foreach ($partitions as $partition) {
        //$rf = new ReflectionClass(get_class($partition));
        //foreach ($rf->getMethods() as $f) {
        //    var_dump($f);
        //}
        //die();
        $topPartition = new RdKafka\TopicPartition($topicName, $partition->getId());
        echo "Current topic: " . $topPartition->getTopic() . " - " . $partition->getId() . " - ";
        echo "offset: " . $topPartition->getOffset() . PHP_EOL;
    }
}
```