Kafka 0.8.2 Official Documentation, Chinese Version (Part 2): API

2. API

We are rewriting the JVM clients for Kafka. Kafka 0.8.2 includes a newly rewritten Java producer, and the next release will include an equivalent Java consumer. These new clients are intended to replace the existing Scala clients, but they will coexist for a period of time for compatibility. The new clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.

2.1 Producer API

As of Kafka 0.8.2, we encourage you to use the new Java producer. This client has been tested in production and is faster and more fully featured than the previous Scala client. You can use it by adding the following Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>

You can see how to use the producer in the javadocs.
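
As a quick illustration, here is a minimal sketch that sends a single message with the new Java producer. The broker address localhost:9092 and the topic name my-topic are placeholder assumptions for this example, not values from the document.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list, used only to bootstrap cluster metadata.
        props.put("bootstrap.servers", "localhost:9092");
        // The new producer uses serializer classes instead of the old Scala encoders.
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            // send() is asynchronous and returns a Future<RecordMetadata>.
            producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"));
        } finally {
            producer.close();
        }
    }
}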

For those interested in the legacy Scala producer API, relevant information can be found here.

2.2 High Level Consumer API

class Consumer {
  /**
   *  Create a ConsumerConnector.
   *
   *  @param config  at the minimum, need to specify the group id of the consumer and the
   *                 zookeeper connection string zookeeper.connect.
   */
  public static kafka.javaapi.consumer.ConsumerConnector createJavaConsumerConnector(ConsumerConfig config);
}

/**
 *  V: type of the message
 *  K: type of the optional key associated with the message
 */
public interface kafka.javaapi.consumer.ConsumerConnector {
  /**
   *  Create a list of message streams of type T for each topic.
   *
   *  @param topicCountMap  a map of (topic, #streams) pairs
   *  @param decoder a decoder that converts from Message to T
   *  @return a map of (topic, list of KafkaStream) pairs.
   *          The number of items in the list is #streams. Each stream supports
   *          an iterator over message/metadata pairs.
   */
  public <K,V> Map<String, List<KafkaStream<K,V>>>
    createMessageStreams(Map<String, Integer> topicCountMap, Decoder<K> keyDecoder, Decoder<V> valueDecoder);

  /**
   *  Create a list of message streams of type T for each topic, using the default decoder.
   */
  public Map<String, List<KafkaStream<byte[], byte[]>>> createMessageStreams(Map<String, Integer> topicCountMap);

  /**
   *  Create a list of message streams for topics matching a wildcard.
   *
   *  @param topicFilter a TopicFilter that specifies which topics to
   *                     subscribe to (encapsulates a whitelist or a blacklist).
   *  @param numStreams the number of message streams to return.
   *  @param keyDecoder a decoder that decodes the message key
   *  @param valueDecoder a decoder that decodes the message itself
   *  @return a list of KafkaStream. Each stream supports an
   *          iterator over its MessageAndMetadata elements.
   */
  public <K,V> List<KafkaStream<K,V>>
    createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams, Decoder<K> keyDecoder, Decoder<V> valueDecoder);

  /**
   *  Create a list of message streams for topics matching a wildcard, using the default decoder.
   */
  public List<KafkaStream<byte[], byte[]>> createMessageStreamsByFilter(TopicFilter topicFilter, int numStreams);

  /**
   *  Create a list of message streams for topics matching a wildcard, using the default decoder, with one stream.
   */
  public List<KafkaStream<byte[], byte[]>> createMessageStreamsByFilter(TopicFilter topicFilter);

  /**
   *  Commit the offsets of all topic/partitions connected by this connector.
   */
  public void commitOffsets();

  /**
   *  Shut down the connector.
   */
  public void shutdown();
}

You can refer to this example to learn how to use the High level consumer API.
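
To make the connector's life cycle concrete, the following minimal sketch consumes messages from a single topic with one stream. The ZooKeeper address, group id, and topic name are placeholder assumptions.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // group.id and zookeeper.connect are the minimum required settings.
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for one stream for the placeholder topic "my-topic".
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 1);

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCountMap);

        // The stream's iterator blocks until a message arrives; each element
        // is a MessageAndMetadata carrying the payload and its metadata.
        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }

        connector.shutdown();
    }
}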

2.3 Simple Consumer API

class kafka.javaapi.consumer.SimpleConsumer {
  /**
   *  Fetch a set of messages from a topic.
   *
   *  @param request specifies the topic name, topic partition, starting byte offset,
   *                 maximum bytes to be fetched.
   *  @return a set of fetched messages
   */
  public FetchResponse fetch(kafka.javaapi.FetchRequest request);

  /**
   *  Fetch metadata for a sequence of topics.
   *
   *  @param request specifies the versionId, clientId, sequence of topics.
   *  @return metadata for each topic in the request.
   */
  public kafka.javaapi.TopicMetadataResponse send(kafka.javaapi.TopicMetadataRequest request);

  /**
   *  Get a list of valid offsets (up to maxSize) before the given time.
   *
   *  @param request a [[kafka.javaapi.OffsetRequest]] object.
   *  @return a [[kafka.javaapi.OffsetResponse]] object.
   */
  public kafka.javaapi.OffsetResponse getOffsetsBefore(OffsetRequest request);

  /**
   *  Close the SimpleConsumer.
   */
  public void close();
}

For most applications, the high-level API is fully sufficient. Some applications, however, need features that the high-level API does not expose (for example, setting the initial offset when restarting a consumer). Such applications can use our low-level SimpleConsumer API instead. The logic is somewhat more complicated; you can refer to this example, or the sketch below.
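
As a minimal sketch of the low-level API, the following reads one batch of messages from a single partition. The host, port, topic, and client id are placeholder assumptions, and for brevity it skips leader discovery and error-code handling, both of which a real application must perform.

import java.nio.ByteBuffer;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleConsumerExample {
    public static void main(String[] args) {
        // Arguments: host, port, socket timeout (ms), buffer size (bytes), client id.
        SimpleConsumer consumer =
            new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "example-client");

        // Fetch up to 100000 bytes from partition 0 of "my-topic", starting at offset 0.
        FetchRequest request = new FetchRequestBuilder()
            .clientId("example-client")
            .addFetch("my-topic", 0, 0L, 100000)
            .build();

        // A production consumer should check response.hasError() here.
        FetchResponse response = consumer.fetch(request);

        for (MessageAndOffset messageAndOffset : response.messageSet("my-topic", 0)) {
            ByteBuffer payload = messageAndOffset.message().payload();
            byte[] bytes = new byte[payload.limit()];
            payload.get(bytes);
            System.out.println(messageAndOffset.offset() + ": " + new String(bytes));
        }

        consumer.close();
    }
}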

2.4 Kafka Hadoop Consumer API

One of our basic use cases is to provide a horizontally scalable solution for aggregating data and loading it into Hadoop. To support this use case, we provide a Hadoop-based consumer that spawns many map tasks to pull data from the Kafka cluster in parallel. This makes it very fast to load Kafka data into Hadoop (we fully saturated the network with only a handful of Kafka servers).

Information on using the Hadoop consumer can be found here.
