Kafka Source Code: Request Processing


Contents:
1. KafkaRequestHandlerPool
2. KafkaApis.handle
   2.1 ApiKeys Enumeration Class
3. Request Data Structure
   3.1 RequestId
   3.2 Header
   3.3 Body

The entry point in KafkaServer is:

apis = new KafkaApis(socketServer.requestChannel, replicaManager, groupCoordinator,
  kafkaController, zkUtils, config.brokerId, config, metadataCache, metrics, authorizer)
requestHandlerPool = new KafkaRequestHandlerPool(config.brokerId, socketServer.requestChannel, apis, config.numIoThreads)

KafkaApis is instantiated first with the relevant parameters, and then KafkaRequestHandlerPool is instantiated on top of it. Let's look at KafkaRequestHandlerPool first.

1. KafkaRequestHandlerPool

class KafkaRequestHandlerPool(val brokerId: Int,
                              val requestChannel: RequestChannel,
                              val apis: KafkaApis,
                              numThreads: Int) extends Logging with KafkaMetricsGroup {

  /* a meter to track the average free capacity of the request handlers */
  private val aggregateIdleMeter = newMeter("RequestHandlerAvgIdlePercent", "percent", TimeUnit.NANOSECONDS)

  this.logIdent = "[Kafka Request Handler on Broker " + brokerId + "], "
  val threads = new Array[Thread](numThreads)
  val runnables = new Array[KafkaRequestHandler](numThreads)
  for (i <- 0 until numThreads) {
    runnables(i) = new KafkaRequestHandler(i, brokerId, aggregateIdleMeter, numThreads, requestChannel, apis)
    threads(i) = Utils.daemonThread("kafka-request-handler-" + i, runnables(i))
    threads(i).start()
  }
  // ...
}

The main job here is simply to start numThreads daemon threads; the work each thread executes is a KafkaRequestHandler.

/**
 * A thread that answers Kafka requests.
 */
class KafkaRequestHandler(id: Int,
                          brokerId: Int,
                          val aggregateIdleMeter: Meter,
                          val totalHandlerThreads: Int,
                          val requestChannel: RequestChannel,
                          apis: KafkaApis) extends Runnable with Logging {
  this.logIdent = "[Kafka Request Handler " + id + " on Broker " + brokerId + "], "

  def run() {
    while (true) {
      try {
        var req: RequestChannel.Request = null
        while (req == null) {
          // We use a single meter for aggregate idle percentage for the thread pool.
          // Since meter is calculated as total_recorded_value / time_window and
          // time_window is independent of the number of threads, each recorded idle
          // time should be discounted by # threads.
          val startSelectTime = SystemTime.nanoseconds
          req = requestChannel.receiveRequest(300)
          val idleTime = SystemTime.nanoseconds - startSelectTime
          aggregateIdleMeter.mark(idleTime / totalHandlerThreads)
        }

        if (req eq RequestChannel.AllDone) {
          debug("Kafka request handler %d on broker %d received shut down command".format(id, brokerId))
          return
        }
        req.requestDequeueTimeMs = SystemTime.milliseconds
        trace("Kafka request handler %d on broker %d handling request %s".format(id, brokerId, req))
        apis.handle(req) // this is the key: how the request actually gets handled
      } catch {
        case e: Throwable => error("Exception when handling request", e)
      }
    }
  }

  // shutdown ...
}
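To make the pattern concrete, here is a minimal, self-contained sketch of the same idea; none of the names below are Kafka's. A few daemon threads pull work off a shared blocking queue (standing in for RequestChannel) with a poll timeout, and a poison-pill value (standing in for RequestChannel.AllDone) tells each handler to stop:

import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// Illustrative sketch only of the handler-pool pattern shown above.
object HandlerPoolSketch {
  sealed trait Work
  case class Req(payload: String) extends Work
  case object AllDone extends Work                        // poison pill, like RequestChannel.AllDone

  def main(args: Array[String]): Unit = {
    val channel = new LinkedBlockingQueue[Work]()         // stands in for RequestChannel
    val numThreads = 4

    val threads = (0 until numThreads).map { i =>
      val worker: Runnable = () => {
        var running = true
        while (running) {
          // poll with a timeout so the thread can account for idle time, like receiveRequest(300)
          channel.poll(300, TimeUnit.MILLISECONDS) match {
            case Req(p)  => println(s"handler $i handling $p")   // the apis.handle(req) analogue
            case AllDone => running = false                      // shut down this handler
            case null    => ()                                    // timed out, keep polling
          }
        }
      }
      val t = new Thread(worker, "request-handler-" + i)
      t.setDaemon(true)
      t.start()
      t
    }

    (1 to 10).foreach(n => channel.put(Req("request-" + n)))
    (0 until numThreads).foreach(_ => channel.put(AllDone))      // one pill per handler thread
    threads.foreach(_.join())
  }
}

The shared queue is what decouples the network threads (producers of requests) from the handler threads (consumers), which is exactly the role RequestChannel plays between SocketServer and this pool.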

In the run method we can see that the real work is done by apis.handle(req). Let's look at that next.

2. KafkaApis.handle

Look directly at the code:

/**
 * Top-level method that handles all requests and multiplexes to the right api
 */
def handle(request: RequestChannel.Request) {
  try {
    trace("Handling request:%s from connection %s;securityProtocol:%s,principal:%s".
      format(request.requestDesc(true), request.connectionId, request.securityProtocol, request.session.principal))
    ApiKeys.forId(request.requestId) match {
      // depending on the requestId, call a different method to handle each kind of request
      case ApiKeys.PRODUCE => handleProducerRequest(request)
      case ApiKeys.FETCH => handleFetchRequest(request)
      case ApiKeys.LIST_OFFSETS => handleOffsetRequest(request)
      case ApiKeys.METADATA => handleTopicMetadataRequest(request)
      case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
      case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
      case ApiKeys.UPDATE_METADATA_KEY => handleUpdateMetadataRequest(request)
      case ApiKeys.CONTROLLED_SHUTDOWN_KEY => handleControlledShutdownRequest(request)
      case ApiKeys.OFFSET_COMMIT => handleOffsetCommitRequest(request)
      case ApiKeys.OFFSET_FETCH => handleOffsetFetchRequest(request)
      case ApiKeys.GROUP_COORDINATOR => handleGroupCoordinatorRequest(request)
      case ApiKeys.JOIN_GROUP => handleJoinGroupRequest(request)
      case ApiKeys.HEARTBEAT => handleHeartbeatRequest(request)
      case ApiKeys.LEAVE_GROUP => handleLeaveGroupRequest(request)
      case ApiKeys.SYNC_GROUP => handleSyncGroupRequest(request)
      case ApiKeys.DESCRIBE_GROUPS => handleDescribeGroupRequest(request)
      case ApiKeys.LIST_GROUPS => handleListGroupsRequest(request)
      case ApiKeys.SASL_HANDSHAKE => handleSaslHandshakeRequest(request)
      case ApiKeys.API_VERSIONS => handleApiVersionsRequest(request)
      case requestId => throw new KafkaException("Unknown api code " + requestId)
    }
  } catch {
    case e: Throwable =>
      if (request.requestObj != null) {
        request.requestObj.handleError(e, requestChannel, request)
        error("Error when handling request %s".format(request.requestObj), e)
      } else {
        val response = request.body.getErrorResponse(request.header.apiVersion, e)
        val respHeader = new ResponseHeader(request.header.correlationId)

        /* If request doesn't have a default error response, we just close the connection.
           For example, when produce request has acks set to 0 */
        if (response == null)
          requestChannel.closeConnection(request.processor, request)
        else
          requestChannel.sendResponse(new Response(request, new ResponseSend(request.connectionId, respHeader, response)))

        error("Error when handling request %s".format(request.body), e)
      }
  } finally
    request.apiLocalCompleteTimeMs = SystemTime.milliseconds
}
2.1 ApiKeys Enumeration Class
PRODUCE(0, "Produce"),                             // produce messages
FETCH(1, "Fetch"),                                 // consumer fetches messages
LIST_OFFSETS(2, "Offsets"),                        // get offsets
METADATA(3, "Metadata"),                           // get topic metadata
LEADER_AND_ISR(4, "LeaderAndIsr"),
STOP_REPLICA(5, "StopReplica"),                    // stop a replica
UPDATE_METADATA_KEY(6, "UpdateMetadata"),          // update metadata
CONTROLLED_SHUTDOWN_KEY(7, "ControlledShutdown"),  // controlled shutdown
OFFSET_COMMIT(8, "OffsetCommit"),                  // commit offsets
OFFSET_FETCH(9, "OffsetFetch"),                    // fetch offsets
GROUP_COORDINATOR(10, "GroupCoordinator"),         // group coordination
JOIN_GROUP(11, "JoinGroup"),                       // join a group
HEARTBEAT(12, "Heartbeat"),                        // heartbeat
LEAVE_GROUP(13, "LeaveGroup"),                     // leave a group
SYNC_GROUP(14, "SyncGroup"),                       // sync a group
DESCRIBE_GROUPS(15, "DescribeGroups"),             // describe groups
LIST_GROUPS(16, "ListGroups"),                     // list groups
SASL_HANDSHAKE(17, "SaslHandshake"),               // SASL handshake
API_VERSIONS(18, "ApiVersions");                   // API versions
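Since handle dispatches on this numeric id, it is worth seeing how an id read off the wire maps back to a constant. Below is a minimal sketch of that id-to-constant lookup; the names are illustrative only and this is not Kafka's actual ApiKeys implementation:

// Illustrative sketch of the id -> API-key lookup that ApiKeys.forId performs.
object ApiKeySketch {
  sealed abstract class ApiKey(val id: Short, val name: String)
  case object Produce     extends ApiKey(0, "Produce")
  case object Fetch       extends ApiKey(1, "Fetch")
  case object ListOffsets extends ApiKey(2, "Offsets")
  // ... remaining keys elided

  private val byId: Map[Short, ApiKey] =
    Seq(Produce, Fetch, ListOffsets).map(k => k.id -> k).toMap

  def forId(id: Short): ApiKey =
    byId.getOrElse(id, throw new IllegalArgumentException("Unknown api code " + id))
}

// e.g. ApiKeySketch.forId(1) yields Fetch, which is then what the match in handle() dispatches on.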

This part is relatively simple. What remains is the request data structure itself, plus the individual handler methods. Let's step through the data structure below.

3. Request Data Structure

Every request eventually becomes a RequestChannel.Request, so let's look at that class first.

case class Request(processor: Int, connectionId: String, session: Session, private var buffer: ByteBuffer,
                   startTimeMs: Long, securityProtocol: SecurityProtocol) {
  // ...
  val requestId = buffer.getShort()

  private val keyToNameAndDeserializerMap: Map[Short, (ByteBuffer) => RequestOrResponse] =
    Map(ApiKeys.FETCH.id -> FetchRequest.readFrom,
        ApiKeys.CONTROLLED_SHUTDOWN_KEY.id -> ControlledShutdownRequest.readFrom)

  val requestObj = keyToNameAndDeserializerMap.get(requestId).map(readFrom => readFrom(buffer)).orNull

  val header: RequestHeader =
    if (requestObj == null) {
      buffer.rewind
      try RequestHeader.parse(buffer)
      catch {
        case ex: Throwable =>
          throw new InvalidRequestException(s"Error parsing request header. Our best guess of the apiKey is: $requestId", ex)
      }
    } else
      null

  val body: AbstractRequest =
    if (requestObj == null)
      try {
        // For unsupported version of ApiVersionsRequest, create a dummy request to enable an error response to be returned later
        if (header.apiKey == ApiKeys.API_VERSIONS.id && !Protocol.apiVersionSupported(header.apiKey, header.apiVersion))
          new ApiVersionsRequest
        else
          AbstractRequest.getRequest(header.apiKey, header.apiVersion, buffer)
      } catch {
        case ex: Throwable =>
          throw new InvalidRequestException(s"Error getting request for apiKey: ${header.apiKey} and apiVersion: ${header.apiVersion}", ex)
      }
    else
      null

  buffer = null

  private val requestLogger = Logger.getLogger("kafka.request.logger")

  def requestDesc(details: Boolean): String = {
    if (requestObj != null)
      requestObj.describe(details)
    else
      header.toString + " -- " + body.toString
  }
  // ...
}

There are several main parts:
- first, requestId, which is a Short value;
- then the header (the message header), which is a RequestHeader;
- finally, the body (the message content), whose type is AbstractRequest.

3.1 RequestId

The requestId identifies the API type; KafkaApis uses it to decide which handler method should process the message.

3.2 Header

Let's look at the structure of RequestHeader.

private final short apiKey;
private final short apiVersion;
private final String clientId;
private final int correlationId;

It is mainly these four fields: apiKey, apiVersion, clientId, and correlationId.
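For illustration, here is a rough sketch of how those four fields could be read off the wire. This is not Kafka's actual RequestHeader.parse (the HeaderSketch name is made up), and it assumes the classic request-header layout of int16 apiKey, int16 apiVersion, int32 correlationId, then a length-prefixed clientId string:

import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

// Illustrative only: a simplified stand-in for RequestHeader.parse.
case class HeaderSketch(apiKey: Short, apiVersion: Short, correlationId: Int, clientId: String)

object HeaderSketch {
  def parse(buffer: ByteBuffer): HeaderSketch = {
    val apiKey        = buffer.getShort()   // which API (the requestId seen above)
    val apiVersion    = buffer.getShort()   // schema version of that API
    val correlationId = buffer.getInt()     // echoed back in the response header
    val clientIdLen   = buffer.getShort()   // length-prefixed client id (a negative length would mean null, ignored here)
    val clientIdBytes = new Array[Byte](clientIdLen)
    buffer.get(clientIdBytes)
    HeaderSketch(apiKey, apiVersion, correlationId, new String(clientIdBytes, StandardCharsets.UTF_8))
  }
}

Kafka's real implementation reads these fields through its protocol schema definitions rather than hand-rolled buffer reads, but the layout it decodes is the same handful of fields.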

3.3 Body

The message body corresponds to the AbstractRequest class. Its main job is to parse the details of the message based on the apiKey and the version number.

public static AbstractRequest getRequest(int requestId, int versionId, ByteBuffer buffer) {
    ApiKeys apiKey = ApiKeys.forId(requestId);
    switch (apiKey) {
        case PRODUCE:
            return ProduceRequest.parse(buffer, versionId);
        case FETCH:
            return FetchRequest.parse(buffer, versionId);
        case LIST_OFFSETS:
            return ListOffsetRequest.parse(buffer, versionId);
        case METADATA:
            return MetadataRequest.parse(buffer, versionId);
        case OFFSET_COMMIT:
            return OffsetCommitRequest.parse(buffer, versionId);
        case OFFSET_FETCH:
            return OffsetFetchRequest.parse(buffer, versionId);
        case GROUP_COORDINATOR:
            return GroupCoordinatorRequest.parse(buffer, versionId);
        case JOIN_GROUP:
            return JoinGroupRequest.parse(buffer, versionId);
        case HEARTBEAT:
            return HeartbeatRequest.parse(buffer, versionId);
        case LEAVE_GROUP:
            return LeaveGroupRequest.parse(buffer, versionId);
        case SYNC_GROUP:
            return SyncGroupRequest.parse(buffer, versionId);
        case STOP_REPLICA:
            return StopReplicaRequest.parse(buffer, versionId);
        case CONTROLLED_SHUTDOWN_KEY:
            return ControlledShutdownRequest.parse(buffer, versionId);
        case UPDATE_METADATA_KEY:
            return UpdateMetadataRequest.parse(buffer, versionId);
        case LEADER_AND_ISR:
            return LeaderAndIsrRequest.parse(buffer, versionId);
        case DESCRIBE_GROUPS:
            return DescribeGroupsRequest.parse(buffer, versionId);
        case LIST_GROUPS:
            return ListGroupsRequest.parse(buffer, versionId);
        case SASL_HANDSHAKE:
            return SaslHandshakeRequest.parse(buffer, versionId);
        case API_VERSIONS:
            return ApiVersionsRequest.parse(buffer, versionId);
        default:
            throw new AssertionError(String.format("ApiKey %s is not currently handled in `getRequest`, the " +
                    "code should be updated to do so.", apiKey));
    }
}
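For comparison, here is a small, self-contained sketch of the same dispatch idea expressed as a map from API key to parser function instead of a switch. The names are illustrative only and this is not Kafka code:

import java.nio.ByteBuffer

// Illustrative sketch: the same "pick a parser by API key" idea as getRequest,
// written as a registry map instead of a switch statement.
object BodyParserSketch {
  sealed trait Body
  case class ProduceBody(bytes: Array[Byte]) extends Body
  case class FetchBody(bytes: Array[Byte]) extends Body

  type Parser = (ByteBuffer, Short) => Body           // (buffer, apiVersion) => parsed body

  private def remaining(buf: ByteBuffer): Array[Byte] = {
    val bytes = new Array[Byte](buf.remaining())
    buf.get(bytes)
    bytes
  }

  private val parsers: Map[Short, Parser] = Map(
    0.toShort -> ((buf, _) => ProduceBody(remaining(buf))),   // PRODUCE
    1.toShort -> ((buf, _) => FetchBody(remaining(buf)))      // FETCH
    // ... one entry per ApiKeys id
  )

  def getRequest(requestId: Short, versionId: Short, buffer: ByteBuffer): Body =
    parsers.get(requestId) match {
      case Some(parse) => parse(buffer, versionId)
      case None        => throw new AssertionError(s"ApiKey $requestId is not handled here")
    }
}

Kafka keeps the explicit switch with a default case that throws, so a newly added ApiKeys constant that has not been wired up fails loudly at runtime with a message telling the developer to update this method.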

There are a lot of request types here; if you want to understand the specific structure of one, you can go into the corresponding class and look at it in detail.
