Kafka Source Code Analysis: Log

Source: Internet
Author: User

This section analyzes the source code of the Log object itself.

The Log class is the base class for a topic partition: all basic management actions for a topic partition live in this object. Its source file is Log.scala, in the log directory of the source tree.

The Log class encapsulates and manages a collection of LogSegment objects. First, look at the initialization code.

class Log(val dir: File,                    // dir was already described in the LogManager analysis; refer to that section
          @volatile var config: LogConfig,
          @volatile var recoveryPoint: Long = 0L,
          scheduler: Scheduler,
          time: Time = SystemTime) extends Logging with KafkaMetricsGroup {

  import kafka.log.Log._   // members of the companion object defined at the end of the same file

  /* A lock that guards all modifications to the log */
  private val lock = new Object   // lock object

  /* last time it was flushed */
  private val lastflushedTime = new AtomicLong(time.milliseconds)   // time of the last flush to disk; this variable is used throughout the management process

  /* the actual segments of the log */
  // This collection holds every segment of this topic partition and drives the whole
  // log-management process; all later operations depend on it.
  private val segments: ConcurrentNavigableMap[java.lang.Long, LogSegment] = new ConcurrentSkipListMap[java.lang.Long, LogSegment]
  loadSegments()   // load all of the partition's segments into the segments collection and check the segment files

  /* Calculate the offset of the next message */
  // activeSegment is the segment with the largest base offset, i.e. the newest one, so it is
  // the active segment; its next offset seeds the offset metadata for the next message.
  @volatile var nextOffsetMetadata = new LogOffsetMetadata(activeSegment.nextOffset(), activeSegment.baseOffset, activeSegment.size.toInt)

  val topicAndPartition: TopicAndPartition = Log.parseTopicPartitionName(name)   // parse the topic name and partition from the directory name

  info("Completed load of log %s with log end offset %d".format(name, logEndOffset))

  val tags = Map("topic" -> topicAndPartition.topic, "partition" -> topicAndPartition.partition.toString)   // label map for the monitoring metrics

  // Below are a few gauges registered through the metrics framework.
  newGauge("NumLogSegments", new Gauge[Int]  { def value = numberOfSegments }, tags)
  newGauge("LogStartOffset", new Gauge[Long] { def value = logStartOffset   }, tags)
  newGauge("LogEndOffset",   new Gauge[Long] { def value = logEndOffset     }, tags)
  newGauge("Size",           new Gauge[Long] { def value = size             }, tags)

  /** The name of this log */
  def name = dir.getName()

Above is the initialization section of the Log class. Its most important work is declaring the objects that are used throughout the whole process and loading the segment files from disk into the in-memory segments collection.
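To make the role of the segments collection concrete, here is a minimal, self-contained sketch (not Kafka's actual code; SegmentStub and the offsets are made up for illustration) of how a ConcurrentSkipListMap keyed by base offset lets the log pick out both the active segment and the segment that holds an arbitrary offset.

import java.util.concurrent.ConcurrentSkipListMap

// Stand-in for a LogSegment: only the base offset matters for this sketch.
case class SegmentStub(baseOffset: Long)

object SegmentLookupSketch {
  // Same shape as Log's `segments` field: a sorted, concurrent map keyed by base offset.
  val segments = new ConcurrentSkipListMap[java.lang.Long, SegmentStub]

  def main(args: Array[String]): Unit = {
    // Pretend loadSegments() found three segment files on disk.
    Seq(0L, 500L, 1200L).foreach(off => segments.put(off, SegmentStub(off)))

    // The "active" segment is simply the one with the largest base offset.
    val active = segments.lastEntry.getValue
    println(s"active segment starts at ${active.baseOffset}")      // 1200

    // Finding the segment that holds offset 731: floorEntry returns the
    // greatest base offset <= 731, i.e. the segment starting at 500.
    val holder = segments.floorEntry(731L).getValue
    println(s"offset 731 lives in segment ${holder.baseOffset}")   // 500
  }
}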

Here's a look at the main loading function, loadSegments.

  private def loadSegments() {
    // create the log directory if it doesn't exist
    dir.mkdirs()   // create the topic-partition directory; the upstream comment says the same thing

    // first do a pass through the files in the log directory and remove any temporary files
    // and complete any interrupted swap operations
    for(file <- dir.listFiles if file.isFile) {   // this loop checks whether a segment file needs to be cleaned up or deleted
      if(!file.canRead)
        throw new IOException("Could not read file " + file)
      val filename = file.getName
      if(filename.endsWith(DeletedFileSuffix) || filename.endsWith(CleanedFileSuffix)) {
        // if the file ends in .deleted or .cleaned, delete it
        file.delete()
      } else if(filename.endsWith(SwapFileSuffix)) {
        // a .swap file is present: delete it or rename it depending on what it is
        // we crashed in the middle of a swap operation, to recover:
        // if a log, swap it in and delete the .index file
        // if an index just delete it, it will be rebuilt
        val baseName = new File(Utils.replaceSuffix(file.getPath, SwapFileSuffix, ""))
        if(baseName.getPath.endsWith(IndexFileSuffix)) {
          file.delete()
        } else if(baseName.getPath.endsWith(LogFileSuffix)) {
          // delete the index
          val index = new File(Utils.replaceSuffix(baseName.getPath, LogFileSuffix, IndexFileSuffix))
          index.delete()
          // complete the swap operation
          val renamed = file.renameTo(baseName)
          if(renamed)
            info("Found log file %s from interrupted swap operation, repairing.".format(file.getPath))
          else
            throw new KafkaException("Failed to rename file %s.".format(file.getPath))
        }
      }
    }

    // now do a second pass and load all the .log and .index files
    for(file <- dir.listFiles if file.isFile) {   // this loop loads the segments and checks that each log file has its index
      val filename = file.getName
      if(filename.endsWith(IndexFileSuffix)) {
        // if it is an index file, make sure it has a corresponding .log file
        val logFile = new File(file.getAbsolutePath.replace(IndexFileSuffix, LogFileSuffix))
        if(!logFile.exists) {
          // an orphaned index file with no matching log file: delete the index
          warn("Found an orphaned index file, %s, with no corresponding log file.".format(file.getAbsolutePath))
          file.delete()
        }
      } else if(filename.endsWith(LogFileSuffix)) {
        // this is where the LogSegment object is created
        // if its a log file, load the corresponding log segment
        val start = filename.substring(0, filename.length - LogFileSuffix.length).toLong
        val hasIndex = Log.indexFilename(dir, start).exists   // confirm that the corresponding index file exists
        val segment = new LogSegment(dir = dir,
                                     startOffset = start,
                                     indexIntervalBytes = config.indexInterval,
                                     maxIndexSize = config.maxIndexSize,
                                     rollJitterMs = config.randomSegmentJitter,
                                     time = time)
        if(!hasIndex) {
          // the index for this log file is missing, so rebuild it with recover().
          // This is the path taken when an administrator deletes a corrupt index so that Kafka re-creates it.
          error("Could not find index file corresponding to log file %s, rebuilding index...".format(segment.log.file.getAbsolutePath))
          segment.recover(config.maxMessageSize)
        }
        segments.put(start, segment)   // add the segment to the overall collection
      }
    }

    if(logSegments.size == 0) {
      // a brand-new topic partition: no segment files exist yet, so create an empty segment starting at offset 0
      // no existing segments, create a new mutable segment beginning at offset 0
      segments.put(0L, new LogSegment(dir = dir,
                                      startOffset = 0,
                                      indexIntervalBytes = config.indexInterval,
                                      maxIndexSize = config.maxIndexSize,
                                      rollJitterMs = config.randomSegmentJitter,
                                      time = time))
    } else {
      // the partition already has segments: recover the log and set the new recovery checkpoint
      recoverLog()
      // reset the index size of the currently active log segment to allow more entries
      activeSegment.index.resize(config.maxIndexSize)
    }

    // sanity check the index file of every segment to ensure we don't proceed with a corrupt segment
    for(s <- logSegments)
      s.index.sanityCheck()   // index file check
  }
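As the second pass shows, a segment's base offset is recovered directly from its file name. The following standalone sketch (not Kafka's code; the file name below is made up) illustrates the convention: each segment file is named after the offset of its first message, zero-padded, so stripping the .log suffix and parsing the remainder yields the base offset that becomes the key in the segments collection.

object SegmentNameSketch {
  val LogFileSuffix   = ".log"
  val IndexFileSuffix = ".index"

  // mirrors: filename.substring(0, filename.length - LogFileSuffix.length).toLong
  def baseOffsetOf(filename: String): Long =
    filename.substring(0, filename.length - LogFileSuffix.length).toLong

  def main(args: Array[String]): Unit = {
    val name = "00000000000000368769.log"
    println(baseOffsetOf(name))                             // 368769
    println(name.replace(LogFileSuffix, IndexFileSuffix))   // the matching .index file name
  }
}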

Now let's see what recoverLog does.

  private def recoverLog() {
    // if we have the clean shutdown marker, skip recovery
    if(hasCleanShutdownFile) {   // hasCleanShutdownFile checks whether the clean-shutdown marker file exists
      // the shutdown was clean, so simply move the recovery checkpoint to the latest offset of the last segment
      this.recoveryPoint = activeSegment.nextOffset
      return
    }

    // okay we need to actually recover this log
    // collect every segment from the recovery checkpoint up to Long.MaxValue, i.e. check
    // whether the checkpoint already points at the last segment file
    val unflushed = logSegments(this.recoveryPoint, Long.MaxValue).iterator
    while(unflushed.hasNext) {
      // for each unflushed segment, call its recover() function; if it reports corruption,
      // truncate it and delete the remaining segments
      val curr = unflushed.next
      info("Recovering unflushed segment %d in log %s.".format(curr.baseOffset, name))
      val truncatedBytes =
        try {
          curr.recover(config.maxMessageSize)
        } catch {
          case e: InvalidOffsetException =>
            val startOffset = curr.baseOffset
            warn("Found invalid offset during recovery for log " + dir.getName + ". Deleting the corrupt segment and " +
                 "creating an empty one with starting offset " + startOffset)
            curr.truncateTo(startOffset)
        }
      if(truncatedBytes > 0) {
        // we had an invalid message, delete all remaining log
        warn("Corruption found in segment %d of log %s, truncating to offset %d.".format(curr.baseOffset, name, curr.nextOffset))
        unflushed.foreach(deleteSegment)
      }
    }
  }
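To make the truncation rule easier to follow, here is a toy sketch (hypothetical stubs, not Kafka's API): every segment at or after the recovery point is checked in order; as soon as one reports truncated bytes, everything after it is dropped.

object RecoverySketch {
  // Made-up stand-in for LogSegment; recover() returns the number of bytes truncated.
  final case class Segment(baseOffset: Long, corruptBytes: Int) {
    def recover(): Int = corruptBytes
  }

  def main(args: Array[String]): Unit = {
    val all = List(Segment(0, 0), Segment(500, 42), Segment(1200, 0))
    val recoveryPoint = 0L

    // segments that may be unflushed, ordered by base offset
    val unflushed = all.filter(_.baseOffset >= recoveryPoint).iterator
    val kept = scala.collection.mutable.ListBuffer[Segment]()

    var corrupted = false
    while (unflushed.hasNext && !corrupted) {
      val curr = unflushed.next()
      kept += curr                 // the current segment is kept (truncated to its valid portion)
      if (curr.recover() > 0) {
        // corruption found: the real code deletes every remaining segment,
        // which here simply means we stop keeping them
        corrupted = true
      }
    }
    println(kept.map(_.baseOffset))   // ListBuffer(0, 500)
  }
}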

That covers recoverLog's processing. The Log class is essentially a wrapper around the LogSegment objects of the same name; LogSegment itself will be analyzed in subsequent sections.
