Summary of Kafka operation and maintenance

Source: Internet
Author: User

1. The problem of excessive log volume

After Kafka has been running for a while, you may notice that the host's disk usage keeps growing slowly, even though the amount of data it holds is within the retention thresholds you set previously.

The culprit is Kafka's own application log, which can quietly fill the disk.
The default configuration in ~/kafka_2.11-0.9.0.0/config/log4j.properties is as follows:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
.........

You can see that Kafka's own log is rolled over hourly, and there is no mechanism to clear old backups automatically. Since these backups are never deleted, they accumulate indefinitely and can skew your estimates of how much actual data Kafka holds. What we usually want is to keep only the logs for the last N days. log4j does not offer such a function directly, so without changing the source code there are two ways to achieve the goal:

1. Write a crontab script that deletes old log backups automatically;

2. Modify log4j.properties to roll by size and keep a fixed number of backups.

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.append=true
log4j.appender.kafkaAppender.MaxBackupIndex=2
log4j.appender.kafkaAppender.MaxFileSize=5MB
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
...

As shown above, in my own production environment I keep two backups, rolling over once a file reaches 5 MB.
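Option 1, the crontab script, might look like the following sketch. The log directory and the 7-day retention period are assumptions; adjust both to your installation:

```shell
#!/usr/bin/env bash
# Hypothetical cleanup script for Kafka's own log backups.
# Deletes rolled server.log.* files older than 7 days.
# KAFKA_LOGS_DIR is an assumed path; point it at your kafka.logs.dir.
KAFKA_LOGS_DIR="${KAFKA_LOGS_DIR:-/opt/kafka/logs}"

if [ -d "$KAFKA_LOGS_DIR" ]; then
  # -mtime +7: only files last modified more than 7 days ago
  find "$KAFKA_LOGS_DIR" -name "server.log.*" -mtime +7 -type f -delete
fi
```

A crontab entry such as `0 3 * * * /path/to/cleanup.sh` would then run it nightly.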

2. log.retention.bytes is partition-level

The official documentation describes this setting as "The maximum size of the log before deleting it", which is ambiguous: the value limits the size of a single partition's log, not the total size of a topic's log. A topic with N partitions can therefore hold up to N times this value.

3. log.segment.delete.delay.ms settings

The amount of time to wait before deleting a file from the filesystem.

The default is 60000 ms. That is, once the data reaches the configured threshold, segments are not removed immediately; Kafka waits this long before deleting the file from the filesystem. During performance testing, if data is sent at a high rate, the monitored data folder can therefore stay above the threshold while deletions are pending. Setting this delay to a smaller value makes disk usage track the threshold more closely.
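Putting the two broker settings together, a server.properties fragment might look like this. The values are illustrative examples, not recommendations:

```properties
# Per-partition cap: each partition's log may grow to ~1 GB before old
# segments become eligible for deletion. A topic's total size is roughly
# this value multiplied by its partition count.
log.retention.bytes=1073741824

# Wait only 1 second (default 60000 ms) before physically removing a
# deleted segment file, so disk usage drops promptly during load tests.
log.segment.delete.delay.ms=1000
```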

