Collecting Docker container logs with a Flume-based log system

I recently added support for Docker container logs to our log collection feature. This article briefly covers how I chose a strategy and how the logs are handled.

About Docker container logs

I won't say much about Docker itself; it has been hot for a couple of years now. Recently I have been deploying some components of our log system into Docker. Naturally, once a component runs inside a container, many things become constrained by the container, log files being one of them.

When a component is deployed in Docker, you can view its standard-output log from the command line with the following command:

docker logs ${containerName}

The output looks like the component's ordinary, plain-text log lines.

However, this approach is not suited to collecting logs continuously in real time. Fortunately, Docker is friendly here: it also persists these logs to files named after the container ID. With a standard installation, the log file lives at the following location on the host file system:

/var/lib/docker/containers/${fullContainerId}/${fullContainerId}-json.log

How do you obtain this fullContainerId? Simply put, you can view the full container ID with the following command:

docker ps --no-trunc

You can then open the log file with vi. However, the file-based log differs from the standard-output log: in the file, each log record is a JSON object wrapping one line of standard output. It looks something like this:
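For reference, a line written by Docker's json-file logging driver has roughly the following shape (the field values below are illustrative, not taken from the original article):

    {"log":"[2016-01-01 12:00:00] INFO com.example.Foo - something happened\n","stream":"stdout","time":"2016-01-01T04:00:00.000000000Z"}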

This amounts to a two-layer log format: the outer layer is Docker's wrapper, whose format is fixed, while the inner layer varies with the specific component. The outer format is of no use to us, yet we have to parse it first before we get back to the familiar situation of collecting the component's own format.

If that is one problem Docker brings to our log collection, here is a trickier one: correlating multi-line logs. A common example is a program's exception stack trace. Because a stack trace is printed to standard output across multiple lines, in the Docker log it gets broken up into multiple records, just like the example above.

In fact, for ordinary (non-Docker) log files we already support collecting multi-line correlated logs, exception stacks being the main case. The problem now is that Docker not only splits a correlated log into multiple records, it also wraps its own format around them. Without parsing, we cannot see the real log delimiter at all, and the delimiter is what tells us where one log entry truly ends and the next begins within multi-line content. For log4j logs, for example, we check whether a line's prefix is the "[" that begins a new log entry or whether the line should be appended to the previous entry, as sketched below.
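As a rough sketch (not the project's actual code), the boundary check for a log4j-style layout that starts each entry with "[" might look like this:

    import java.util.regex.Pattern;

    // Minimal sketch: decide whether a plain-text line begins a new log entry
    // or continues the previous one (e.g. a stack-trace line). The "[" prefix
    // is an assumption based on the log4j layout mentioned above.
    public class LogEntryBoundary {
        private static final Pattern NEW_ENTRY_PREFIX = Pattern.compile("^\\[");

        public static boolean startsNewEntry(String line) {
            return NEW_ENTRY_PREFIX.matcher(line).lookingAt();
        }

        public static void main(String[] args) {
            System.out.println(startsNewEntry("[2016-01-01 12:00:00] INFO ..."));        // true: new entry
            System.out.println(startsNewEntry("\tat com.example.Foo.bar(Foo.java:42)")); // false: append to previous
        }
    }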

Processing approach: no parsing on the client

Before we ran into Docker container logs, we followed a rule: the agent only collects and does not parse; parsing happens in Storm. For the multi-line correlated logs of a Docker container described above, a non-parsing client naturally has no way to recognize the correlation, so it can only collect line by line and leave parsing to the server side. And if you parse on the server side, you must preserve the order of the logs within the same log file.

    • Queue-based ordering

Let me explain. The queue here is a message queue in the message middleware where logs are staged after collection. Giving each log file its own queue guarantees that logs are still ordered before parsing, but the cost is obviously high: with one queue per log type per node, many log types across many nodes quickly multiply the number of queues in the middleware, and the performance overhead becomes significant. Another problem is that keeping each message queue ordered is not enough by itself; the consumer (such as Storm) must also process each queue with dedicated logic. If one consumer is responsible for several different log queues, it again cannot recognize the per-file order. But if consumers map one-to-one to log queues, a consumer like Storm loses flexibility in scaling to new log types, because Storm's real-time processing is organized as topologies, and a topology contains both the input (spout) and the processing logic. In that case, every time a log queue is added, the topology must be restarted so that it picks up the new spout.

    • Ordering based on an auto-incrementing sequence number

If you do not maintain the order of the logs of a single log file through an external data structure, the only alternative is to mark each log with a sequence number. This approach lets logs sit unordered and mixed together in the message middleware, but it has drawbacks:

(1) A sequence number alone is not enough; additional identifiers are needed to distinguish logs of the same type coming from different hosts in a clustered environment (see the sketch after this list).

(2) To reassemble correlated logs, the logs must first be written to storage, then restored to their original order by a sorting mechanism, and only then merged sequentially or processed individually.

Both of these points are tricky to handle.
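The sketch below illustrates what point (1) implies for this approach: besides a per-file sequence number, each event would also have to carry host and file identifiers as Flume event headers, so that the server side could regroup and re-order lines per source file. The class and header names are my own assumptions, not code from the article:

    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.atomic.AtomicLong;

    import org.apache.flume.Event;
    import org.apache.flume.event.EventBuilder;

    // Hypothetical sketch of the "auto-incrementing sequence" approach: tag every
    // collected line with host, source file and a per-file sequence number.
    public class SequencedEventFactory {
        private final String host;
        private final String logFile;
        private final AtomicLong seq = new AtomicLong();

        public SequencedEventFactory(String host, String logFile) {
            this.host = host;
            this.logFile = logFile;
        }

        public Event wrap(String line) {
            Event event = EventBuilder.withBody(line, StandardCharsets.UTF_8);
            // assumed header names; the server side would sort by (host, file, seq)
            event.getHeaders().put("host", host);
            event.getHeaders().put("file", logFile);
            event.getHeaders().put("seq", String.valueOf(seq.getAndIncrement()));
            return event;
        }
    }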

Processing approach: parsing the Docker log format on the client

The problems with the no-parsing-on-the-client approach are analyzed above; the alternative is to parse on the client. Because Docker's format is fixed, this is relatively painless: we can choose to parse only the outer layer, i.e. the Docker container log format, so as to recover the original log (note that the original log is plain text). Once we have the original log, multi-line correlated logs can be parsed using the original log delimiter, and no other problems arise. There is no doubt, though, that this requires a customized log collector.

Customizing Flume

In Flume, the component that reads log records is called an EventDeserializer. Here we use a custom MultiLineDeserializer, based on the built-in LineDeserializer.

First we define a configuration item to identify whether the log was generated by Docker:

wrappedByDocker = true
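For context, if this deserializer is plugged into a spooling-directory-style source, the agent configuration might look roughly like the following; the source type, directory and class names here are my own assumptions, not taken from the article:

    agent.sources.dockerLogs.type = spooldir
    agent.sources.dockerLogs.spoolDir = /path/to/collected/container/logs
    # fully qualified name of the custom deserializer's Builder (assumed package)
    agent.sources.dockerLogs.deserializer = com.example.flume.MultiLineDeserializer$Builder
    agent.sources.dockerLogs.deserializer.wrappedByDocker = true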

Next, we define its corresponding Java Bean according to Docker's JSON format:

    public static class DockerLog {

        private String log;
        private String stream;
        private String time;

        public DockerLog() {
        }

        public String getLog() {
            return log;
        }

        public void setLog(String log) {
            this.log = log;
        }

        public String getStream() {
            return stream;
        }

        public void setStream(String stream) {
            this.stream = stream;
        }

        public String getTime() {
            return time;
        }

        public void setTime(String time) {
            this.time = time;
        }
    }

Then, whenever we read a line, if the log was generated by Docker we first use Gson to deserialize it into a Java object, then take the log field we care about to get the original log text; the rest of the process is the same as before.

    in.tell();
    String preReadLine = readSingleLine();
    if (preReadLine == null) return null;

    // if the log is wrapped by the docker log format,
    // we should extract the original log first
    if (wrappedByDocker) {
        DockerLog dockerLog = GSON.fromJson(preReadLine, DockerLog.class);
        preReadLine = dockerLog.getLog();
    }

In this way, what the agent collects is the original log, and the subsequent parsing logic remains consistent with what we already had.

The complete Flume customization is open-sourced on GitHub as flume-customized.
