
I am trying to create a centralized logging system using Fluentd for a Docker environment. Currently, I am able to send Docker logs to Fluentd using the Fluentd Docker logging driver, which is a much cleaner solution than reading the Docker log files with the in_tail plugin. However, I am currently facing an issue with multi-line logs.

[screenshot: multi-line log entries displayed out of order]

As you can see from the picture above, the multi-line logs are out of order, which is very confusing for users. Is there any way this can be solved?

Thanks.


cheng wee
  • Just to add some comments on this topic after doing further research: the out-of-order issue is due to Fluentd's time resolution (no sub-second support at the moment). Thanks to this answer [link](http://stackoverflow.com/questions/27928479/fluentd-loses-milliseconds-and-now-log-messages-are-stored-out-of-order-in-elast), I was able to get the records displayed in order, so at least users will not be as confused when reading the logs. – cheng wee Sep 21 '15 at 08:17
  • For another solution to the millisecond issue, check this blog post http://work.haufegroup.io/log-aggregation/#timestamp-fix – dutzu Jul 14 '17 at 05:36
  • Do you have a solution yet? I found this link https://www.fluentd.org/guides/recipes/docker-logging about merging multi-line logs in Docker before they are sent to Fluentd, but the implementation is very specific to the log format. – Nextlink Sep 18 '17 at 18:47

3 Answers


Using the fluent-plugin-concat plugin helped me fix the above problem.

Add these lines to fluent.conf:

 <filter **>
   @type concat
   key log
   stream_identity_key container_id
   multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/
   multiline_end_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/
 </filter>

My regular expression checks for a timestamp at the start of each line: every log line begins with a date and time (pay attention to "log":"2017-09-21 15:03:27.289 in the records below).

2017-09-21T15:03:27Z    tag     {"container_id":"11b0d89723b9c812be65233adbc51a71507bee04e494134258b7af13f089087f","container_name":"/bel_osc.1.bc1k2z6lke1d7djeq5s28xjyl","source":"stdout","log":"2017-09-21 15:03:27.289  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/8.5.6"}
2017-09-21T15:03:28Z    tag     {"container_id":"11b0d89723b9c812be65233adbc51a71507bee04e494134258b7af13f089087f","container_name":"/bel_osc.1.bc1k2z6lke1d7djeq5s28xjyl","source":"stdout","log":"2017-09-21 15:03:28.191  INFO 1 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext"}
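Not part of the original answer, but here is a minimal Python sketch to check that the start regexp matches only timestamped lines. The first sample is taken from the records above; the stack-trace line is made up for illustration:

```python
import re

# Same pattern used for multiline_start_regexp in the filter above
start = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}")

first = "2017-09-21 15:03:27.289  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine"
stack = "\tat org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:300)"

print(bool(start.match(first)))  # True  -> begins a new record
print(bool(start.match(stack)))  # False -> concatenated onto the previous record
```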

Also, I had to add the lines below to my Dockerfile to install the plugin:

RUN ["gem", "install", "fluent-plugin-concat", "--version", "2.1.0"] 
#Works with Fluentd v0.14-debian

This regular expression doesn't work well when an exception occurs, but it is still much better than before. See the fluent-plugin-concat documentation for reference.

Abhishek Galoda

Take a look at multiline parsing in their documentation: http://docs.fluentd.org/articles/parser-plugin-overview#

You basically have to specify a regex that would match the beginning of a new log message and that will enable fluentd to aggregate multiline log events into a single message.

Example for a usual java stacktrace from their docs:

 format multiline
 format_firstline /\d{4}-\d{1,2}-\d{1,2}/
 format1 /^(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}) \[(?<thread>.*)\] (?<level>[^\s]+)(?<message>.*)/
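For illustration only (Python, not Fluentd internals), this is roughly what format_firstline does: a line matching the pattern opens a new event, and every other line is folded into the current one. The sample lines are made up:

```python
import re

# Same pattern as format_firstline above: a date at the start of the line
firstline = re.compile(r"\d{4}-\d{1,2}-\d{1,2}")

lines = [
    "2015-11-25 11:35:00 [main] ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2015-11-25 11:35:01 [main] INFO recovered",
]

events, current = [], []
for line in lines:
    # A matching line flushes the event collected so far and starts a new one
    if firstline.match(line) and current:
        events.append("\n".join(current))
        current = []
    current.append(line)
if current:
    events.append("\n".join(current))

print(len(events))  # 2 -> the stacktrace is folded into the first event
```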

dutzu
    According to fluentd's docs "`multiline` works with only in_tail plugin." which means that when you are using the `@type forward` for input from docker this won't work. – Ash Berlin-Taylor Nov 25 '15 at 11:35
  • @AshBerlin - You can also use it in multiple plugins, the core plugins that support parsing are described here http://docs.fluentd.org/articles/parser-plugin-overview#list-of-core-input-plugins-with-parser-support – dutzu Nov 25 '15 at 14:50
  • @AshBerlin - Also, perhaps it is possible to replace the in_forward plugin with in_tcp, they are basically the same thing only in_forward also listens on UDP. And in_tcp is one of the plugins that support format parsers out-of-the-box – dutzu Nov 25 '15 at 15:59
  • Ah I'll give that a go. Knowing that might also help us deal with the case where we've got our containers producing JSON which gets put as a string in the "log" fields by docker – Ash Berlin-Taylor Nov 25 '15 at 19:04
  • @AshBerlin I haven't gone that route but I would suggest you also look into the fluent-plugin-parser. You can just pass along all the events coming from the docker instance and then try to multiline parse them in the filter before pushing them out – dutzu Nov 26 '15 at 07:28

I know this is not an "answer" to the fluentd question, but this guide solves the problem with Logstash: http://www.labouisse.com/how-to/2015/09/14/elk-and-docker-1-8

He adds JSON support by putting

    json {
        source => "log_message"
        target => "json"
    }

to his filter after parsing a log line

I never found a solution for fluentd, so I went with this approach instead.

Updated link

mathiasbn