
I want some of the fields generated by the logstash-logback-encoder to be wrapped inside another field. Can this be done purely through the XML configuration in logback-spring.xml, or do I have to implement a class and then refer to it in the configuration?

I tried reading about implementing the Factory and Decorator approaches, but that didn't seem to get me anywhere.

<appender name="FILE"
    class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/Users/name/dev/test.log</file>
    <rollingPolicy
        class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover -->
        <fileNamePattern>/Users/name/dev/log/test.%d{yyyy-MM-dd}.log</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"component":"webserver","datacenter":"ord"}</customFields>
    </encoder>
</appender>

The JSON I currently get when something is logged:

{
  "@timestamp": "2019-07-18T18:12:49.431-07:00",
  "@version": "1",
  "message": "Application shutdown requested.",
  "logger_name":     "org.springframework.boot.admin.SpringApplicationAdminMXBeanRegistrar$SpringApplicationAdmin",
  "thread_name": "RMI TCP Connection(2)-127.0.0.1",
  "level": "INFO",
  "level_value": 20000,
  "component": "webserver",
  "datacenter": "ord"
}

What I want it to be is:

{
  "@timestamp": "2019-07-18T18:12:49.431-07:00",
  "@version": "1",
  "component": "webserver",
  "datacenter": "ord",
  "data": {
    "message": "Application shutdown requested.",
    "logger_name": "org.springframework.boot.admin.SpringApplicationAdminMXBeanRegistrar$SpringApplicationAdmin",
    "thread_name": "RMI TCP Connection(2)-127.0.0.1",
    "level": "INFO",
    "level_value": 20000
  }
}

As you can see, a select set of fields is wrapped within 'data' instead of sitting in the outer layer.

Kirit

1 Answer


Instead of using net.logstash.logback.encoder.LogstashEncoder, you'll need to use a net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder and configure its set of providers. Use the nestedField provider to create the nested data field.

Configuring LoggingEventCompositeJsonEncoder is more complex than configuring LogstashEncoder, because LoggingEventCompositeJsonEncoder starts with no providers configured, and you have to build it up with all the providers you want. LogstashEncoder is just a subclass of LoggingEventCompositeJsonEncoder with a pre-configured set of providers.

<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
  <providers>
    <timestamp/>
    <version/>
    <pattern>
      <pattern>
        {
          "component": "webserver",
          "datacenter":"ord"
        }
      </pattern>
    </pattern>
    <nestedField>
      <fieldName>data</fieldName>
      <providers>
        <message/>
        <loggerName/>
        <threadName/>
        <logLevel/>
        <callerData/>
        <stackTrace/>
        <context/>
        <mdc/>
        <tags/>
        <logstashMarkers/>
        <arguments/>
      </providers>
    </nestedField>
  </providers>
</encoder>

Be sure to check out the provider configuration documentation for the various configuration options for each provider.
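
Most of the standard field providers also accept a fieldName option, which is how standard field names are customized with this encoder (rather than through the encoder-level fieldNames block that LogstashEncoder offers). A minimal sketch; the names msg and severity below are arbitrary examples, and the elements go inside the encoder's <providers> list (or inside the nestedField's <providers>, for fields nested under data):

<providers>
  <message>
    <fieldName>msg</fieldName>
  </message>
  <logLevel>
    <fieldName>severity</fieldName>
  </logLevel>
</providers>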

Phil Clay
    Wow, thanks @Phil. This is exactly what I was looking for. – Kirit Jul 20 '19 at 03:06
  • Getting lower level with this CompositeJsonEncoder, earlier I was able to rename the providers with custom_name is this feature gone now? – Kirit Jul 20 '19 at 03:08
  • I guess the question is: Does 'LoggingEventCompositeJsonEncoder' support 'Customizing Standard Field Names'? – Kirit Jul 20 '19 at 03:24
    Yes, but the field names are configured under the provider that emits the field, similar to how the nestedField’s fieldName is configured in the example. – Phil Clay Jul 20 '19 at 03:49
  • Thanks for all of the help @Phil Clay. I was able to achieve what I wanted with your help. I appreciate everything – Kirit Jul 22 '19 at 04:00
  • A doubt: If you are using the LogstashEncoder inside RollingFileAppender where do you let logstash know that this logs have to be indexed into ELK. I have learnt that to index the log into ELK I must use the LogstashTcpSocketAppender wherein I can specify the logstash destination ip/port. So in your approach above how would the logstash know it has to index these logs? Through a separate file? – raikumardipak May 09 '20 at 18:17
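
Regarding the last comment: the RollingFileAppender above only writes the JSON to a local file, so something else has to read that file and forward it to the ELK stack (for example, a Logstash file input or a shipper such as Filebeat). To send events directly to Logstash instead, the same encoder can be placed inside a LogstashTcpSocketAppender. A minimal sketch; the destination host and port are hypothetical and would need to point at a Logstash tcp input:

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- hypothetical destination; point this at your Logstash tcp input -->
    <destination>logstash.example.com:5044</destination>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp/>
            <version/>
            <nestedField>
                <fieldName>data</fieldName>
                <providers>
                    <message/>
                    <logLevel/>
                </providers>
            </nestedField>
        </providers>
    </encoder>
</appender>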