
I have a setup with Fluentd and Elasticsearch running on a Docker engine. I have swarms of services which I would like to log to Fluentd.

What I want to do is create a tag for each service that I run and use that tag as an index in Elasticsearch. Here's the setup that I have:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.service1>
  @type elasticsearch
  host "172.20.0.3"
  port 9200
  index_name service1
  type_name fluentd
  flush_interval 10s
</match>

<match docker.service2>
  @type elasticsearch
  host "172.20.0.3"
  port 9200
  index_name service2
  type_name fluentd
  flush_interval 10s
</match>

and so forth.

It would be annoying to have to add a new match block for every single service I create, because I want to be able to add new services without updating my Fluentd configuration. Is there a way to do something like this:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type elasticsearch
  host "172.20.0.3"
  port 9200
  index_name $(TAG)
  type_name fluentd
  flush_interval 10s
</match>

Where I use a $(TAG) variable to indicate that I want the Tag name to be the name of the index?

I've tried ${tag_parts[0]} from an answer I found here, but it was taken literally: my index was named "${tag_parts[0]}".

Thanks in advance.


2 Answers

I figured out that I needed to use a different Elasticsearch plugin type. Here's an example of the match block that I used:

<match **>
  @type elasticsearch_dynamic
  host "172.20.0.3"
  port 9200
  type_name fluentd
  index_name ${tag_parts[2]}
  flush_interval 10s
  include_tag_key true
  reconnect_on_error true
</match>

By using @type elasticsearch_dynamic instead of @type elasticsearch, the ${tag_parts} placeholder is actually interpolated instead of being written literally.

The include_tag_key option includes the tag itself in the JSON document sent to Elasticsearch.
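To see what ${tag_parts[2]} resolves to, note that the plugin splits the event's tag on dots. A minimal Python sketch (the tag "docker.compose.service1" is a hypothetical example, not from the question):

```python
# ${tag_parts[N]} is the N-th dot-separated component of the event tag.
tag = "docker.compose.service1"   # hypothetical example tag
tag_parts = tag.split(".")
print(tag_parts[2])               # -> "service1", used as the index name
```

Note that a two-part tag like docker.service1 has no tag_parts[2]; the index in the example above only works if your tags have at least three components.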

It helps to read the documentation

  • I want to do the same but elasticsearch_dynamic is flagged as deprecated and they discourage this approach now. Do you still have this need and have you found a better approach? – jishi Nov 29 '21 at 15:38

I had the same problem, and the elasticsearch_dynamic approach in the other answer is deprecated. What I ended up doing was this:

Add a record_transformer filter that adds the desired index name as a key on each record:

<filter xxx.*>
  @type record_transformer
  enable_ruby true
  <record>
    index_name ${tag_parts[1]}-${time.strftime('%Y%m')}
  </record>
</filter>
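The filter above builds an index name from the second tag component plus the event's year and month. A quick Python sketch of what that Ruby expression computes (the tag "xxx.myservice" is a hypothetical example):

```python
from datetime import datetime, timezone

# Mirrors ${tag_parts[1]}-${time.strftime('%Y%m')} from the filter above.
tag_parts = "xxx.myservice".split(".")          # hypothetical tag
index_name = f"{tag_parts[1]}-{datetime.now(timezone.utc).strftime('%Y%m')}"
print(index_name)                               # e.g. "myservice-202111"
```

This gives you one index per service per month, which keeps indices from growing without bound.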

and then in the Elasticsearch output you configure:

<match xxx.*>
  @type elasticsearch-service
  target_index_key index_name
  index_name fallback-index-%Y%m
</match>

The fallback index_name is only used if a record is missing the index_name key, which should never happen with the filter above in place.
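A minimal Python sketch of the routing rule that target_index_key implies (this is not the plugin's actual code, just the behavior described above; the record values are hypothetical):

```python
def resolve_index(record, target_index_key="index_name",
                  fallback="fallback-index-202111"):
    """Pick the index for one record: use the value stored under
    target_index_key if present, otherwise fall back to the default.
    The key is removed from the record so it isn't indexed as a field."""
    return record.pop(target_index_key, fallback)

print(resolve_index({"index_name": "myservice-202111", "msg": "hi"}))
print(resolve_index({"msg": "record without the key"}))
```

The first call returns "myservice-202111"; the second falls back to "fallback-index-202111".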
