I have a simple Fluentd-Elasticsearch-Kibana setup that shows a very strange behaviour: Fluentd seems to stop sending data to Elasticsearch after it has been up for about 3 hours.
I run everything from a simple docker-compose file:
version: '2'
services:
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24225:24225"
      - "24225:24225/udp"
  elasticsearch:
    image: elasticsearch
    expose:
      - 9200
    ports:
      - "9200:9200"
  kibana:
    image: kibana
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
The Fluentd image is built from the following Dockerfile:
# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri", "--version", "1.9.2"]
and it uses the following configuration file:
<source>
  @type forward
  port 24225
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
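For reference, the other machines ship their logs with a standard forward output pointed at this instance. A minimal sketch of the client-side configuration (the address and tag below are placeholders, not my real values):

<match myapp.**>
  @type forward
  flush_interval 5s
  <server>
    # placeholder: public address of the machine running the compose stack
    host 203.0.113.10
    port 24225
  </server>
</match>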
Everything runs in the cloud, and of course on the same machine. Fluentd receives the logs from those other machines/instances without any issue. The problem is that roughly every 3 hours Fluentd suddenly stops forwarding the logs to my Elasticsearch. No error message, nothing. If I restart the Fluentd container, everything works again for the next 3 hours.
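This is roughly how I check whether documents are still arriving; the index names come from the logstash_prefix and logstash_dateformat settings above (the date below is just an example):

curl -s 'http://localhost:9200/_cat/indices/fluentd-*?v'
curl -s 'http://localhost:9200/fluentd-20170101/_count'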
I searched for this kind of behaviour but could not find an explanation anywhere, or anyone with a situation close to this one. There was one person who had something that resembled this problem, but in the end it turned out to be an Elasticsearch issue, not a Fluentd one...