
Did anyone manage to get `kubernetes.namespace_name` as the index name? I tried the following and it's not working:

index_name ${kubernetes.namespace_name}.%Y%m%d
Minisha
    Have you tried `${record['kubernetes']['namespace_name']}`? Also make sure the key exists in the record. More information [here](https://github.com/uken/fluent-plugin-elasticsearch#dynamic-configuration). – ealain Jun 28 '21 at 13:47
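For reference, the dynamic-configuration docs linked above also describe a placeholder-based approach that avoids the deprecated `elasticsearch_dynamic` type: a nested record key can be used in `index_name` as long as it is also listed as a buffer chunk key. A minimal sketch (the host and port values are assumptions):

```
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  # resolved per buffer chunk because $.kubernetes.namespace_name is a chunk key below
  index_name ${$.kubernetes.namespace_name}.%Y%m%d
  <buffer tag, time, $.kubernetes.namespace_name>
    timekey 1d
  </buffer>
</match>
```

The `%Y%m%d` part is resolved from the `time` chunk key together with `timekey`.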

2 Answers


Please follow the steps below for a complete installation. I added the following line to the fluentd.conf ConfigMap:

logstash_prefix clustername-${record['kubernetes']['namespace_name']}

Fluentd-DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccountName: fluentd                           
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch 
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_ELASTICSEARCH_USER
            value: "user"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "password"
          - name: FLUENT_ELASTICSEARCH_CLUSTER_NAME
            value: "clustername"
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-fluentd
          mountPath: /fluentd/etc/fluent.conf
          subPath: fluent.conf
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-fluentd
        configMap:
          name: fluentd-conf                      
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Create the fluent.conf ConfigMap (the DaemonSet above mounts it from a ConfigMap named `fluentd-conf`, with key `fluent.conf`):

kubectl create cm fluentd-conf --from-file fluent.conf

<match fluent.**>
    # this tells fluentd to not output its log on stdout
    @type null
</match>

# Fetch all container logs
<source>
  @id kubernetes-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/kubernetes-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
  @id raw.kubernetes
  @type detect_exceptions
  remove_tag_prefix raw
  message log
  stream stream
  multiline_flush_interval 5
  max_bytes 500000
  max_lines 1000
</match>

# Concatenate multi-line logs
<filter **>
  @id filter_concat
  @type concat
  key message
  multiline_end_regexp /\n$/
  separator ""
</filter>

# Add records with Kubernetes metadata
<filter kubernetes.**>
  @id filter_kubernetes_metadata
  @type kubernetes_metadata
</filter>

# Fixes json fields for Elasticsearch
<filter kubernetes.**>
  @id filter_parser
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field true
  <parse>
    @type multi_format
    <pattern>
      format json
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</filter>
<match **>
   @type elasticsearch_dynamic
   @id out_es
   @log_level info
     
   include_tag_key true
   host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
   port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
   path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
   scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
   ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
   user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
   password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
   reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
   logstash_prefix clustername-${record['kubernetes']['namespace_name']}
   logstash_format true
   type_name fluentd
   buffer_chunk_limit "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
   buffer_queue_limit "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
   flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
   max_retry_wait "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
   disable_retry_limit
   num_threads "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
</match>
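If you prefer a declarative manifest over `kubectl create cm`, the same file can be shipped as a ConfigMap object; a sketch assuming the `logging` namespace used by the DaemonSet:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-conf
  namespace: logging
data:
  fluent.conf: |
    # paste the full fluentd configuration shown above as the value of this key
```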

Official Repo: https://github.com/fluent/fluentd-kubernetes-daemonset

If you want to split fluentd.conf into separate files, you can use the following `@include` directives in fluentd.conf and add each file as a ConfigMap and volume in the DaemonSet.

Include directives

@include systemd.conf
@include kubernetes.conf

ConfigMap for the above files

Add a ConfigMap, similar to the `fluentd-conf` one above, containing the separated config files.
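As a sketch, the separated files could live in one extra ConfigMap (all names here are hypothetical) and be mounted under /fluentd/etc next to fluent.conf so the `@include` directives can find them:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-extra-conf
data:
  systemd.conf: |
    # systemd-related sources and filters
  kubernetes.conf: |
    # kubernetes-related sources and filters
```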

Anurag Jain
  • where does the record come from? ${record['kubernetes']['namespace_name']} – Minisha Apr 30 '20 at 01:39
  • Hi, I have modified the fluentd.conf file with Kubernetes metadata. With the lines below you can enable the Kubernetes metadata and then use `record` (a Ruby expression) to fetch the data. If you don't have the plugin installed, you can add it to the Gemfile with `gem 'fluent-plugin-elasticsearch', '4.0.7'` and build a new image, or better, use the image I mentioned, as that is the latest stable one. – Anurag Jain Apr 30 '20 at 07:23

In my case I did it by label: I added a label called fluentd: "true" to the objects (Deployment, StatefulSet, etc.).

Example:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: your-name-space
  labels:
    app: kafka
    version: "2.6.0"
    component: queues
    part-of: appsbots
    managed-by: kubectl
    fluentd: "true"

For the fluent.conf file, I created a ConfigMap for the fluentd DaemonSet, as shown below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
data:
  fluent.conf: |-
    <match fluent.**>
        # this tells fluentd to not output its log on stdout
        @type null
    </match>

    # here we read the logs from Docker's containers and parse them
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Kubernetes metadata
    <filter kubernetes.**>
        @type kubernetes_metadata
    </filter>

    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>

    # <match kubernetes.var.log.containers.**kube-logging**.log>
    # @type null
    # </match>

    <match kubernetes.var.log.containers.**_istio-proxy_**>
      @type null
    </match>
    

    <filter kubernetes.**>
      @type grep
      <regexp>
        key $["kubernetes"]["labels"]["fluentd"]        
        pattern true
      </regexp>
    </filter>


    <filter kubernetes.**>
      @type grep
      <exclude>
        key $["kubernetes"]["labels"]["fluentd"]        
        pattern false
      </exclude>
    </filter>
    


    # Just an example of the kinds of variables that can be used here. This part
    # does not apply as-is; do your config with ENV vars.
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_format true
      buffer_chunk_limit "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
      buffer_queue_limit "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
      flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
      max_retry_wait "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
      disable_retry_limit
      num_threads "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
      <buffer>
           @type file
           path /var/log/fluentd-buffers/kubernetes.system.buffer
           flush_mode interval
           retry_type exponential_backoff
           flush_thread_count 2
           flush_interval 5s
           retry_forever true
           retry_max_interval 30
           chunk_limit_size 2M
           queue_limit_length 32
           overflow_action block
       </buffer>
    </match>

And now the fluentd DaemonSet file. The image I am using is based on v1.10; you can also use the base image referenced in the comment next to it.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
  annotations:
    configmap.reloader.stakater.com/reload: "fluentd-config"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: didevlab/mutpoc:fluentd #base--> fluent/fluentd-kubernetes-daemonset:v1.10-debian-elasticsearch7-1
        imagePullPolicy: Always
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-svc.{{ name_corp }}-{{ app_enviroment }}"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_UID
            value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:        
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-fluentd
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:      
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-fluentd
        configMap:
          name: fluentd-config

\o/ That's it, good luck!

psicopante