
I am trying to use Fluentd to push logs to Loki via the fluent-plugin-grafana-loki plugin. Fluentd cannot keep up in real time once the log rate exceeds 24,000 lines/sec. I need help configuring Fluentd so that it tails and ships logs quickly and in real time.

This is my DaemonSet manifest, fluentd_daemonset.yaml:

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: loki-fluentd
  labels:
    app: fluentd
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      app: fluentd
      version: v1
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        app: fluentd
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-forward-1 
        command: 
          - /bin/sh 
          - '-c'
          - >
            fluent-gem i fluent-plugin-grafana-loki-licence-fix ;
            fluent-gem i fluent-plugin-parser-cri --no-document ;
            tini /fluentd/entrypoint.sh;
        resources:
          limits:
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config
        configMap:
          name: fluentd-config
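Note that the fluent.conf below reads several FLUENT_LOKI_BUFFER_* environment variables, but the DaemonSet never sets them, so the `|| '...'` defaults always apply. A minimal sketch of wiring them up on the fluentd container (same level as resources; the values shown are placeholders to tune, not recommendations):

        env:
        - name: FLUENT_LOKI_BUFFER_FLUSH_THREAD_COUNT
          value: "8"
        - name: FLUENT_LOKI_BUFFER_CHUNK_LIMIT_SIZE
          value: "1m"
        - name: FLUENT_LOKI_BUFFER_QUEUE_LIMIT_LENGTH
          value: "32"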

This is the ConfigMap, fluentd-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: loki-fluentd
  labels:
    app: fluentd
data: 
  fluent.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      path /var/log/containers/loggen-*.log
      # exclude_path ["/var/log/containers/fluentd*"]
      pos_file /tmp/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type cri
        time_format %Y-%m-%dT%H:%M:%S.%L%z
      </parse>
    </source>
    <match fluentd.**>
      @type null
    </match>
    <match kubernetes.var.log.containers.**fluentd**.log>
      @type null
    </match>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.var.log.containers.**>
      @type record_transformer
      enable_ruby true
      remove_keys kubernetes, docker
      <record>
        app ${ record.dig("kubernetes", "labels", "app") }
        job ${ record.dig("kubernetes", "labels", "app") }
        namespace ${ record.dig("kubernetes", "namespace_name") }
        pod ${ record.dig("kubernetes", "pod_name") }
        container ${ record.dig("kubernetes", "container_name") }
        filename ${ record.dig("kubernetes", "filename")}
        workers ${ record.dig("kubernetes", "worker") }
      </record>
    </filter>
    
    <match kubernetes.var.log.containers.**>
      @type copy
      # NOTE: a bare <label> block is not valid inside a <match> section; to
      # route a copy of the stream to a top-level <label @...> section, use a
      # <store> with "@type relabel" and "@label @name" instead.
      <store>
        @type loki
        url "http://loki-url"
        extra_labels {"env":"dev"}
        label_keys "app,job,namespace,pod,container,filename,fluentd_worker,workers"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        # flush_interval "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_INTERVAL'] || '1s'}"
        flush_mode "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_MODE'] || 'immediate'}"
        chunk_limit_size "#{ENV['FLUENT_LOKI_BUFFER_CHUNK_LIMIT_SIZE'] || '512k'}"
        queue_limit_length "#{ENV['FLUENT_LOKI_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_LOKI_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
      </store>
      <store>
        @type stdout
      </store>
    </match>
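The Fluentd performance-tuning guide suggests multi-process workers once a single Fluentd process saturates one CPU core. A minimal sketch, placed at the top of fluent.conf (the worker count of 2 is an assumption; size it to the node's available cores). Since in_tail is not multi-worker-ready, the source must be pinned to a specific worker:

    <system>
      workers 2
    </system>
    # in_tail cannot run in every worker, so pin the source to one of them
    # (or split the tailed paths across several <worker N> sections):
    <worker 0>
      <source>
        @type tail
        # ... existing in_tail settings ...
      </source>
    </worker>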

I want to know whether this configuration can ingest more than 10,000 lines per second per Fluentd node.
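On the output side, `flush_mode immediate` flushes every chunk as soon as it is written, which tends to produce many small HTTP pushes to Loki. An interval-based buffer is often higher-throughput; a sketch with placeholder values (tune the interval, thread count, and chunk size to your actual rate):

    <buffer>
      flush_mode interval
      flush_interval 1s
      flush_thread_count 8
      chunk_limit_size 1m
    </buffer>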

  • These performance-related links might be helpful: https://docs.fluentd.org/deployment/performance-tuning-single-process and https://docs.fluentd.org/v/0.12/articles/performance-tuning-single-process. – Azeem Nov 28 '22 at 13:52
