I tried running the official Metricbeat Docker image as described here (https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html) on a GCP Kubernetes cluster as a DaemonSet, and changed the settings so that it routes traffic to the existing Elasticsearch pod, but I keep getting this error:

2018-02-22T14:04:54.515Z    WARN    transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:55.516Z    ERROR   pipeline/output.go:74   Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:55.517Z    WARN    transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:57.517Z    ERROR   pipeline/output.go:74   Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:04:57.519Z    WARN    transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:05:01.519Z    ERROR   pipeline/output.go:74   Failed to connect: Get http://elasticsearch-logging.kube-system.svc.cluster.local:9200: lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host
2018-02-22T14:05:01.532Z    WARN    transport/tcp.go:36 DNS lookup failure "elasticsearch-logging.kube-system.svc.cluster.local": lookup elasticsearch-logging.kube-system.svc.cluster.local: no such host

The hostname is fine, because other pods are successfully pushing data to Elasticsearch. After some research, this appears to be an issue with the Golang DNS resolver (not Metricbeat itself). Has anyone else run into this issue? Does anyone have a solution?
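
For reference, the output change I made boils down to roughly this (hostname and port as they appear in the errors above; the exact layout of my config may differ slightly):

output.elasticsearch:
  # in-cluster DNS name of the existing Elasticsearch service
  hosts: ["elasticsearch-logging.kube-system.svc.cluster.local:9200"]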

  • For now it works when using the local service IP instead of the Kubernetes DNS name, but that's quick & dirty and I hope the IP won't change that often :-/ – Techradar Feb 23 '18 at 06:15

2 Answers

We had the same problem, and what fixed it was adding this:

hostNetwork: true  
dnsPolicy: ClusterFirstWithHostNet  

in the DaemonSet YAML, at the same level as the containers key (see the sketch below).
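
A minimal sketch of where those two fields sit in the manifest (the apiVersion, labels, and image tag are placeholders, not taken from the answer above):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      # same level as the containers key
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:6.2.1

hostNetwork: true on its own would make the pod inherit the node's resolver configuration; ClusterFirstWithHostNet is what keeps cluster-internal names like *.svc.cluster.local resolvable through the cluster DNS.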

In my case, the problem was that Filebeat was in the kube-system namespace while my Elasticsearch was in the default namespace. To solve it, I created an ExternalName service in the kube-system namespace that points to the Elasticsearch service in the default namespace:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  type: ExternalName
  externalName: elasticsearch.default.svc.cluster.local
  ports:
  - port: 80

I found this solution here.
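
With that service in place, a Beat running in kube-system can reach Elasticsearch through the alias; a rough sketch of the corresponding output setting (the port is an assumption, since an ExternalName service is only a DNS alias and the Beat connects to whatever port Elasticsearch itself exposes, typically 9200):

output.elasticsearch:
  # resolves via the ExternalName alias to elasticsearch.default.svc.cluster.local
  hosts: ["elasticsearch.kube-system.svc.cluster.local:9200"]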