
I have a weird problem: this setup seems to have been working fine until today, and I can't tell what has changed since then. I run `docker-compose up --build --force-recreate` and the build fails, saying that it can't resolve the host name.

The issue is specifically with the curl commands inside one of the Dockerfiles:

USER logstash
WORKDIR /usr/share/logstash
RUN ./bin/logstash-plugin install logstash-input-beats

WORKDIR /tmp
COPY templates/winlogbeat.template.json winlogbeat.template.json
COPY templates/metricbeat.template.json metricbeat.template.json

RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/metricbeat-6.3.2 -d@metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/winlogbeat-6.3.2 -d@winlogbeat.template.json

Originally, I had those commands running inside the Elasticsearch container, but it stopped working, reporting `Could not resolve host: elasticsearch; Unknown error`.

I thought maybe it was trying to run the RUN commands too soon, so I moved the process to the Logstash container, but the issue remains. Logstash depends on Elasticsearch, so Elasticsearch should be up and running by the time the Logstash container tries to run this.

I've tried deleting images, containers, the network, etc., but nothing lets me run these curl commands during the build process.

I'm thinking that perhaps the Docker daemon is caching DNS names, but I can't figure out how to reset it, as I've already deleted and recreated the network several times.

Can anyone offer any ideas?

Host: Ubuntu Server 18.04

SW: Docker-CE (current version)

ELK stack: All are the official 6.3.2 images provided by Elastic.

docker-compose.yml:

version: '2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
#    ports:
#      - "9200:9200"
#      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      HOSTNAME: "elasticsearch"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "5044:5044"
      - "5045:5045"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
# Port 5601 is not exposed outside of the container
# Can be accessed through Nginx Reverse Proxy only
#    ports:
#      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  nginx:
    build:
      context: nginx/
    environment:
      - APPLICATION_URL=http://docker.local
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - elk
    depends_on:
      - elasticsearch

  fouroneone:
    build:
      context: fouroneone/
# No direct access, only through Nginx Reverse Proxy
#    ports:
#      - "8181:80"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:
Lucas Krupinski
  • It would be helpful if you could share the relevant section of the docker-compose.yml as well. – Alessandro Cosentino Jul 30 '18 at 23:10
  • Hi, I've added the entire docker compose to my post; – Lucas Krupinski Jul 30 '18 at 23:17
  • `docker build` always runs on the default network and it will never be able to resolve other containers' DNS names; but regardless, pushing to a specific Elasticsearch server isn't something you want to do during a reusable image's build cycle. – David Maze Jul 31 '18 at 00:15
  • @LucasKrupinski I believe what you want is to execute the curl commands under CMD and not RUN. See this for the difference: https://stackoverflow.com/q/37461868/825190 – Alessandro Cosentino Jul 31 '18 at 13:54
  • Hmm... Changing to CMD, the build process does continue, though the curl still fails. I think I must have originally used CMD and never actually saw the error occurring; as my test Beats clients were still loading the templates themselves, I hadn't noticed. RUN still causes it to fail completely. Obviously I need to read the documentation; I'm missing something! But at least I'm copying the templates into the container so I can `docker exec` the commands after the container builds. – Lucas Krupinski Jul 31 '18 at 14:47
  • So does moving it to RUN cause the same issue? – Alessandro Cosentino Jul 31 '18 at 15:09
  • Yes - obviously, I'm new at this. Earlier I THOUGHT my `curl` commands were executing properly, but those were `curl` commands retrieving files from outside the service; no matter what I do, I can't get containers to resolve each other during the build, with either RUN or CMD. The best I've done at this point is copy the files into the containers with install scripts to execute once after the build is complete. – Lucas Krupinski Aug 01 '18 at 17:55

1 Answer


Running curl against Elasticsearch at build time is the wrong shortcut: Elasticsearch may not be up yet, and the Dockerfile may be the wrong place for this altogether.

Also, I would not put this script in the Dockerfile; at most I would use it to alter the ENTRYPOINT of the image, if I really wanted to stay in the Dockerfile (and again, I would not advise it).

The best thing to do here is to keep the logstash service in docker-compose.yml built from a Dockerfile that only installs the updated input plugin, and remove all the other lines from the Dockerfile. You could then have a logstash_setup service that does the setup bits, using the logstash image or, even cleaner, a basic centos image, which has bash and curl installed - since all you do is run a couple of curl commands passing some files (a compose sketch follows the script below).

The script I am talking about might look something like this:

#!/bin/bash
set -euo pipefail
es_url=http://elasticsearch:9200
# Wait for Elasticsearch to start up before doing anything.
until curl -s "$es_url" -k -o /dev/null; do
    sleep 1
done
# Then load the templates (the same curl commands that were in the Dockerfile).
curl -XPUT -H 'Content-Type: application/json' "$es_url/_template/metricbeat-6.3.2" -d@metricbeat.template.json
curl -XPUT -H 'Content-Type: application/json' "$es_url/_template/winlogbeat-6.3.2" -d@winlogbeat.template.json
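
A minimal sketch of how such a setup service could be declared in the docker-compose.yml, assuming the script above is saved as ./logstash/setup.sh and the template JSON files live in ./logstash/templates (the service name, image tag, and paths here are illustrative, not taken from the original setup):

  logstash_setup:
    image: centos:7
    working_dir: /templates
    volumes:
      - ./logstash/templates:/templates:ro
      - ./logstash/setup.sh:/setup.sh:ro
    # Run the wait-and-upload script once, on the same network as elasticsearch.
    command: ["/bin/bash", "/setup.sh"]
    networks:
      - elk
    depends_on:
      - elasticsearch

The container simply exits once the templates are loaded; because it runs at container start rather than at build time, the elasticsearch hostname resolves normally on the elk network.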
Julien
  • Hi! I understand inserting the templates as part of the Elastic build process makes my container less portable, is that the concern? I was thinking about this - creating one-off services to run once the rest of the processes were running, but I thought that would be even more frowned upon. – Lucas Krupinski Jul 31 '18 at 14:49
  • The concern is that the curl has no effect on the logstash image that is being built, so it should not be in the Dockerfile used to create a logstash image. What you could do is put it in the docker-compose.yml as a script run in a centos-based image - or, just as suitable, you can simply use volumes for the logstash service and then, in the logstash config, make use of the elasticsearch output plugin's ability to install templates (see the sketch after these comments): https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-template – Julien Jul 31 '18 at 20:21
  • Thanks - until I figure this out more, I'm just copying the templates into the container during the build process, along with a shell script to run the curl commands once manually. The reason I'm doing this within the Logstash container is that, so far, my Elasticsearch container isn't accessible outside Docker (ports 9200 and 9300 aren't open), so I can't curl from the host during setup; it has to be done from within the service, I believe... – Lucas Krupinski Aug 01 '18 at 17:46
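
For reference, the template options of the elasticsearch output plugin mentioned in the comment above look roughly like this in a Logstash pipeline config; the template path and template name are illustrative and assume the JSON file has been mounted into the Logstash container:

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # Have Logstash install the template itself at startup.
    manage_template => true
    template => "/usr/share/logstash/templates/winlogbeat.template.json"
    template_name => "winlogbeat-6.3.2"
    template_overwrite => true
  }
}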