
I tried to run Kibana and Elasticsearch in Docker containers, and it seems that Kibana is having trouble reaching Elasticsearch.

Here are my steps:

1) Create network

docker network create mynetwork --driver=bridge

2) Run Elasticsearch Container

docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch_2_4 --network mynetwork elasticsearch:2.4

3) Run Kibana Container

docker run -i --network mynetwork -p 5601:5601 kibana:4.6

I get a JSON output when I connect to Elasticsearch via http://localhost:9200/ through my browser.

But when I open http://localhost:5601/ I get

Unable to connect to Elasticsearch at http://elasticsearch:9200.

Alternate approach:

I still get a similar error when I try

docker run -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 -p 5601:5601 kibana:4.6

where I get the error

Unable to connect to Elasticsearch at http://127.0.0.1:9200.

My blog post based on the accepted answer: https://gunith.github.io/docker-kibana-elasticsearch/

Gunith D

3 Answers


There is some misunderstanding about what localhost or 127.0.0.1 means when running a command inside a container. Because every container has its own network namespace, localhost is not your real host system but the container itself. So when you run Kibana with the ELASTICSEARCH_URL variable pointing to localhost:9200, the Kibana process will look for Elasticsearch inside the Kibana container, where of course it isn't running.

You already created a custom network and referenced it when starting the containers. All containers running in the same network can reach each other by name on their exposed ports (see the images' Dockerfiles). As you named your Elasticsearch container elasticsearch_2_4, you can reference its HTTP endpoint as http://elasticsearch_2_4:9200.
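In other words, the value for ELASTICSEARCH_URL is just the container name plus Elasticsearch's HTTP port. A minimal sketch of assembling it (the docker invocation is left as a comment since it requires the containers from the question to exist):

```shell
# Container name chosen with --name in the question
ES_CONTAINER=elasticsearch_2_4
ES_URL="http://${ES_CONTAINER}:9200"
echo "$ES_URL"    # http://elasticsearch_2_4:9200

# Then point Kibana at it on the shared network:
#   docker run -d --network mynetwork -e ELASTICSEARCH_URL="$ES_URL" -p 5601:5601 kibana:4.6
```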

docker run -d --network mynetwork -e ELASTICSEARCH_URL=http://elasticsearch_2_4:9200 -p 5601:5601 kibana:4.6

As long as you don't need to access the Elasticsearch instance directly from the host, you can even omit mapping ports 9200 and 9300 to your host.

Instead of starting all containers on their own, I would also suggest using docker-compose to manage all services and their parameters. You should also consider mounting a local folder as a volume so the data is persisted. This could be your compose file. Add a networks section if you need the external network; otherwise this setup just creates a network for you.

version: "2"

services:

  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"
    volumes:
      - ./esdata/:/usr/share/elasticsearch/data/

  kibana:
    image: kibana:4.6
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
Andreas Jägle
  • for additional clarity, create a `docker-compose.yml` file with the docker compose snippet at the bottom of the answer, then run `docker-compose up` to stand up the stack. – scald Sep 12 '17 at 00:19
  • Isn't 9300 for internal node communication within a cluster? If that's the case, even if I don't need to access ES directly, shouldn't I still expose 9300 at least? – Nico Nov 03 '21 at 03:31

Test:

docker run -d -e ELASTICSEARCH_URL=http://yourhostip:9200 -p 5601:5601 kibana:4.6

You can test with your host IP, or with the IP assigned to the docker0 interface shown in ifconfig.
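On Linux, the docker0 bridge address can be read directly. A sketch (172.17.0.1 is the usual default for docker0, but that is an assumption; verify with the commented commands on your machine):

```shell
# Default address of the docker0 bridge on most Linux installs (assumption):
HOST_IP=172.17.0.1

# Verify on your machine with either of:
#   ip -4 addr show docker0
#   docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

echo "http://${HOST_IP}:9200"   # http://172.17.0.1:9200
```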

Regards

Carlos Rafael Ramirez
    Thanks for the response Carlos, The accepted answer works as suggested by @Andreas.. So the solution is: ``` docker run -d --network mynetwork -e ELASTICSEARCH_URL=http://elasticsearch_2_4:9200 -p 5601:5601 --name kibana_4_6 kibana:4.6 ``` – Gunith D Nov 01 '16 at 07:18

I changed the network configuration for the Kibana container, and after this it works fine:

[Screenshot: Kibana network settings in Kitematic]

giokoguashvili