
I'm trying to run Elasticsearch and Kibana with Docker. I've installed Docker on a virtual machine (Ubuntu Server) and used the docker-compose.yml, elasticsearch.yml and kibana.yml mentioned here: https://stackoverflow.com/a/44005640/1843511

At first it did start the extra plugin (head_540), which was reachable at http://ip_of_my_vm:9100, but Elasticsearch wasn't and neither was Kibana. Kibana gave me a "Too many redirects" error when I tried to reach http://ip_of_my_vm:5601. I figured out that it was trying to redirect me to a login page, probably because X-Pack is delivered with the image and automatically enabled (but apparently not the way it should be, because I couldn't open the URLs). So I tried disabling it by editing elasticsearch.yml:

cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
xpack.security.enabled: false

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"

and kibana.yml:

server.name: kibana
server.host: "0"
xpack.security.enabled: false
xpack.reporting.enabled: false
xpack.monitoring.enabled: false

elasticsearch.url: http://elasticsearch:9200
# elasticsearch.username: "elastic"
# elasticsearch.password: "changeme"
xpack.monitoring.ui.container.elasticsearch.enabled: false
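
As far as I understand, the same switches could also be passed as environment variables on the services in docker-compose.yml instead of mounting config files (the linked compose file already sets SERVER_HOST for Kibana that way). A rough sketch only, not something I have verified on 5.4:

services:
  elasticsearch:
    environment:
      # plain key=value settings should be picked up by the Elasticsearch image
      - xpack.security.enabled=false
  kibana:
    environment:
      # the Kibana image maps upper-cased env vars back onto kibana.yml keys
      - XPACK_SECURITY_ENABLED=false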

Now I can reach Elasticsearch after starting everything with docker-compose up, but Kibana is stuck:

kibana_540       | {"type":"log","@timestamp":"2018-04-27T10:56:26Z","tags":["info","optimize"],"pid":1,"message":"Optimizing and caching bundles for graph, ml, kibana, timelion and status_page. This may take a few minutes"}

Apart from errors like these:

elasticsearch_540 | [2018-04-27T10:58:12,100][ERROR][o.e.x.m.e.l.LocalExporter] failed to get monitoring watch [h7x_x5GCTjqL2wadFDSM8w_logstash_version_mismatch]
elasticsearch_540 | java.lang.IllegalStateException: watch store not started

Elasticsearch itself seems to be working, because I can reach it at http://ip_of_my_vm:9200, but Kibana isn't, and neither is the extra plugin.
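
For completeness, these are the checks I'm running against the VM (the container name comes from the log lines above):

curl http://ip_of_my_vm:9200/_cluster/health?pretty   # Elasticsearch answers
docker logs -f kibana_540                             # Kibana still sits on the optimize step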

Anyone who can help me out with this?

Erik van de Ven

2 Answers


I'm a newbie in this area, but I managed to get it running some time ago. Do you want to dockerize an ELK stack? You don't seem to mention a Logstash service here, yet the error you have is about "logstash_version_mismatch".

Can you share your docker-compose.yml to make it clearer? :)
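
If a full ELK stack is the goal, a Logstash service could be added next to the others. Only a rough sketch, assuming the official 5.4.0 image to match your Elasticsearch/Kibana versions and a hypothetical ./logstash.conf pipeline:

  logstash:
    image: docker.elastic.co/logstash/logstash:5.4.0
    container_name: logstash_540
    volumes:
      # hypothetical pipeline file; adjust to your own inputs/outputs
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - 5044:5044   # e.g. a beats input, if you define one
    depends_on:
      - elasticsearch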

Marien
  • I will, somehow it just started working. I guess it just took a very long time to optimize and cache the X-Pack bundles, because I was able to reach the :5601 URL, but with an internal server error. After rebooting Docker it all started working! I will provide the yml files – Erik van de Ven Apr 28 '18 at 09:41

So it seems it just took quite a long time to optimize the bundles: I checked the server this morning and everything seems to work. I had to restart Docker because of an internal server error in Kibana (the exact commands are at the bottom of this answer), but everything is working just fine now. These are my YAML files. By the way, I disabled X-Pack because I don't use it and I had some issues with it from the beginning:

docker-compose.yml

version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    container_name: elasticsearch_540
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 4g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    container_name: kibana_540
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
  headPlugin:
    image: mobz/elasticsearch-head:5
    container_name: head_540
    ports:
      - 9100:9100

volumes:
  esdata:
    driver: local

elasticsearch.yml

cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
xpack.security.enabled: false

xpack.graph.enabled: false
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.watcher.enabled: false

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"

kibana.yml

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200

xpack.security.enabled: false
xpack.graph.enabled: false
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.reporting.enabled: false
xpack.monitoring.ui.container.elasticsearch.enabled: false

As I mentioned, I took the files from https://stackoverflow.com/a/44005640/1843511, but disabled X-Pack in elasticsearch.yml and kibana.yml, and I increased the heap size because Elasticsearch was running out of memory a few times.
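
To check whether the bigger heap is actually picked up, the node info API can be queried (note that with a 4g heap and mem_limit: 4g there is no headroom left for off-heap memory, so that's something to keep an eye on):

curl http://ip_of_my_vm:9200/_nodes/jvm?pretty   # look for jvm.mem.heap_max_in_bytes per node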

EDIT:

I've shared my current cluster configuration over here: https://gist.github.com/ErikvdVen/8207e39b27472361378bd3909aa247ea
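
For completeness, the restart of Docker I mentioned above was nothing fancier than bouncing the containers; from memory it was one of these:

docker-compose restart kibana                    # restart just the Kibana container
docker-compose down && docker-compose up -d      # or recreate the whole stack
sudo systemctl restart docker                    # or restart the Docker daemon itself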

Erik van de Ven