
I have a microservice generated with JHipster v5 and an Elasticsearch 2.4.1 image, both running in a Vagrant CentOS 7 box. The two containers are up, but save and search operations cannot reach the Elasticsearch container.

docker-compose:

  service-app:
    image: "..."
    depends_on:
      - service-mysql
      - service-elasticsearch
      - kafka
      - zookeeper
      - jhipster-registry
    environment:
      - SPRING_PROFILES_ACTIVE=dev,swagger
      - SPRING_CLOUD_CONFIG_URI=http://admin:admin@jhipster-registry:8761/config
      - SPRING_DATASOURCE_URL=jdbc:mysql://service-mysql:3306/service?useUnicode=true&characterEncoding=utf8&useSSL=false
      - SPRING_DATA_CASSANDRA_CONTACTPOINTS=cassandra
      - JHIPSTER_SLEEP=30
      - JHIPSTER_LOGGING_LOGSTASH_HOST=jhipster-logstash
      - JHIPSTER_LOGGING_LOGSTASH_PORT=5000
      - SPRING_DATA_ELASTICSEARCH_CLUSTER-NAME=SERVICE
      - SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=service-elasticsearch:9300
      - SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS=kafka
      - SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES=zookeeper
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://admin:admin@jhipster-registry:8761/eureka
    ports:
      - 60088:8088
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"

  service-elasticsearch:
    image: ...
    volumes:
      - service-elasticsearch:/usr/share/elasticsearch/data/
    environment:
      - network.host=0.0.0.0
      - cluster.name=service
      - discovery.type=single-node
      - CLUSTER_NAME=SERVICE
    logging:
      driver: "json-file"
      options:
        max-size: "100m" 
        max-file: "10"

application-dev.yml:

    data:
        elasticsearch:
            properties:
                path:
                    home: target/elasticsearch

application-prod.yml:

    data:
        jest:
            uri: http://localhost:9200

domain:

Stickman77
  • did you verify the ES running – Amit Jul 04 '19 at 11:11
  • Yes, I checked the logs and it is running, but it does not react when operations are performed. It always prints this: "[2019-07-04 11:27:20,746][INFO ][cluster.routing.allocation.decider] [Gaia] rerouting shards: [high disk watermark exceeded on one or more nodes] [2019-07-04 11:27:50,747][WARN ][cluster.routing.allocation.decider] [Gaia] high disk watermark [90%] exceeded on [eZvec2BWSuuyDYdi8OrdQA][Gaia][/usr/share/elasticsearch/data/SERVICE/nodes/0] free: 2.2gb[5.4%], shards will be relocated away from this node" – Stickman77 Jul 04 '19 at 11:28
  • were u able to resolve the issue – Amit Jul 05 '19 at 06:48

1 Answer


The issue is that one of the ES nodes in your cluster is running low on disk space; that is why you are getting this warning and why operations against the cluster are failing.
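A quick way to confirm this is to check the free space on the filesystem backing the data directory. A minimal sketch, assuming you run it inside the Elasticsearch container (the data path comes from the log message above; adjust it for your setup, falling back to the root filesystem if the path does not exist):

```python
import os
import shutil

# Path reported in the Elasticsearch log; fall back to "/" if absent.
data_path = "/usr/share/elasticsearch/data"
if not os.path.exists(data_path):
    data_path = "/"

usage = shutil.disk_usage(data_path)
free_pct = 100 * usage.free / usage.total
print(f"{data_path}: {free_pct:.1f}% free "
      f"({usage.free / 1024**3:.1f} GiB of {usage.total / 1024**3:.1f} GiB)")
```

If the free percentage printed is below 10%, you have crossed the default high watermark described below.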

Please make sure that you clean up disk space on the ES nodes that log this warning. I have faced this issue a few times, and it does not depend on the Elasticsearch index size: even if you have a very small index on a large disk (say 2 TB), if less than 10% of the disk is free (here almost 200 GB, which is a lot) you will still hit the high watermark, and you need to free up disk space.
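The check Elasticsearch applies can be sketched as follows (a simplification: ES actually uses per-node disk statistics and configurable low/high thresholds, with 90% used as the default high watermark):

```python
# Default high disk watermark: shards are relocated away from a node
# once more than 90% of its disk is used.
HIGH_WATERMARK = 0.90

def exceeds_high_watermark(used_bytes: int, total_bytes: int,
                           threshold: float = HIGH_WATERMARK) -> bool:
    """Return True if disk usage is above the watermark, i.e. the node
    would start relocating shards away, as in the log message above."""
    return used_bytes / total_bytes > threshold

# The log reported 2.2 GB free at 5.4% free, i.e. roughly a 40 GB disk
# that is 94.6% full -- well above the 90% watermark:
total = 40 * 1024**3
free = int(total * 0.054)
print(exceeds_high_watermark(total - free, total))  # True
```

This is why a tiny index can still trigger the warning: the watermark is computed over the whole filesystem, not over the index size.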

Amit