
I'm using elasticsearch on my local machine. The data directory is only 37MB in size but when I check logs, I can see:

[2015-05-17 21:31:12,905][WARN ][cluster.routing.allocation.decider] [Chrome] high disk watermark [10%] exceeded on [h9P4UqnCR5SrXxwZKpQ2LQ][Chrome] free: 5.7gb[6.1%], shards will be relocated away from this node

Quite confused about what might be going wrong. Any help?

user247702
Mandeep Singh

6 Answers


From Index Shard Allocation:

... watermark.high controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%.

The size of your actual index doesn't matter; it's the free space left on the device which matters.

If the defaults are not appropriate for your setup, you have to change them.
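The point that only free disk space matters (not the index size) can be sketched as a toy version of the watermark check. This is illustrative only; the function name is made up, and the 90% default is taken from the docs quoted above:

```python
# A minimal sketch of the disk-watermark check, showing why a tiny 37MB
# index can still trip it: Elasticsearch looks at the node's overall
# disk usage, not at the size of any index.

def exceeds_high_watermark(disk_total_bytes, disk_free_bytes, high_pct=90):
    """Return True if disk usage is above the high watermark (default 90%)."""
    used_pct = 100 * (disk_total_bytes - disk_free_bytes) / disk_total_bytes
    return used_pct > high_pct

# A 93GB disk with only 5.7GB free, as in the question's log line
# ("free: 5.7gb[6.1%]"): usage is ~93.9%, so the watermark trips.
total = 93 * 1024**3
free = int(5.7 * 1024**3)
print(exceeds_high_watermark(total, free))  # True
```
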

mark
  • Oh! That was quite stupid of me. Thanks! I just checked, and my hard drive was nearly full. – Mandeep Singh May 17 '15 at 19:31
  • Hey, is this related to Kibana not being accessible on port 5601? I installed Elasticsearch and Kibana, but I get the same error in the Elasticsearch log, and when I try to access Kibana at http://localhost:5601/ it shows nothing. – Mahesh Malpani Feb 02 '16 at 06:38
  • 1
    @MaheshMalpani not much I can add here, except that I suggest you check your drives free disk space. If this isn't the case I suggest you do a bit more research and create a new question. – mark Feb 03 '16 at 09:43
  • 8
    Used setting "cluster.routing.allocation.disk.threshold_enabled: false". Able to clear the threshold error. – Mahesh Malpani Feb 03 '16 at 09:48
  • 1
    @MaheshMalpani I am facing the same issue. Where (which file) exactly you did the changes for "cluster.routing.allocation.disk.threshold_enabled: false" ??? – Jeetendra Aug 07 '20 at 07:37
  • Hi mark, my VM has 110GB free, but my ES disk is only 58GB. How do I overcome this problem? Is this still a shard allocation problem? The node is empty and freshly installed. I hope you are still responding to this. Thank you – yuliansen Apr 27 '21 at 09:25
  • I am having this problem too, even though my data directory is only 40MB and I have lots of free space on my HD (153GB). – Alain Désilets Oct 04 '22 at 21:08
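A couple of comments above ask where the `cluster.routing.allocation.disk.threshold_enabled: false` setting can be made permanent. Besides the cluster-settings API, it can go in the node's config file, typically `/etc/elasticsearch/elasticsearch.yml` on Linux package installs. A sketch (disabling the threshold is only advisable on single-node development setups):

```
# elasticsearch.yml
# Disable disk-based shard allocation decisions entirely.
# Only sensible for single-node development clusters.
cluster.routing.allocation.disk.threshold_enabled: false
```

A node restart is required for changes in elasticsearch.yml to take effect.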

To resolve the issue where the log records:

high disk watermark [90%] exceeded on [ytI5oTyYSsCVfrB6CWFL1g][ytI5oTy][/var/lib/elasticsearch/nodes/0] free: 552.2mb[4.3%], shards will be relocated away from this node

You can update the threshold limit by executing the following curl request:

curl -XPUT "http://localhost:9200/_cluster/settings" \
 -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation.disk.threshold_enabled": false
      }
    }
  }
}'
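If you later want to restore the default disk-based allocation behaviour, the same persistent setting can be reset by sending `null` (a sketch against the same local endpoint as above):

```
curl -XPUT "http://localhost:9200/_cluster/settings" \
 -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}'
```

Note, however, that disabling the threshold only silences the symptom; if the disk really is nearly full, freeing space is the safer fix.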
James Mishra
Jaideep Ghosh
    Just had to append `-H 'Content-Type: application/json'` too – millisami Jul 22 '19 at 09:49
  • That depends on your use-case. By default you can use it in the described manner. Happy coding :) – Jaideep Ghosh Jul 23 '19 at 13:35
  • 1
    @JaideepGhosh I used this command `curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' ` , it worked but after a certain time , its out of space again for my index and again I have to hit the command... is there anyway to fix this permenantly? – Mahesh Jul 13 '20 at 10:01

This slightly modified curl command from the Elasticsearch 6.4 docs worked for me:

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "2gb",
    "cluster.routing.allocation.disk.watermark.high": "1gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "500mb",
    "cluster.info.update.interval": "1m"
  }
}
'

If the curl -XPUT command succeeds, you should see log lines like these in the Elasticsearch terminal window:

[2018-08-24T07:16:05,584][INFO ][o.e.c.s.ClusterSettings  ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.low] from [85%] to [2gb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings  ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.high] from [90%] to [1gb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings  ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.flood_stage] from [95%] to [500mb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings  ] [bhjM1bz] updating [cluster.info.update.interval] from [30s] to [1m]

https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html
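To confirm the transient settings actually took effect, you can also read them back (a sketch, assuming the same local node as above):

```
curl -X GET "localhost:9200/_cluster/settings?pretty"
```

The response should list your transient watermark values.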

Micah Stubbs

It's a warning and won't break anything by itself. Elasticsearch uses the high and low disk watermarks to decide when shards should be moved off a node. A possible solution is to free some disk space, and the warning will disappear.

Even while the warning shows, replicas will simply not be assigned to this node, which is okay; Elasticsearch will keep working.

Flexo
Amitesh Ranjan
  • Memory or disk do you mean? – Emil Feb 07 '17 at 14:01
  • 3
    Contrary to this opinion, this warning will not disappear if there's no more nodes to migrate to. I run a local ES cluster for development and with only 1 node it wasn't able to move it anywhere. It eventually raised an error because my index was read-only due to exceeding the limits that this warning indicates. – Nitrodist Sep 27 '20 at 17:33

Instead of percentages I use absolute values, and I raise the values for better disk use (in pre-prod):

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "1g",
    "cluster.routing.allocation.disk.watermark.high": "500m",
    "cluster.info.update.interval": "5m" 
  }
}

I also lengthen the polling interval (cluster.info.update.interval) so that ES logs these messages less often.

gavenkoa

Clear up some space on your hard drive; that should fix the issue. It should also change the health of your ES cluster from yellow to green (if you hit the issue above, you are likely to see the yellow cluster health issue as well).
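If you're unsure whether the drive is actually near the default 90% high watermark, here is a quick check using only the standard library (the path `/` is just an example; point it at the mount holding your ES data directory):

```python
import shutil

# Disk usage of the filesystem holding the ES data directory
# ("/" here is an example mount point).
usage = shutil.disk_usage("/")
used_pct = 100 * usage.used / usage.total
print(f"disk used: {used_pct:.1f}% (default high watermark: 90%)")
if used_pct > 90:
    print("shards may be relocated away from this node")
```
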

dravit