317

When trying to post documents to Elasticsearch as normal I'm getting this error:

cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)];

I also see this message on the Elasticsearch logs:

flood stage disk watermark [95%] exceeded ... all indices on this node will marked read-only
– Sean Hammond

7 Answers

505

This happens when Elasticsearch thinks the disk is running low on space, so it puts itself into read-only mode.

By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.

The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.

For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
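To see how close each node actually is to the watermark, the cat allocation API shows per-node disk usage. A minimal check, assuming a local node on port 9200 with no authentication:

curl -s 'http://localhost:9200/_cat/allocation?v&h=node,disk.percent,disk.avail,disk.total'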

The right solution depends on the context - for example a production environment vs a development environment.

Solution 1: free up disk space

Freeing up enough disk space so that more than 5% of the disk is free will solve this problem. Elasticsearch won't automatically take itself out of read-only mode once enough disk is free, though; you'll have to do something like this to unlock the indices:

$ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
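To verify that the block is actually gone, you can read the index settings back. A sketch using the same placeholder endpoint and a hypothetical index name my-index (filter_path just trims the response to the blocks section; an empty response means no blocks are set):

$ curl -s "https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/my-index/_settings?pretty&filter_path=*.settings.index.blocks"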

Solution 2: change the flood stage watermark setting

Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else. It can either be set to a lower percentage or to an absolute value. Here's an example of how to change the setting from the docs:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
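If you're not using the Kibana Dev Tools console, the same example can be sent with curl (assuming a local node on port 9200; adjust the host and the values to suit your disk sizes):

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}'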

Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.

– Sean Hammond

  • Hi. But I am getting this error even when there is enough free space on my system. Are there other reasons that could cause this issue? – Sankalpa Timilsina Jan 23 '19 at 01:43
  • I have the same issue even though I have 82.43% of the disk available. I fix it with the curl command, but after a few days I get the same error again. – manu Jun 19 '19 at 08:21
  • @SankalpaTimilsina did you get an answer? I am facing the same issue. – Malik Faiq Apr 23 '20 at 18:15
  • You might get the following error using the curl command: `curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`. It means you don't have Elasticsearch configured for TLS, so just use http instead. – kiril Feb 17 '21 at 11:28
  • Make sure you restart Elasticsearch after freeing up disk space. – Janac Meena Mar 16 '21 at 20:42
  • I tried Solution 1 and it worked fine. I was using ES7 for Magento development. – Harish ST Dec 11 '21 at 18:38
  • For those getting "missing authentication credentials for REST request [/_all/_settings]", just add the elastic credentials as `-u username:password` at the end of the curl commands suggested in the replies to this post. – Mathews Edwirds Jan 12 '23 at 17:46
237

By default, an Elasticsearch installation goes into read-only mode when you have less than 5% of free disk space. If you see errors similar to this:

Elasticsearch::Transport::Transport::Errors::Forbidden: [403] {"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}

Or if in /usr/local/var/log/elasticsearch.log you see logs similar to:

flood stage disk watermark [95%] exceeded on [nCxquc7PTxKvs6hLkfonvg][nCxquc7][/usr/local/var/lib/elasticsearch/nodes/0] free: 15.3gb[4.1%], all indices on this node will be marked read-only

Then you can fix it by running the following commands:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
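Note that disabling cluster.routing.allocation.disk.threshold_enabled turns off the disk safety check entirely, so once you've freed up space you probably want to turn it back on. A sketch with the same localhost assumption (setting it to null restores the default of true):

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": null } }'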
– Payam Khaninejad
60
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

From: https://techoverflow.net/2019/04/17/how-to-fix-elasticsearch-forbidden-12-index-read-only-allow-delete-api/

– zaibatsu
  • I'm getting a `{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [null] and no indices exist"` with your command, any idea? – Cyril Duchon-Doris Jan 14 '20 at 09:01
  • Thanks! My disk was running out of space. Even after I freed up some space, the problem remained. This command solved my problem! – Fred Mar 21 '20 at 13:33
  • This is the correct solution for modern Elasticsearch versions. However, it didn't work with `_all`. I had to apply it to each index manually. – rubik Apr 01 '20 at 10:07
  • @rubik can you please mention how you "apply it to each index manually"? I am new to Elasticsearch and facing the same issue where `_all` is not working. – rom Apr 12 '20 at 01:39
  • @rom Sure. Just replace `_all` with the index name, and repeat the request for each index. – rubik Apr 12 '20 at 09:46
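Building on @rubik's comment above, a minimal shell sketch (assuming a local, unauthenticated node on port 9200) that lists every index via the cat indices API and clears the block on each one individually:

for idx in $(curl -s 'http://localhost:9200/_cat/indices?h=index'); do
  # Reset the read-only/allow-delete block on this index
  curl -XPUT -H "Content-Type: application/json" "http://localhost:9200/$idx/_settings" -d '{"index.blocks.read_only_allow_delete": null}'
done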
42

This error is usually observed when your machine is low on disk space. Follow these steps to resolve it:

  1. Reset the read-only index block on the index:

    $ curl -X PUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
    
    Response:
    {"acknowledged":true}
    
  2. Update the low watermark to at least 50 gigabytes free, the high watermark to at least 20 gigabytes free, and the flood stage watermark to 10 gigabytes free, and update the cluster info every minute:

     Request:
     $ curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
     {
       "transient": {
         "cluster.routing.allocation.disk.watermark.low": "50gb",
         "cluster.routing.allocation.disk.watermark.high": "20gb",
         "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
         "cluster.info.update.interval": "1m"
       }
     }'
    
      Response:
      {
        "acknowledged" : true,
        "persistent" : { },
        "transient" : {
          "cluster" : {
            "routing" : {
              "allocation" : {
                "disk" : {
                  "watermark" : {
                    "low" : "50gb",
                    "flood_stage" : "10gb",
                    "high" : "20gb"
                  }
                }
              }
            },
            "info" : {"update" : {"interval" : "1m"}}
          }
        }
      }

After running these two commands, run the first command again so that the index does not go back into read-only mode.
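To confirm the new watermarks are actually in place, you can read the cluster settings back (a verification step, not part of the original steps); the transient section of the response should show the values you just set:

$ curl -X GET "http://127.0.0.1:9200/_cluster/settings?pretty"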

– Ishaq Khan
9

Changing the settings with the following command alone did not work in my environment:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

I also had to run the Force Merge API command:

curl -X POST "localhost:9200/my-index-000001/_forcemerge?pretty"

ref: Force Merge API
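Force merge rewrites an index's segments and can reclaim the space held by deleted documents, which may be what pushed the node over the watermark. To see whether it helped, one option is to compare the index size before and after with the cat indices API (assuming a local node; my-index-000001 is the example index from above):

curl -s 'http://localhost:9200/_cat/indices/my-index-000001?v&h=index,docs.count,docs.deleted,store.size'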

– Waldron
2

Even if disk usage comes back under 95%, the issue will still persist.

A short-term solution is to raise the watermark limits above 95%. The steps below are written for the Windows command prompt.

a. Create a JSON file with the following parameters:


{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

b. Name it anything, e.g. json.txt.

c. Type the following command in the command prompt:

>curl -X PUT "localhost:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d @json.txt

d. The following output is received:

{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "disk" : {
            "watermark" : {
              "low" : "90%",
              "flood_stage" : "97%",
              "high" : "95%"
            }
          }
        }
      }
    }
  },
  "transient" : { }
}

e. Create another JSON file with the following parameter:


{
  "index.blocks.read_only_allow_delete": null
}

f. Name it anything, e.g. json1.txt.

g. Type the following command in the command prompt:

>curl -X PUT "localhost:9200/*/_settings?expand_wildcards=all" -H "Content-Type: application/json" -d @json1.txt

h. You should get the following output:

{"acknowledged":true}

i. Restart the ELK stack/Kibana and the issue should be resolved.
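Once disk usage is back under control, you may want to undo the persistent watermark overrides; setting them to null resets them to the defaults. A sketch using the same @file approach, with a hypothetical json2.txt containing:

{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null
  }
}

applied with:

>curl -X PUT "localhost:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d @json2.txt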
1

A nice guide from the ELK team:

https://www.elastic.co/guide/en/elasticsearch/reference/master/disk-usage-exceeded.html

It worked for me with ELK 7.x

– Dmytro Nesteriuk