32

When I try to store anything in Elasticsearch, I get this error:

TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')

I have already inserted about 200 million documents into my index, but I have no idea why this error is happening. I've tried:

curl -u elastic:changeme -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.blocks.read_only":false}}'

As mentioned here: ElasticSearch entered "read only" mode, node cannot be altered

And the result is:

{"acknowledged":true,"persistent":{"cluster":{"blocks":{"read_only":"false"}}},"transient":{}}

But nothing changed. What should I do?

ehsan shirzadi
  • Possible duplicate of [Elasticsearch error: cluster\_block\_exception \[FORBIDDEN/12/index read-only / allow delete (api)\], flood stage disk watermark exceeded](https://stackoverflow.com/questions/50609417/elasticsearch-error-cluster-block-exception-forbidden-12-index-read-only-all) – kenorb Oct 07 '19 at 16:41

7 Answers

60

Try `GET yourindex/_settings`; this will show your index settings. If `read_only_allow_delete` is `true`, then try:

PUT /<yourindex>/_settings
{
  "index.blocks.read_only_allow_delete": null
}

That got my issue fixed.

Please refer to the Elasticsearch disk allocator configuration guide for more detail.

The equivalent curl command is:

curl -X PUT "localhost:9200/twitter/_settings?pretty" -H 'Content-Type: application/json' -d '
{
  "index.blocks.read_only_allow_delete": null
}'
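Since the question is indexing from a Python script, the same settings change can be made client-side. A minimal sketch using only the standard library; the host, port, and index name (`twitter`, as in the curl example) are assumptions you should adapt:

```python
import json
import urllib.request

def read_only_payload():
    """Settings body that resets the read-only / allow-delete block.

    Serializing None produces JSON null, which clears the setting
    back to its default.
    """
    return json.dumps({"index.blocks.read_only_allow_delete": None}).encode("utf-8")

def clear_read_only(host="http://localhost:9200", index="twitter"):
    """PUT the settings change; assumes the cluster is reachable at `host`."""
    req = urllib.request.Request(
        url=f"{host}/{index}/_settings",
        data=read_only_payload(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A successful call returns `{"acknowledged": true}`, the same response the curl command prints.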
coder
truman liu
  • I did it and the response was `{"acknowledged": true}`, but indexing still fails at `es.index(index='pubmed_tokens', doc_type='tokens', body=doc)` (pubmed_tokenizer.py, line 43) with the same error: `TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')` – ehsan shirzadi Jan 02 '18 at 11:02
  • Please read the Elasticsearch guide at the link. Indexes become read-only because there is no more space on your disk; confirm that with `df -h`. Once you use 95% of your disk space, Elasticsearch (which checks disk usage every 30 seconds) turns all indexes into read-only mode. You need to release enough space or change the config as the guide says. – truman liu Jan 02 '18 at 15:50
  • This was the problem, but I moved the elastic folder to another partition (simply moving the folder). Cluster health is yellow and it's still read-only. What should I do? – ehsan shirzadi Jan 03 '18 at 05:09
  • After you get more than 15% free space on your disk, you should unset the read-only mode manually: `PUT yourindex/_settings { "index.blocks.read_only_allow_delete": null }` – truman liu Jan 03 '18 at 11:16
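The disk check described in the comments above (`df -h`, 95% full) can be sketched in Python. The data path here is an assumption; the watermark value is the Elasticsearch default:

```python
import shutil

FLOOD_STAGE_WATERMARK = 0.95  # Elasticsearch's default flood-stage watermark

def exceeds_flood_stage(total_bytes, used_bytes, watermark=FLOOD_STAGE_WATERMARK):
    """True when disk usage is at or above the flood-stage watermark,
    i.e. when Elasticsearch would start applying the read-only block."""
    return used_bytes / total_bytes >= watermark

def data_disk_too_full(path="/var/lib/elasticsearch"):
    """Rough equivalent of eyeballing `df -h` for the data partition."""
    total, used, _free = shutil.disk_usage(path)
    return exceeds_flood_stage(total, used)
```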
29

Last month I faced the same problem. You can run this curl command (or the equivalent request in the Kibana Dev Tools console):

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

I hope it helps

Imran273
6

I faced the same issue when my disk space was full. These are the steps I took:

1- Increase the disk space

2- Update the index read-only mode with the following curl request:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

Jamsheer
  • This saved me! It updates **all** indices that are put in _read only_ mode. To check whether any index is in _read only_ mode: `curl localhost:9200/_cat/_settings/index.blocks*` – kod kristoff Apr 25 '19 at 11:40
1

This happens because of Elasticsearch's default disk usage watermarks; the flood stage watermark is usually 95% of the disk size.

This happens when Elasticsearch thinks the disk is running low on space so it puts itself into read-only mode.

By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.

The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.

For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.

Quoted from part of this answer
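The arithmetic behind the 1TB example can be checked directly; a quick sketch:

```python
def min_free_bytes(disk_size_bytes, flood_stage=0.95):
    """Free space that must remain so usage stays below the
    flood-stage watermark (95% used by default)."""
    return disk_size_bytes * (1 - flood_stage)

# For a 1 TB drive (decimal units), 5% must stay free,
# which comes to roughly 50 GB.
ONE_TB = 1_000_000_000_000
```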

One solution is to disable it entirely (I found this useful in my local and CI setups). To do so, run these two commands:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
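For reference, the two JSON bodies can be built as plain Python dicts; `json.dumps` turns `False`/`None` into the `false`/`null` values shown in the curl commands above:

```python
import json

def disable_threshold_payload():
    """Body for PUT /_cluster/settings: a transient setting that turns
    off disk-based shard allocation decisions entirely."""
    return {"transient": {"cluster.routing.allocation.disk.threshold_enabled": False}}

def clear_blocks_payload():
    """Body for PUT /_all/_settings: resets read_only_allow_delete on all indices."""
    return {"index.blocks.read_only_allow_delete": None}
```

A transient setting is lost on a full cluster restart, which is often what you want for a local or CI setup.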
Vedant Agarwala
0

Tagging onto this later as I just encountered the problem myself. I took the following steps:

1) Deleted older indexes to free up space immediately; this brought me to around 23% free.

2) Updated the index read-only mode.

I still had the same issue. I checked the Dev Console to see what might still be locked, and nothing was. I restarted the cluster and had the same issue.

Finally, under Index Management I selected the indexes with ILM lifecycle issues and chose to reapply the ILM step. I had to do that a couple of times to clear them all out, but it worked.

Chasester
-1

The problem may be a disk space problem. I had this issue even though I had freed a lot of disk space, so finally I deleted the data folder and it worked: `sudo rm -rf /usr/share/elasticsearch/data/` (note: this deletes all of your indexed data).

SalahAdDin
-1

This solved the issue:

PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}