39

I have added new mappings (mainly not_analyzed versions of existing fields), and now I have to figure out how to reindex the existing data. I have tried following the guide on the Elasticsearch website, but it is just too confusing. I have also tried plugins (elasticsearch-reindex, allegro/elasticsearch-reindex-tool). I have looked at ElasticSearch - Reindexing your data with zero downtime, which is a similar question. I was hoping not to have to rely on external tools (if possible) and to use the bulk API instead (as with the original insert).

I could easily rebuild the whole index, as it's really read-only data, but that won't work in the long term if I want to add more fields etc. once I'm in production. I was wondering if anyone knows of an easy-to-follow solution, or steps, for a relative novice to ES. I'm on version 2 and using Windows.

  • What point version of ElasticSearch are you using? If you are using 2.3, the native _reindex api is available. It can do precisely what you're looking for. I'm not sure which guide you are referring to ("the guide on elastic search website") but this is the docs on the reindex api: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html If I'm not mistaken, you can reindex into the same index, effectively leaving the data in place. There are document version issues you have to be aware of though. – Jeff Gandt Jul 18 '16 at 17:23
  • Yeah I had this problem some months ago but I too noticed the reindex API being available... Wasn't able to verify if you can reindex into the same index – metase Jul 19 '16 at 19:46
  • It seems you cannot reindex into the same index – metase Jul 21 '16 at 21:19
  • I've the same problem. You can check this [answer](https://stackoverflow.com/questions/45266969/reindexing-using-nest-v5-4-elasticsearch). – NatsuDragonEye Aug 07 '17 at 11:04
  • Here is a small process for creating new mappings on an existing index (with re-index): https://codeburst.io/modify-elasticsearch-mappings-and-settings-without-downtime-223911c0e521 – rap-2-h Nov 05 '18 at 17:06

5 Answers

25

Re-indexing means reading the data, deleting the data in Elasticsearch, and ingesting the data again. There is no such thing as "change the mapping of existing data in place"; all the re-indexing tools you mentioned are just wrappers around read->delete->ingest.
You can always adjust the mapping for new indices and add fields later. All new fields will then be indexed according to that mapping. Or use dynamic mapping if you are not in control of the new fields.
Have a look at Change default mapping of string to "not analyzed" in Elasticsearch to see how to use dynamic mapping to get not_analyzed string fields.
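
For illustration, here is a minimal dynamic-template sketch for ES 2.x that maps every newly seen string field as not_analyzed (the index name my_new_index is a placeholder):

PUT /my_new_index
{
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      ]
    }
  }
}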

Re-indexing is very expensive. A better way is to create a new index and drop the old one. To achieve this with zero downtime, use an index alias for all your clients. Think of an index called "data-version1". In steps:

  • create your index "data-version1" and give it an alias named "data"
  • only use the alias "data" in all your client applications
  • to update your mapping: create a new index (with the new mapping) called "data-version2" and put all your data into it (you can use the _reindex API for that)
  • to switch from version1 to version2: drop the alias "data" on version1 and create an alias "data" on version2 (or first create, then drop). In the time between those two steps your clients will see either no data or duplicate data, but the window between dropping and creating the alias should be so short that your clients shouldn't notice it. (An atomic variant is sketched below.)

It's good practice to always use aliases.
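
As a sketch, the two alias actions can also be sent in a single _aliases request, which applies them together so there is no window at all (index names as above):

POST /_aliases
{
  "actions": [
    { "remove": { "index": "data-version1", "alias": "data" } },
    { "add":    { "index": "data-version2", "alias": "data" } }
  ]
}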

dtrv
  • Thanks for replying. I wanted to lean more towards the "zero downtime" approach. I can push in another dataset again, which will take 15-20 mins, with a new version of the mapping that has both analyzed and not_analyzed fields present (that is a backup plan). Really I wanted to explore the option of not having to do that when I'm in production – metase Nov 22 '15 at 22:35
  • You can add a new mapping only if you create a new index - sorry, that wasn't clear in my post; I added this above. Most users have separate indices for each period in time (let's say daily). New fields and/or new mappings are then applied to all newly created indices. I also added some thoughts on zero downtime to the post. – dtrv Nov 24 '15 at 09:16
  • @dtrv do you know what would happen if new data was indexed by clients into "data-version1" while the reindex command was running? Will it also get picked up? – cah1r Sep 06 '22 at 10:22
  • @cah1r Clients should use alias "data" and not index into "version1". Indexing into an alias is possible if there is only one index with that alias. And with that it's clear where the new data get indexed: into index "version1" before alias switching and into "version2" afterwards. You could also set the index to read-only to avoid adding new data after the reindex process has been started. – dtrv Sep 07 '22 at 13:12
  • If an alias has more than one index and you want to index via that alias, see the property `is_write_index` of the `_alias` API (sketched after these comments). – dtrv Sep 07 '22 at 13:19
  • @dtrv ok :) but the question still stands. You mentioned using the reindex API to migrate the data. So let's say you run the command. There are 10 million docs to migrate, and the operation will run for 15 minutes. If during those 15 minutes 100 new docs are added to the first index, will they be picked up by the reindex operation that is in progress? So in version2 we should have 10 million and 100 docs. Generally, after the switch we don't want to be missing any data. – cah1r Sep 08 '22 at 06:46
  • The 100 docs will not be in the new index. That's why I suggested putting the old index in read-only mode. – dtrv Sep 09 '22 at 09:18
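
For reference, a hedged sketch of the `is_write_index` approach mentioned in the comments above (available from Elasticsearch 6.4; index names reuse the example from the answer):

POST /_aliases
{
  "actions": [
    { "add": { "index": "data-version1", "alias": "data", "is_write_index": false } },
    { "add": { "index": "data-version2", "alias": "data", "is_write_index": true } }
  ]
}
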
14

With version 2.3.4 a new API, _reindex, is available, which will do exactly what it says. Basic usage is

POST /_reindex
{
    "source": {
        "index": "currentIndex"
    },
    "dest": {
        "index": "newIndex"
    }
}
metase
  • You could reindex from "currentIndex" to a temporary index and then back to "currentIndex". You can use the op_type and version_type parameters to control how you handle duplicates/overwriting data (sketched below). – Jeff Gandt Jul 22 '16 at 14:28
  • That's what I ended up doing – metase Jul 22 '16 at 23:10
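
A hedged sketch of the round trip described in the comments above (temp_index is a placeholder; op_type and version_type are parameters of the dest section of _reindex):

POST /_reindex
{
  "source": {
    "index": "temp_index"
  },
  "dest": {
    "index": "currentIndex",
    "version_type": "external"
  }
}

With "version_type": "external" documents are only overwritten when the source holds a newer version; "op_type": "create" would instead create only the documents that are missing and report conflicts for the rest.
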
5

If you want, like me, a straight answer to this common and basic problem, which is poorly addressed by Elastic and the community in general, here is the code that works for me.

Assuming you are just debugging and not in a production environment, it is absolutely legitimate to add or remove fields this way, because you don't care about downtime or latency:

# First of all: block writes on the index so that it can be cloned
PUT /my_index/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}

# Clone the index into a temporary index
POST /my_index/_clone/my_index-000001

# Re-enable writes: the destination of the reindex below must be writable
PUT /my_index/_settings
{
  "settings": {
    "index.blocks.write": false
  }
}

# Copy all documents from the temporary index back into the original one to force their reindexing
POST /_reindex
{
  "source": {
    "index": "my_index-000001"
  },
  "dest": {
    "index": "my_index"
  }
}

# Finally, delete the temporary index
DELETE my_index-000001
DavidBu
2

Elasticsearch reindex from a remote host to localhost example (Jan 2020 update)

# show indices on this host
curl 'localhost:9200/_cat/indices?v'

# edit elasticsearch configuration file to allow remote indexing
sudo vi /etc/elasticsearch/elasticsearch.yml

## copy the line below somewhere in the file
>>>
# --- whitelist for remote indexing ---
reindex.remote.whitelist: my-remote-machine.my-domain.com:9200
<<<

# restart the elasticsearch service
sudo systemctl restart elasticsearch

# run reindex from remote machine to copy the index named filebeat-2016.12.01
curl -H 'Content-Type: application/json' -X POST 127.0.0.1:9200/_reindex?pretty -d'{
  "source": {
    "remote": {
      "host": "http://my-remote-machine.my-domain.com:9200"
    },
    "index": "filebeat-2016.12.01"
  },
  "dest": {
    "index": "filebeat-2016.12.01"
  }
}'

# verify index has been copied
curl 'localhost:9200/_cat/indices?v'
smack cherry
0

I faced the same problem, but I couldn't find any resource on updating the current index mapping and analyzer in place. My suggestion is to use the scan and scroll API and reindex your data into a new index with the new mapping and new fields.
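
A minimal sketch of that approach, assuming placeholder names old_index, new_index, and my_type (on ES 2.x you can also pass search_type=scan to skip sorting):

# Open a scroll over the source index
POST /old_index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

# Each response contains a _scroll_id; keep pulling batches with it
POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}

# Push each returned batch into the new index via the _bulk API
POST /new_index/my_type/_bulk
{ "index": { "_id": "1" } }
{ "field1": "value1" }

Repeat the scroll/bulk pair until the scroll returns no more hits.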