I'm running Elasticsearch version 2.3.5 and need to recover all of the data from backup disks. Everything was recovered except two shards. While checking the logs, I found the following error.

ERROR:

Caused by: java.nio.file.NoSuchFileException: /data/<cluster_name>/nodes/0/indices/index_name/shard_no/index/_c4_49.liv
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
        at java.nio.channels.FileChannel.open(FileChannel.java:287)
        at java.nio.channels.FileChannel.open(FileChannel.java:335)
        at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
        at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
        at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
        at org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:83)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:73)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
        at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:435)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:100)
        at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:283)
        ... 12 more

Can anyone suggest why this is happening, or how I can skip this particular file?

Thanks in advance.

1 Answer

Unfortunately, restoring Elasticsearch from a filesystem backup is not a reliable way to recover your data, and it is expected to fail like this sometimes. You should always use snapshot and restore instead. Your version is rather old, but more recent versions include this warning in the docs, and it applies to your version too:

WARNING: You cannot back up an Elasticsearch cluster by simply copying the data directories of all of its nodes. Elasticsearch may be making changes to the contents of its data directories while it is running; copying its data directories cannot be expected to capture a consistent picture of their contents. If you try to restore a cluster from such a backup, it may fail and report corruption and/or missing files. Alternatively, it may appear to have succeeded though it silently lost some of its data. The only reliable way to back up a cluster is by using the snapshot and restore functionality.

It is possible that the restore silently lost data in other shards too; there is no way to tell. Assuming you don't also have a snapshot of the data held in the lost shards, the only way to recover it is to reindex it from its source.
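
For reference, the snapshot workflow looks roughly like this on 2.x. This is a minimal sketch: /mnt/es_backups, my_backup, and snapshot_1 are placeholder names, and the repository location must be a shared filesystem that is listed under path.repo in elasticsearch.yml on every node.

    # Register a filesystem snapshot repository (path is a placeholder)
    curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
      "type": "fs",
      "settings": { "location": "/mnt/es_backups" }
    }'

    # Take a snapshot of the whole cluster and wait for it to complete
    curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

    # Restore the snapshot (the target indices must be closed or deleted first)
    curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'

Unlike copying the data directories, snapshots are taken consistently even while the cluster is accepting writes, so they can be restored without hitting missing-file errors like the one above.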

Dave Turner