I'm using Elasticsearch version 2.3.5. I have to recover all of the data from the backup disks. Everything was recovered except two shards. While checking the logs, I found the following error.
ERROR:
Caused by: java.nio.file.NoSuchFileException: /data/<cluster_name>/nodes/0/indices/index_name/shard_no/index/_c4_49.liv
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
    at java.nio.channels.FileChannel.open(FileChannel.java:287)
    at java.nio.channels.FileChannel.open(FileChannel.java:335)
    at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
    at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
    at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
    at org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:83)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:73)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
    at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:435)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:100)
    at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:283)
    ... 12 more
Can anyone suggest why this is happening, or how I can skip this particular file?
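Since the missing `_c4_49.liv` file is a Lucene live-docs file for a single segment, I was considering running Lucene's CheckIndex against that shard's index directory to see whether the broken segment can simply be dropped. A rough sketch of what I had in mind (only a sketch, assuming the lucene-core 5.5.0 jar bundled with ES 2.3.5 is on the classpath and using the shard path from the error above):

// Diagnostic sketch: check the shard's Lucene index and optionally drop the
// broken segment. Run only while the node is stopped, on a copy of the data.
import java.nio.file.Paths;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.FSDirectory;

public class ShardCheck {
    public static void main(String[] args) throws Exception {
        // Shard path taken from the error message above
        String shardIndexPath = "/data/<cluster_name>/nodes/0/indices/index_name/shard_no/index";
        try (FSDirectory dir = FSDirectory.open(Paths.get(shardIndexPath));
             CheckIndex check = new CheckIndex(dir)) {
            CheckIndex.Status status = check.checkIndex();
            System.out.println("index clean = " + status.clean);
            // exorciseIndex() would remove the unreadable segment (and all of its
            // documents) from the index -- commented out because it is destructive.
            // if (!status.clean) check.exorciseIndex(status);
        }
    }
}

Would exorciseIndex() (or the equivalent `-exorcise` flag of the CheckIndex command-line tool) be a reasonable way to skip this file, given that it permanently discards the documents in the affected segment? Or is there a safer option?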
Thanks in advance.