I'm using Spark to write a heavy application that requires 50 machines on a cluster reading from and writing to the graph.
At the moment I'm testing it locally, which means 50 threads start in parallel, and each of them initializes its own database connection.
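Roughly, the local test looks like the sketch below (the class name, properties path, and worker body are placeholders, not my real code): each worker opens its own TitanGraph connection, the same way a Spark task on each machine would.

    import com.thinkaurelius.titan.core.TitanFactory;
    import com.thinkaurelius.titan.core.TitanGraph;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class LocalGraphTest {
        // placeholder path to my Titan/DynamoDB properties file
        private static final String GRAPH_PROPERTIES = "conf/titan-dynamodb.properties";
        private static final int WORKERS = 50;

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
            for (int i = 0; i < WORKERS; i++) {
                pool.submit(() -> {
                    // each worker opens its own connection, like one Spark task per machine
                    TitanGraph graph = TitanFactory.open(GRAPH_PROPERTIES);
                    try {
                        // ... read/write vertices and edges ...
                    } finally {
                        graph.shutdown(); // graph.close() on Titan 1.0
                    }
                });
            }
            pool.shutdown();
        }
    }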
For some reason, I get this error:
16/03/16 21:18:19 WARN state.meta: [Jacob "Jake" Fury] failed to find dangling indices
java.nio.file.FileSystemException: /tmp/searchindex/data/elasticsearch/nodes/32/indices: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:457)
at org.elasticsearch.env.NodeEnvironment.findAllIndices(NodeEnvironment.java:530)
at org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.clusterChanged(LocalGatewayMetaState.java:245)
at org.elasticsearch.gateway.local.LocalGateway.clusterChanged(LocalGateway.java:215)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:467)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I'm not using Elasticsearch at all in my configuration file, and all my indexes are composite. Does the Titan DynamoDB implementation use it internally? How can I solve this exception?
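For reference, my graph configuration is roughly equivalent to the sketch below (property names follow the Titan/DynamoDB backend docs; the endpoint value is a placeholder): only the DynamoDB storage backend is set, and there is no index.search.* entry, which is why I expected Elasticsearch not to be involved.

    import com.thinkaurelius.titan.core.TitanFactory;
    import com.thinkaurelius.titan.core.TitanGraph;

    public class GraphConfig {
        public static TitanGraph open() {
            return TitanFactory.build()
                    // DynamoDB storage backend, no index backend configured
                    .set("storage.backend",
                         "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager")
                    .set("storage.dynamodb.client.endpoint", "http://localhost:4567") // placeholder
                    // note: no "index.search.backend" = "elasticsearch" entry anywhere
                    .open();
        }
    }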