
I am facing an issue with Cassandra: whenever I try to start it, I get a "Too many open files" error.

I have increased the file descriptor limit to 1000000, but I still get the same error.
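For reference, a quick way to check the soft and hard limits actually in effect for the current session (a minimal sketch; note that changes in /etc/security/limits.conf only apply to newly started login sessions):

# Soft and hard limits on open file descriptors for this shell;
# limits.conf changes require a new login session to take effect.
ulimit -Sn
ulimit -Hn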

[UPDATED]

I went through the debug logs; on startup Cassandra is opening many SSTables. Here are the debug logs:

DEBUG [SSTableBatchOpen:2] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-159921-big (60 bytes)
DEBUG [SSTableBatchOpen:1] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-166961-big (49 bytes)
DEBUG [SSTableBatchOpen:4] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-181150-big (57 bytes)
DEBUG [SSTableBatchOpen:3] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-188190-big (49 bytes)
DEBUG [SSTableBatchOpen:2] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-173741-big (54 bytes)
DEBUG [SSTableBatchOpen:1] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-159987-big (45 bytes)
DEBUG [SSTableBatchOpen:3] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-167027-big (49 bytes)
DEBUG [SSTableBatchOpen:4] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-198208-big (53 bytes)
DEBUG [SSTableBatchOpen:1] 2017-06-20 11:03:40,636 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-177001-big (48 bytes)
DEBUG [SSTableBatchOpen:2] 2017-06-20 11:03:40,636 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-184041-big (57 bytes)

Here are the system logs:

ERROR [SSTableBatchOpen:1] 2017-06-19 19:08:40,175 CassandraDaemon.java:205 - Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)
        at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:127) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:91) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:125) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:132) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:177) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:193) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:745) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:706) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:492) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534) ~[apache-cassandra-3.0.9.jar:3.0.9]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_101]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)
        at java.io.FileInputStream.open0(Native Method) ~[na:1.8.0_101]
        at java.io.FileInputStream.open(FileInputStream.java:195) ~[na:1.8.0_101]
        at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[na:1.8.0_101]
        at java.io.FileInputStream.<init>(FileInputStream.java:93) ~[na:1.8.0_101]
        at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:100) ~[apache-cassandra-3.0.9.jar:3.0.9]
        ... 15 common frames omitted
ERROR [SSTableBatchOpen:1] 2017-06-19 19:08:40,177 JVMStabilityInspector.java:140 - JVM state determined to be unstable.  Exiting forcefully due to:
java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)
Arvind
  • Check the existence of this file `/cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db` – Ashraful Islam Jun 19 '17 at 13:44
  • file is present – Arvind Jun 19 '17 at 13:47
  • @Arvind This seems to be a Java error rather than a Cassandra one: 1. Try restarting your box to free up space. 2. If 1 doesn't help, try killing your local Java threads and restarting your IDE. 3. See https://stackoverflow.com/questions/13706409/java-error-too-many-open-files for updating the ulimit. I have faced the same error; for me, restarting worked fine. – anshul Gupta Jun 19 '17 at 13:49
  • Check this http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/troubleshooting/trblshootTooManyFiles_r.html – Ashraful Islam Jun 19 '17 at 13:51
  • 15 GB of space is available, and I have tried restarting the machine; that didn't work for me. I have already increased the ulimit. – Arvind Jun 19 '17 at 13:57
  • Did you use the recommended settings for Linux? http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/troubleshooting/trblshootInsufficientResources_r.html#reference_ds_js4_fdd_2k__recommended-settings-title – Ashraful Islam Jun 19 '17 at 14:04
  • Possible duplicate of [Java Too Many Open Files](https://stackoverflow.com/questions/4289447/java-too-many-open-files) – OrangeDog Jun 19 '17 at 15:03
  • @AshrafulIslam yes, I have followed the recommended settings – Arvind Jun 19 '17 at 15:39
  • @Arvind are you maybe defining the OS limits for one user but running Cassandra with a different user (e.g `root`)? Could you run `ulimit -a` and show us the output? – nastra Jun 19 '17 at 15:49
  • @nastra no, I have updated the limits for every user: `root - memlock unlimited`, `root - nofile 1000000`, `root - nproc 32768`, `root - as unlimited`, `* - memlock unlimited`, `* - nofile 1000000`, `* - nproc 32768`, `* - as unlimited` – Arvind Jun 19 '17 at 16:16
  • @Arvind could you please still double check if those limits are applied correctly by running `ulimit -a`? – nastra Jun 20 '17 at 00:32
  • @nastra The ulimits are correctly set. – Arvind Jun 20 '17 at 05:34

1 Answer


As I can't comment on the previous answer, here's a small hint:

How is your Cassandra started, and how is it installed? It's possible that your ulimit changes are not affecting the user Cassandra runs as (double-check with ls -l in your data directories to see who owns the files). With the Debian packages, Cassandra runs as the user cassandra and the limits are set as follows:

cassandra01:/etc$ cat security/limits.d/cassandra.conf
# Provided by the cassandra package
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  as       unlimited
cassandra  -  nproc    8096
cassandra01:/etc$
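If in doubt, you can check which user the JVM actually runs as and which limit it really inherited (a sketch; the pgrep pattern assumes the standard CassandraDaemon main class and a process that is still running):

# Which user owns the Cassandra JVM? (pgrep pattern is an assumption)
ps -o user= -p $(pgrep -f CassandraDaemon | head -1)
# Which "Max open files" limit did the process actually inherit?
grep 'open files' /proc/$(pgrep -f CassandraDaemon | head -1)/limits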

How many SSTables do you have in your data directory?
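A rough way to count them (a sketch; the path is taken from the logs in the question, and there is one *-Data.db file per SSTable, each accompanied by several companion files such as Index, Summary, Filter and CompressionInfo):

# Count SSTables under the data directory from the question's logs.
find /cassandra/cass/data -name '*-Data.db' | wc -l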

Try to find out how many files are open until the crash with something like this:

lsof -n | grep java
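Or count them directly and watch the number climb while Cassandra starts (assuming the process runs as the user cassandra; adjust if not):

# Refresh the open-file count every second during startup.
watch -n 1 'lsof -n -u cassandra | wc -l'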
Mandraenke
  • I have also updated the cassandra.conf; around 20000 files get opened before the crash, and I have more than 40000 SSTables – Arvind Jun 20 '17 at 10:03
  • 40000 SSTables are really many - especially if you keep in mind that every SSTable has some files along with it (index, TOC, bloom filter etc.). Can you check what output `cat /proc/sys/fs/file-max` produces? fs.file-max is the system-wide file handle limit - you could be hitting that one. Also, how many column families do you have? Are there many, many small SSTables (*-Data.db)? See the sketch below for both checks. – Mandraenke Jun 21 '17 at 06:49
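A sketch covering both checks from the last comment; the data path is taken from the question, and the 100k size threshold is an arbitrary assumption:

# System-wide file handle limit, and current usage (allocated, free, max):
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# Count suspiciously small SSTables; many tiny *-Data.db files usually
# mean compaction is not keeping up. The -100k threshold is arbitrary.
find /cassandra/cass/data -name '*-Data.db' -size -100k | wc -l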