We have a Java service that needs to open many files. Previously we set the limit of open files to 100,000, which turned out not to be sufficient. We are considering raising it to 200,000. I am wondering what the downside of setting such a large open-file limit would be.
2 Answers
Since the Linux kernel stores each file descriptor as an integer, this results in 200,000 integers somewhere in memory. Assuming four bytes per integer, that's less than a megabyte, which is hardly anything on modern hardware. I like this answer, which explains how file descriptors work in Unix systems.
The limit is there to stop a rogue process from taking up all the resources. If you have a legitimate reason to open 200,000 files, it's not a problem.
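If you want to see how close the service actually gets to the limit, you can ask the JVM itself. This is a minimal sketch, assuming a HotSpot/OpenJDK-style JVM on a Unix-like OS where the operating-system bean can be cast to com.sun.management.UnixOperatingSystemMXBean; the class name FdHeadroom is just for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdHeadroom {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            long open = unixOs.getOpenFileDescriptorCount(); // fds currently open in this process
            long max = unixOs.getMaxFileDescriptorCount();   // the limit the JVM actually sees
            System.out.printf("open fds: %d / %d (%.1f%% of the limit)%n",
                    open, max, 100.0 * open / max);
        } else {
            System.out.println("Not a Unix-like platform, no fd counts available.");
        }
    }
}
```

Logging this periodically also tells you whether 200,000 is comfortable headroom or just the next cliff.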

The first thing you should do is investigate for resource leaks. Chances are excellent that you don't need that many files open at the same time and the app is just leaking file descriptors over time. A big part of having the limit at all is catching that problem before it kills the machine.
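One quick way to see where the descriptors are going on Linux is to look at /proc/self/fd from inside the process. This is a rough sketch rather than production code; the class name and the way targets are grouped into buckets are illustrative assumptions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FdSnapshot {
    public static void main(String[] args) throws IOException {
        // Each entry in /proc/self/fd is a symlink from an fd number to whatever it refers to.
        try (Stream<Path> fds = Files.list(Path.of("/proc/self/fd"))) {
            Map<String, Long> counts = fds
                    .map(FdSnapshot::target)
                    .map(FdSnapshot::bucket)
                    .collect(Collectors.groupingBy(b -> b, Collectors.counting()));
            counts.entrySet().stream()
                    .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                    .forEach(e -> System.out.printf("%8d  %s%n", e.getValue(), e.getKey()));
        }
    }

    private static String target(Path fd) {
        try {
            return Files.readSymbolicLink(fd).toString();
        } catch (IOException e) {
            return "<unreadable>";
        }
    }

    // Collapse individual targets into buckets: sockets, pipes, and per-directory file counts.
    private static String bucket(String target) {
        if (target.startsWith("socket:")) return "socket";
        if (target.startsWith("pipe:")) return "pipe";
        if (target.startsWith("anon_inode:")) return target;
        int slash = target.lastIndexOf('/');
        return slash > 0 ? target.substring(0, slash) + "/*" : target;
    }
}
```

If the count for one bucket keeps climbing between snapshots, that is usually your leak; if it stays flat at a large number of genuinely needed files, bumping the limit is the right call.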
That said, assuming there is a legit reason to bump the limit, there is no problem. Modern systems easily have hundreds of thousands of open files in total and don't break a sweat. Having them all in one process's fd table is not a problem either, for the most part. However, the more threads are opening and closing fds concurrently, the more you run into lock contention on that table, which may or may not turn out to be a problem.
tl;dr investigate for leaks first, bump without fear