
What is the upper limit on file descriptors that can be used on a Linux system (specifically Ubuntu 10.04)?

I am using Ubuntu 10.04 (64-bit); the server's CPU architecture is x86_64 and the client's is i686. Right now I have increased my fd limit to 400000.

  • What are the possible side effects of using a large number of file descriptors?
  • How can I find out the number of file descriptors used by a given process?

Thanks

nebi
  • Do you want per-process or system-wide fd limits? What is the reason for increasing the fd limit? – askmish Oct 19 '12 at 09:18
  • Maybe both. I just want my client application, i.e. httperf, to generate requests at a higher rate, say 30000 conn/sec for 300 secs – nebi Oct 19 '12 at 09:44
  • Well, in that case, you are fine using ulimit for setting per-process fd limits (see the sketch below) and, as in my answer, you can use file-max or file-nr (whichever suits you better) for system-wide fd limits. You might also want to increase the `/proc/sys/kernel/pid_max` limit if you are going to run several processes. – askmish Oct 19 '12 at 10:04
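As an aside, the C equivalent of `ulimit -n` is getrlimit(2)/setrlimit(2) with RLIMIT_NOFILE. A minimal sketch, using the 400000 target from the question (this is an illustration, not a recommendation; raising the hard limit typically requires root/CAP_SYS_RESOURCE):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu, hard: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = 400000;  /* soft limit: what the process can actually open */
    rl.rlim_max = 400000;  /* hard limit: ceiling for future soft-limit raises */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");  /* EPERM without CAP_SYS_RESOURCE, typically */
        return 1;
    }
    return 0;
}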

2 Answers


You want to look at /proc/sys/fs/file-max instead.
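A minimal C sketch of reading both values, assuming a standard /proc mount (the three file-nr fields are explained in the documentation quoted below):

#include <stdio.h>

int main(void)
{
    FILE *f;
    unsigned long max, allocated, unused, limit;

    /* System-wide maximum number of file handles. */
    f = fopen("/proc/sys/fs/file-max", "r");
    if (f != NULL) {
        if (fscanf(f, "%lu", &max) == 1)
            printf("file-max: %lu\n", max);
        fclose(f);
    }

    /* Current state: allocated, free, and maximum handles. */
    f = fopen("/proc/sys/fs/file-nr", "r");
    if (f != NULL) {
        if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &limit) == 3)
            printf("file-nr: %lu allocated, %lu free, %lu max\n",
                   allocated, unused, limit);
        fclose(f);
    }
    return 0;
}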

From the recent linux/Documentation/sysctl/fs.txt:

file-max and file-nr:

The kernel allocates file handles dynamically, but as yet it doesn't free them again.

The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

Attempts to allocate more file descriptors than file-max are reported with printk; look for "VFS: file-max limit reached".

The 2.6 kernel uses a rule of thumb to set file-max based on the amount of memory in the system. A snippet from fs/file_table.c in the 2.6 kernel:

/*
 * One file with associated inode and dcache is very roughly 1K.
 * Per default don't use more than 10% of our memory for files. 
 */ 

n = (mempages * (PAGE_SIZE / 1024)) / 10;
files_stat.max_files = max_t(unsigned long, n, NR_FILE);

files_stat.max_files is the value reported as fs.file-max. Since one file handle is taken to cost roughly 1 KB and the kernel budgets 10% of memory for files, this works out to about 100 file handles for every 1 MB of RAM.
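To see what that heuristic yields on a given machine, the computation can be roughly replicated in userspace with sysconf(3). This is a sketch, not kernel code; NR_FILE is hard-coded to 8192, its value in 2.6-era fs.h, since userspace has no way to query it:

#include <stdio.h>
#include <unistd.h>

/* NR_FILE is the kernel's floor for max_files; 8192 in 2.6-era kernels. */
#define NR_FILE 8192

int main(void)
{
    long mempages = sysconf(_SC_PHYS_PAGES);
    long pagesize = sysconf(_SC_PAGESIZE);

    /* Same arithmetic as fs/file_table.c: ~1K per file, 10% of memory. */
    unsigned long n = (unsigned long)mempages
                      * ((unsigned long)pagesize / 1024) / 10;

    if (n < NR_FILE)
        n = NR_FILE;
    printf("estimated fs.file-max: %lu\n", n);
    return 0;
}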

askmish
  • In /proc/sys/fs/file-max, I have increased the value to 400000. Can I increase it more? If yes, then by how much? – nebi Oct 19 '12 at 09:48
  • The value you need depends on exactly what kind of server you are running and what load it is under. In general, you should just increase it until you don't run out of file handles any more. Check the updated answer for more details. – askmish Oct 19 '12 at 10:20

Each file descriptor takes up some kernel memory, so at some point you'll exhaust it. That being said, up to a hundred thousand file descriptors is not unheard of for server deployments that use event-based (epoll on Linux) architectures, so 400k is not completely unreasonable.

For the second question, see the /proc/PID/fd/ or /proc/PID/fdinfo/ directories.
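A minimal sketch of counting them in C by listing /proc/PID/fd (equivalent to what `ls /proc/PID/fd | wc -l` does); the PID is taken from argv[1]:

#include <stdio.h>
#include <dirent.h>

int main(int argc, char **argv)
{
    char path[64];
    struct dirent *de;
    DIR *dir;
    int count = 0;

    /* Default to "self"; note the DIR handle itself then adds one entry. */
    snprintf(path, sizeof(path), "/proc/%s/fd", argc > 1 ? argv[1] : "self");

    dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] != '.')  /* skip "." and ".." */
            count++;
    }
    closedir(dir);

    printf("%s: %d open file descriptors\n", path, count);
    return 0;
}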

janneb