  1. What is the difference between these two Linux errors in errno.h, 23 and 24?

    I tried two different sites but can't understand the difference between the two.


    [EMFILE]
    Too many open files.
    [ENFILE]
    Too many files open in system.
    

    # define ENFILE      23  /* File table overflow */
    # define EMFILE      24  /* Too many open files */
    

  2. Also, I am getting errno 24 and the socket call fails on the 974th attempt (AF_INET UDP datagram socket).

    When I do a cat /proc/sys/fs/file-max I see a value of 334076, while ulimit -n shows 1024.

    Any idea what can be done to increase the limit?

Mateusz Piotrowski
badri

2 Answers


For 1) Both error codes describe the same underlying condition: too many open files. EMFILE means your own process has too many open file descriptors (the per-process limit, what ulimit -n shows). ENFILE means the kernel's system-wide file table is full (the fs.file-max limit).

Wojtek Surowka

You can increase the system-wide maximum number of open files / file descriptors:

sysctl -w fs.file-max=100000

Or open

/etc/sysctl.conf

and append/change fs.file-max to the number you need:

fs.file-max = 100000

Then run

sysctl -p

to reload the new settings.

If you don't want to set system-wide FD (file-descriptor) limits, you can set the user-level FD limits.

You need to edit /etc/security/limits.conf file

And for user YOUR_USER, add these lines:

YOUR_USER soft nofile 4096
YOUR_USER hard nofile 10240

to set the soft and hard limits for user YOUR_USER.
Save and close the file.

To see the hard and soft limits for user YOUR_USER:

su - YOUR_USER

ulimit -Hn
ulimit -Sn
Stefan Steiger
  • But I am already seeing values of 1024 for Hn and Sn, so it should not fail at the 974th creation, right? – badri Jul 21 '14 at 10:32
  • @badri: Not unless the system has another 50 fds open. – Stefan Steiger Jul 21 '14 at 10:33
  • @badri: Are you sure you didn't forget anywhere to close any stream/socket/connection ? – Stefan Steiger Jul 21 '14 at 10:37
  • Hi Quandary - I need them all in parallel; I will be closing them later. But as you said, I will try increasing it to 4096 first and see if it helps. – badri Jul 21 '14 at 10:46
  • Changing limits in /etc/security/limits.conf helped. Thanks! – badri Jul 21 '14 at 16:47
  • Quandary - I was mistaken, sorry. I thought changing the limits helped, but cat /proc/3688/limits | grep files still shows Max open files 1024 1024 files. Even sysctl and sysctl -p have not helped. I tried setting larger values with ulimit -Sn (4096/8192) but still no effect. – badri Jul 25 '14 at 11:16
  • http://stackoverflow.com/questions/3734932/max-open-files-for-working-process sent by Wojtek also suggests that I have to use setrlimit within the process. http://pubs.opengroup.org/onlinepubs/009695399/functions/getrlimit.html. Does that mean I have to invoke it from within the process (i.e., the C file itself)? Surely there should be some Linux command to set these limits? – badri Jul 25 '14 at 11:19