
Is there a way to cause errno 23 (ENFILE: file table overflow) on purpose?

I am doing socket programming and I want to check whether creating too many sockets can cause this error. As I understand it, a created socket is treated as a file descriptor, so it should count towards the system limit on open files.
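For a quick sanity check (Linux-specific, relying on procfs), counting the entries in /proc/self/fd before and after creating a socket shows that it does occupy a descriptor:

import os
import socket

def open_fd_count():
    # each entry in /proc/self/fd is one open file descriptor
    return len(os.listdir('/proc/self/fd'))

before = open_fd_count()
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print "open fds before: {0}, after: {1}".format(before, open_fd_count())
s.close()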

Here is the part of my Python script that creates the sockets:

import resource
import socket

def enfile():
    nofile_soft_limit = 10000
    nofile_hard_limit = 20000

    # raise the per-process limit first, so the system-wide limit
    # (ENFILE) is reached before the per-process one (EMFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (nofile_soft_limit, nofile_hard_limit))

    sock_table = []
    for i in range(0, 10000):
        print "Creating socket number {0}".format(i)
        try:
            # the protocol argument should be IPPROTO_UDP (SOL_UDP happens
            # to share the same value on Linux, but is a socket-option level)
            temp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        except socket.error as msg:
            print 'Failed to create socket. Error code: {0}, Error message: {1}'.format(msg.errno, msg.strerror)
            break
        sock_table.append(temp)

With setrlimit() I raise the process's limit on open files to a high value, so that I don't hit errno 24 (EMFILE) first.
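To double-check that the new limits actually took effect, the values can be read back with getrlimit():

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print "RLIMIT_NOFILE: soft={0}, hard={1}".format(soft, hard)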

I have tried two approaches: 1) Per-user limit by changing /etc/security/limits.conf

root      hard    nofile      5000
root      soft    nofile      5000

(logged in with a new session after that)

2) System-wide limit by changing /etc/sysctl.conf

fs.file-max = 5000
and then ran sysctl -p to apply the changes.
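To watch the system-wide counter while the script runs, /proc/sys/fs/file-nr can be polled (Linux-specific; the three fields are allocated handles, free handles, and the fs.file-max value):

# Linux-specific: system-wide file handle usage
with open('/proc/sys/fs/file-nr') as f:
    allocated, free, maximum = f.read().split()
print "allocated={0} free={1} max={2}".format(allocated, free, maximum)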

My script easily creates 10k sockets despite the per-user and system-wide limits, and it ends with errno 24 (EMFILE).

Is it possible to achieve my goal? I am using two OSes: CentOS 6.7 and Fedora 20. Maybe there are some other settings I need to change on these systems?

Thanks!

karlos88

2 Answers


ENFILE only happens when the system-wide limit is reached, whereas the settings you've tried so far are per-process, so they relate only to EMFILE. For more details, including which system-wide settings to change to trigger ENFILE, see this answer: https://stackoverflow.com/a/24862823/4323 as well as https://serverfault.com/questions/122679/how-do-ulimit-n-and-proc-sys-fs-file-max-differ
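As a minimal sketch of how to tell the two apart in Python (using only the standard errno module, nothing specific to your script):

import errno
import socket

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
except socket.error as e:
    if e.errno == errno.EMFILE:
        print "per-process limit reached (EMFILE)"
    elif e.errno == errno.ENFILE:
        print "system-wide limit reached (ENFILE)"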

John Zwinck
• Isn't fs.file-max from approach 2) a system-wide setting? The RHEL documentation says so: [link](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/chap-Oracle_9i_and_10g_Tuning_Guide-Setting_File_Handles.html) and it's also described as system-wide here: [link](http://stackoverflow.com/questions/24862733/difference-between-linux-errno-23-and-linux-errno-24/24862823#24862823) But the system seems not to care about the limit anyway. – karlos88 Sep 25 '15 at 08:24

You should look for the answer in the kernel sources.

The socket() call returns ENFILE from __sock_create() when sock_alloc() returns NULL, which happens only when it can't allocate a new inode.

You can use:

df -i

to check your inode usage.

Unfortunately, the inode limit can't be changed dynamically. Generally, the total number of inodes and the space reserved for them are set when the filesystem is first created.

Solution?

Modern filesystems like Btrfs and XFS allocate inodes dynamically to avoid such limits, so if you are using one of them this approach may be impossible.

If your disk is on LVM, decreasing the size of the volume could help.

But if you want to reliably simulate the situation from your post, create an enormous number of tiny files, 1 byte each; you will run out of inodes long before you run out of disk space. Then you can try to create a socket. A rough sketch of that approach is below.
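This sketch assumes /mnt/test is a hypothetical small, dedicated filesystem you can safely fill (empty files work just as well, since each file still consumes an inode):

import errno
import os

i = 0
try:
    while True:
        # every file, even an empty one, consumes one inode
        open('/mnt/test/f{0}'.format(i), 'w').close()
        i += 1
except (IOError, OSError) as e:
    print 'Stopped after {0} files: errno {1} ({2})'.format(i, e.errno, os.strerror(e.errno))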

If I am wrong, please correct me.

WojtekCh