
I wrote some simple code to test how many files can be open in a Python script:

for i in xrange(2000):
    fp = open('files/file_%d' % i, 'w')
    fp.write(str(i))
    fp.close()

fps = []
for x in xrange(2000):
    h = open('files/file_%d' % x, 'r')
    print h.read()
    fps.append(h)

and I get an exception:

IOError: [Errno 24] Too many open files: 'files/file_509'
jerboa
  • on fedora 14 and python 2.7 I got this error on 1021 – Ruggero Turra Jul 21 '11 at 10:44
  • @wiso, +stdin, stdout, stderr makes 1024 - where have I seen that number before? – John La Rooy Jul 21 '11 at 10:55
  • You should use `try..finally` or `with` to safely close a file. As for your problem: maybe you want to tell us what you are actually trying to do, because what your code does makes no sense to me at all. – schlamar Jul 21 '11 at 11:33
  • @gnibber: `ulimit -n` gives me 1024. I think you need to count also `/usr/lib64/python2.7/atexit.py` and `/home/xyz/.pystartup` as opened files. – Ruggero Turra Jul 21 '11 at 17:45
  • Were you to try this in another language on the same operating system, you'd quickly discover that this is not a Python limitation. – johnsyweb May 07 '15 at 08:49
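
As the comments point out, stdin, stdout and stderr already count against the limit, which is why the failure turns up a few files short of a round number (509 of 512 on Windows, 1021 of 1024 on Linux). On Linux you can see which descriptors the process already holds by reading /proc/self/fd; a minimal sketch (Linux only, not part of the original question):

import os

# Linux-specific: /proc/self/fd contains one entry per descriptor this process has open.
fd_dir = '/proc/self/fd'
for fd in sorted(os.listdir(fd_dir), key=int):
    try:
        target = os.readlink(os.path.join(fd_dir, fd))
    except OSError:
        target = '?'  # the descriptor used to read the directory itself may already be gone
    print('%s -> %s' % (fd, target))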

7 Answers


The number of open files is limited by the operating system. On Linux you can type

ulimit -n

to see what the limit is. If you are root, you can type

ulimit -n 2048

Now your program will run OK (as root), since you have raised the limit to 2048 open files.

John La Rooy
  • +1. I will add that if you want to check the limit from Python code, use `import resource; print resource.getrlimit(resource.RLIMIT_NOFILE)`. – mouad Jul 22 '11 at 11:05
  • Apparently, you don't need to be root to change it (I just tried!) – chuse Oct 08 '15 at 09:43
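
Building on mouad's comment, the same limits are visible (and adjustable) from inside the script via the resource module. A small sketch (Unix only) that raises this process's soft limit as far as the hard limit allows, which, as chuse notes, does not require root:

import resource

# Unix only: the soft limit can be raised up to the hard limit without root.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('soft=%d hard=%d' % (soft, hard))
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print('soft is now %d' % resource.getrlimit(resource.RLIMIT_NOFILE)[0])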

I see the same behavior on Windows when running your code. The limit comes from the C runtime. You can use win32file to change the limit value:

import win32file

print win32file._getmaxstdio()

The above will give you 512, which explains the failure at #509 (plus stdin, stdout and stderr, as others have already stated).

Execute the following and your code will run fine:

win32file._setmaxstdio(2048)

Note that 2048 is the hard limit, though (the hard limit of the underlying C stdio). As a result, calling _setmaxstdio with a value greater than 2048 fails for me.

Dhara
Punit S
  • to make this work, I first had to do this install: `pip install pywin32` – B K Aug 28 '20 at 19:25
  • Also note that as of 2022, the C runtime libraries now support an upper limit of 8192 open files, not 2048. See [Microsoft's documentation on the `_setmaxstdio` function](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/setmaxstdio?view=msvc-170#remarks). – Nick Muise Sep 23 '22 at 13:54

To check or change the limit of open file handles on Linux, you can use the Python resource module:

import resource

# the soft limit imposed by the current configuration
# the hard limit imposed by the operating system.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print 'Soft limit is ', soft 

# A normal user can raise the soft limit up to the hard limit; raising it beyond the hard limit requires root.
resource.setrlimit(resource.RLIMIT_NOFILE, (3000, hard))

On Windows, I do as Punit S suggested:

import platform

if platform.system() == 'Windows':
    import win32file
    win32file._setmaxstdio(2048)
Franck Dernoncourt

Most likely it is because the operating system limits how many files an application can have open at once.

Guffa
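
Because it is an operating-system limit, the failure arrives with a specific error code, errno.EMFILE (24, "Too many open files"), which a script can catch instead of simply crashing. A rough sketch of that idea, in the Python 2 style of the question:

import errno

fps = []
try:
    for x in xrange(2000):
        fps.append(open('files/file_%d' % x, 'r'))
except IOError as e:
    if e.errno == errno.EMFILE:  # errno 24: "Too many open files"
        print('hit the OS limit after %d open files' % len(fps))
    else:
        raise
finally:
    for fp in fps:
        fp.close()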

On Windows one can get or set the limit with the built-in ctypes library:

import ctypes
print("Before: {}".format(ctypes.windll.msvcrt._getmaxstdio()))
ctypes.windll.msvcrt._setmaxstdio(2048)
print("After: {}".format(ctypes.windll.msvcrt._getmaxstdio()))
ababak

Since this is not a Python problem, do this:

for x in xrange(2000):
    with open('files/file_%d' % x, 'r') as h:
        print h.read()

The following is a very bad idea.

fps.append(h)
S.Lott
  • Whether this is a bad idea or not depends on what you're doing. Even Guido himself uses this strategy of holding onto open files in a particular sorting algorithm: http://neopythonic.blogspot.com/2008/10/sorting-million-32-bit-integers-in-2mb.html – EML Feb 18 '15 at 05:32
  • @S.Lott: Can you explain why you think fps.append(h) would be a bad idea (and which alternative would you suggest)? – iCanLearn Sep 14 '15 at 19:54
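
To iCanLearn's question about an alternative: if you genuinely need to read more files than you can afford to keep open at once, one option is to work in batches, so that only a bounded number of handles exist at any moment. A sketch of that idea (the batch size of 100 is an arbitrary illustrative value, not from the original answer):

batch_size = 100  # illustrative value, well below any usual descriptor limit
for start in xrange(0, 2000, batch_size):
    batch = []
    try:
        for x in xrange(start, start + batch_size):
            batch.append(open('files/file_%d' % x, 'r'))
        for fp in batch:
            print(fp.read())
    finally:
        # close the whole batch before moving on, even if an open() or read() failed
        for fp in batch:
            fp.close()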

The append is needed so the garbage collector does not clean up and close the files.

andrew pate
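
That behaviour is specific to CPython, where reference counting closes a file object as soon as the last reference to it disappears; other implementations (PyPy, for example) may keep the file open until the garbage collector eventually runs. A small sketch that usually shows the effect on CPython, reusing the files created in the question:

import os

f = open('files/file_0', 'r')
first_fd = f.fileno()
f = open('files/file_1', 'r')  # on CPython the first file object is freed, and closed, here
second_fd = f.fileno()
f.close()
print(first_fd == second_fd)  # typically True: the old descriptor number is reused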