45

I'm opening lots of files with fopen() in VC++ but after a while it fails.

Is there a limit to the number of files you can open simultaneously?

Jimmy J
  • I suppose you could start a new instance of yourself (the process) after 2048 files... – Adam May 17 '10 at 22:33
  • ...or have a separate executable that does your file operations that takes a textfile location as a parameter (each line having operation and file location) and works with, say, 500 files at a time. – Adam May 17 '10 at 22:36
  • Resources are always limited (whatever the computer and the operating system). So of course there is a limit. The better question is how to query or increase that limit. – Basile Starynkevitch Oct 25 '17 at 09:18

7 Answers

64

The C run-time libraries impose a limit of 512 on the number of files that can be open at any one time. Attempting to open more than this maximum number of file descriptors or file streams causes program failure. Use _setmaxstdio to change the limit. More information about this can be read here.

You may also have to check whether your version of Windows supports the upper limit you are trying to set with _setmaxstdio. For more information on _setmaxstdio, check here.

Information on this subject for VS 2015 can be found here.
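
As a quick illustration of the functions mentioned above, here is a minimal sketch (mine, not taken from the linked documentation) that queries the current limit with _getmaxstdio and tries to raise it; the target value of 2048 is just an example:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* _getmaxstdio() reports the current per-process limit on
       simultaneously open stdio streams. */
    printf("Current limit: %d\n", _getmaxstdio());

    /* _setmaxstdio() returns the new maximum on success and -1 on
       failure (e.g. if the requested value is outside the range
       supported by this CRT version). */
    if (_setmaxstdio(2048) == -1) {
        fprintf(stderr, "Could not raise the stdio limit\n");
        return EXIT_FAILURE;
    }

    printf("New limit: %d\n", _getmaxstdio());
    return EXIT_SUCCESS;
}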

R Sahu
Stack Programmer
  • Interesting. Does this limit apply to the executable? Thread? Something else? – Les May 15 '09 at 19:18
  • Also: It's not possible to _setmaxstdio beyond 2048 open files, at least with the current Windows CRT. If you need more open files than that, you will have to use CreateFile (http://msdn.microsoft.com/en-us/library/aa363858.aspx) and related Win32 functions. However, a design which requires that many open files is probably wrong... – ephemient May 15 '09 at 19:26
  • Since the POSIX-ish _open() function is also a CRT function, this also applies to using that as well as the stdio function fopen(). – Dana Robinson Apr 14 '12 at 23:27
  • I would like to highlight the last sentence by @ephemient: **a design which requires that many open files is probably wrong** – kevin Aug 20 '14 at 18:50
  • As of 2022, the default limit for C runtime libraries is still 512, but it can now be increased up to 8192 using `_setmaxstdio`. See Microsoft's [documentation on the `_setmaxstdio` function.](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/setmaxstdio?view=msvc-170#remarks) – Nick Muise Sep 23 '22 at 13:50
15

In case anyone else is unclear as to what the limit applies to, I believe that this is a per-process limit and not system-wide.

I just wrote a small test program to open files until it fails. It gets to 2045 files before failing (2045 + STDIN + STDOUT + STDERR = 2048), then I left that open and ran another copy.

The second copy showed the same behaviour, meaning I had at least 4096 files open at once.
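
For reference, here is a minimal sketch of the kind of test program described above (my reconstruction, not the original code; the file-name pattern is made up for illustration):

#include <stdio.h>

int main(void)
{
    char name[64];
    int count = 0;

    /* Open numbered files until fopen() fails; the FILE pointers are
       deliberately leaked so the streams stay open. */
    for (;;) {
        sprintf(name, "testfile_%d.tmp", count);
        if (fopen(name, "w") == NULL)
            break;
        ++count;
    }

    /* The count is the per-process CRT limit minus the three standard
       streams, so the exact number depends on the configured limit. */
    printf("Opened %d files before fopen() failed\n", count);

    getchar();  /* block here so a second instance can run in parallel */
    return 0;
}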

Drarok
12

If you use the standard C/C++ POSIX libraries with Windows, the answer is "yes", there is a limit.

Interestingly, however, the limit is imposed by the C/C++ run-time libraries that you are using, not by the operating system itself.

I came across the following bug report from MySQL (http://bugs.mysql.com/bug.php?id=24509), where they were dealing with the same problem with the number of open files.

However, Paul DuBois explained that the problem could effectively be eliminated in Windows by using ...

Win32 API calls (CreateFile(), WriteFile(), and so forth) and the default maximum number of open files has been increased to 16384. The maximum can be increased further by using the --max-open-files=N option at server startup.

Naturally, you could have a theoretically large number of open files by using a technique similar to database connection pooling, but that would have a severe effect on performance.

Indeed, opening a large number of files could be bad design. However, some situations require it. For example, if you are building a database server that will be used by thousands of users or applications, the server will necessarily have to open a large number of files (or suffer a performance hit by using file-descriptor pooling techniques).
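
To make the Win32 route concrete, here is a hedged sketch of opening and writing a file through CreateFile() and WriteFile() directly, bypassing the CRT's stdio limit (the file name is illustrative):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open (or create) a file with the raw Win32 API; handles opened
       this way are not counted against the CRT's stdio limit. */
    HANDLE h = CreateFileA(
        "example.txt",           /* file name (illustrative)    */
        GENERIC_WRITE,           /* desired access              */
        0,                       /* no sharing                  */
        NULL,                    /* default security attributes */
        CREATE_ALWAYS,           /* create or truncate          */
        FILE_ATTRIBUTE_NORMAL,   /* normal file                 */
        NULL);                   /* no template file            */

    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD written = 0;
    const char msg[] = "hello\n";
    WriteFile(h, msg, sizeof msg - 1, &written, NULL);
    CloseHandle(h);
    return 0;
}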

Chris Seymour
luiscolorado
7

Yes, there are limits, depending on the access level you use when opening the files. You can use _getmaxstdio to find the limit and _setmaxstdio to change it.

NorthCat
Malcolm Post
4

I don't know where Paulo got that number from. In Windows NT-based operating systems, the number of file handles opened per process is basically limited by physical memory; it's certainly in the hundreds of thousands.

Larry Osterman
2

I came across the same problem, but using Embarcadero C++Builder from RAD Studio 10.2. Its C run-time doesn't seem to provide _getmaxstdio or _setmaxstdio, only some macros, and their default limit is much lower than what is reported here for other runtimes:

stdio.h:

/* Number of files that can be open simultaneously
*/
#if defined(__STDC__)
#define FOPEN_MAX (_NFILE_)
#else
#define FOPEN_MAX (_NFILE_)
#define SYS_OPEN  (_NFILE_)
#endif

_nfile.h:

#if defined(_WIN64)
#define _NFILE_ 512
#else
#define _NFILE_ 50
#endif
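
As an aside, the FOPEN_MAX macro those headers feed is standard C, so a short sketch like the following can check it on any toolchain; note that the standard only requires FOPEN_MAX to be the guaranteed minimum, so the actual runtime limit may be higher:

#include <stdio.h>

int main(void)
{
    /* FOPEN_MAX is the number of streams the implementation guarantees
       can be open simultaneously; the real limit may be higher. */
    printf("FOPEN_MAX = %d\n", FOPEN_MAX);
    return 0;
}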
Thorsten Schöning
0

Yes, there is a limit.

The limit depends on the OS, and memory available.

In the old DOS, the limit was 255 simultaneously opened files.

In Windows XP, the limit is higher (I believe it's 2,048 as stated by MSDN).

Paulo Santos
  • This sounds like a limitation of the C runtime, and not of the OS. I assume that if you use the win32 api directly you can open more files. – CodesInChaos Aug 08 '11 at 09:44