6

The default limit for the maximum number of open files on Mac OS X is 256 (ulimit -n), and my application needs about 400 file handles.

I tried to change the limit with setrlimit(), but even though the function executes successfully, I'm still limited to 256.

Here is the test program I use:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit rlp;

  FILE *fp[10000];
  int i;

  getrlimit(RLIMIT_NOFILE, &rlp);
  /* rlim_t is 64-bit on Mac OS X; cast so %d prints a sane value */
  printf("before %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  rlp.rlim_cur = 10000;
  setrlimit(RLIMIT_NOFILE, &rlp);

  getrlimit(RLIMIT_NOFILE, &rlp);
  printf("after %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  /* open the same file repeatedly until fopen() fails */
  for (i = 0; i < 10000; i++) {
    fp[i] = fopen("a.out", "r");
    if (fp[i] == 0) { printf("failed after %d\n", i); break; }
  }

  return 0;
}

and the output is:

before 256 -1
after 10000 -1
failed after 253

I can't ask the people who use my application to poke around in a file under /etc or the like; the application needs to raise the limit by itself.

acemtp
  • Why do you need so many files open simultaneously? – sbooth Jul 02 '10 at 15:19
  • Not that it should matter, but are you testing this on the server edition or the desktop edition of OS X? I can imagine that the Apple folks decided to limit how many files a desktop app can open, since opening many is usually a server-oriented task... – Evan Teran Jul 02 '10 at 22:12

6 Answers

5

rlp.rlim_cur = 10000;

Two things.

1st: LOL. Apparently you have found a bug in Mac OS X's stdio. If I fix your program up, add error handling, and replace fopen() with the open() syscall, I can easily reach the limit of 10000 (which is 240 fds below the OPEN_MAX of 10240 on my 10.6.3 system).

2nd: see man setrlimit. The max-open-files case has to be treated specially with respect to OPEN_MAX: rlim_cur must not be set above it.
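
For illustration, here is a minimal sketch of that variant; the error handling and the OPEN_MAX cap are my additions, and a.out is the same test file the question uses:

#include <stdio.h>
#include <fcntl.h>
#include <limits.h>          /* OPEN_MAX */
#include <sys/resource.h>

int main(void)
{
  struct rlimit rl;
  int fd, count = 0;

  getrlimit(RLIMIT_NOFILE, &rl);

  /* per man setrlimit: rlim_cur for RLIMIT_NOFILE must not exceed OPEN_MAX */
  rl.rlim_cur = 10000 < OPEN_MAX ? 10000 : OPEN_MAX;
  if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
    perror("setrlimit");
    return 1;
  }

  /* the open() syscall itself has no trouble reaching the raised limit */
  while ((fd = open("a.out", O_RDONLY)) >= 0)
    count++;
  printf("opened %d descriptors before failure\n", count);
  return 0;
}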

Dummy00001
  • 1
    Thanks for the answer. Are you serious when you say it could be a bug in stdio on Mac OS X, or is it a joke? Is the only solution to use the syscall instead of the standard C function? – acemtp Jul 02 '10 at 21:43
  • @acemtp: limitation is probably a better word. The standard only requires libc to guarantee that you can open 8 files at a time (including `stdin`/`stdout`/`stderr`!). It would be an unusual limitation, but not unheard of. – Evan Teran Jul 02 '10 at 22:11
  • 1
    @acemtp, @evan: well, stdio on Linux has no problems coping with whatever I throw at it, and I personally would qualify that as a bug. 8 files at once?? stdin, stdout, stderr: 3 are busy already. An application log file + trace file leaves only 3 free... Silly, and a bug, if you ask me. – Dummy00001 Jul 03 '10 at 08:41
5

etresoft found the answer on the Apple discussion board:

The whole problem here is your printf() call. When you call printf(), stdio initializes its internal data structures at their current size. Then you call setrlimit() to try to adjust those sizes; that fails because you have already been using those internal structures through your printf(). If you use two rlimit structures (one for before and one for after) and don't print either until after calling setrlimit(), you will find that you can change the limits of the current process even in a command-line program. The maximum value is 10240.
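
In code, a minimal sketch of that fix (my rendering of etresoft's description, not his exact program):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit before, after;

  getrlimit(RLIMIT_NOFILE, &before);

  after = before;
  after.rlim_cur = 10000;            /* effective maximum is 10240 */
  setrlimit(RLIMIT_NOFILE, &after);

  getrlimit(RLIMIT_NOFILE, &after);

  /* stdio is touched for the first time only now, after the limit was raised */
  printf("before %d %d\n", (int)before.rlim_cur, (int)before.rlim_max);
  printf("after  %d %d\n", (int)after.rlim_cur, (int)after.rlim_max);
  return 0;
}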

acemtp
3

For some reason (perhaps binary compatibility), you have to define _DARWIN_UNLIMITED_STREAMS before including <stdio.h>:

#define _DARWIN_UNLIMITED_STREAMS

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit rlp;

  FILE *fp[10000];
  int i;

  getrlimit(RLIMIT_NOFILE, &rlp);
  printf("before %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  rlp.rlim_cur = 10000;
  setrlimit(RLIMIT_NOFILE, &rlp);

  getrlimit(RLIMIT_NOFILE, &rlp);
  printf("after %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  for (i = 0; i < 10000; i++) {
    fp[i] = fopen("a.out", "r");
    if (fp[i] == 0) { printf("failed after %d\n", i); break; }
  }

  return 0;
}

prints

before 256 -1
after 10000 -1
failed after 9997

This feature appears to have been introduced in Mac OS X 10.6.

2

This may be a hard limitation of your libc. Some versions of Solaris have a similar limitation, because they store the fd as an unsigned char in the FILE struct. If this is the case for your libc as well, you may not be able to do what you want.

As far as I know, things like setrlimit only affect how many files you can open with open (fopen is almost certainly implemented in terms of open). So if this limitation is at the libc level, you will need an alternative solution.

Of course, you could always skip fopen and instead use the open system call, which is available on just about every variant of Unix.

The downside is that you have to use write and read instead of fwrite and fread, and those don't do things like buffering (that's all done in your libc, not by the OS itself), so it could end up being a performance bottleneck. A rough sketch of the approach follows.
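
Here is a minimal sketch, reading the same a.out test file as the question does; the 4 KB buffer size is an arbitrary choice of mine:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
  char buf[4096];
  ssize_t n;
  int fd = open("a.out", O_RDONLY);

  if (fd < 0) {
    perror("open");
    return 1;
  }
  /* no stdio buffering here: every read() is a real system call */
  while ((n = read(fd, buf, sizeof buf)) > 0) {
    /* process n bytes of buf */
  }
  close(fd);
  return 0;
}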

Can you describe the scenario that requires 400 files open **simultaneously**? I am not saying that there is no case where that is needed. But if you describe your use case more clearly, perhaps we can recommend a better solution.

Evan Teran
  • libc limit: Yes. See my comment. Changing the program to use open() instead of fopen() fixes the problem. On Linux, btw, it works like a charm, after the obvious fix of replacing the 10000 with rlp.rlim_max (though on Mac OS X even that differs, since the OPEN_MAX cap has to be checked too). Scenario where you need 400 fds: I maintain a specialized network server which also backs data up to disk. Seeing 2K sockets and open files in use isn't uncommon. – Dummy00001 Jul 02 '10 at 20:44
  • @Dummy00001: ok, that is certainly **a** scenario, but having acemtp describe exactly what he is trying to do could still help :-P. But it looks like we have found the nature of the problem. – Evan Teran Jul 02 '10 at 21:10
0

I know this sounds like a silly question, but do you really need 400 files open at the same time? By the way, are you running this code as root?

Paolo Perego
  • Yes, I need 400 files open at the same time, and no, I'm not running as root. As the man page says, since I only change the cur limit, not the max limit, I don't have to be root. – acemtp Jul 02 '10 at 15:20
  • 3
    But wouldn't the max limit cap the cur limit? – nategoose Jul 02 '10 at 15:36
-1

Mac OS doesn't let us change the limit as easily as many other Unix-based operating systems do. We have to create two files,

/Library/LaunchDaemons/limit.maxfiles.plist and /Library/LaunchDaemons/limit.maxproc.plist, describing the max file and max proc limits. The ownership of the files needs to be changed to 'root:wheel'.
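
For illustration, a limit.maxfiles.plist along the lines the linked post describes might look like this; the 64000/524288 soft/hard values are examples, not required values:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>64000</string>
      <string>524288</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>ServiceIPC</key>
    <false/>
  </dict>
</plist>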

This alone doesn't solve the problem: by default, recent versions of Mac OS X enable System Integrity Protection ('csrutil'), and we need to disable it. To do that, we have to reboot the Mac into recovery mode and disable csrutil from the terminal there.

After that, we can change the max open file limit from the terminal itself (even in normal boot mode).

This method is explained in detail at the following link: http://blog.dekstroza.io/ulimit-shenanigans-on-osx-el-capitan/

It works on OS X El Capitan and macOS Sierra.

ARMV