
I am using treq (https://github.com/twisted/treq) to query another API from my web service. Today, while stress testing my own service, it showed an error:

twisted.internet.error.DNSLookupError: DNS lookup failed: address 'api.abc.com' not found: [Errno 24] Too many open files.

But the problem is that I don't open any files anywhere in my code. I suspect it could be caused by the API I query (api.abc.com) going down or blocking me, since my stress test could look like a DDoS to that endpoint. Still, in that case shouldn't I get something like a connection refused error? I don't know why it produces this "Too many open files" error. Or is it caused by creating too many query threads?

JLTChiu
    "files" really means file descriptors and includes things like sockets, so if you're opening a lot of connections you could run into this problem still – Eric Renouf Sep 16 '16 at 18:14

2 Answers


"Files" include network sockets, which are a type of file on Unix-based systems. The maximum number of open files is configurable with ulimit -n, and the limit is inherited by child processes:

# Check current limit
$ ulimit -n
256

# Raise limit to 2048
# Only affects processes started from this shell
$ ulimit -n 2048

$ ulimit -n
2048

It is not surprising to run out of file handles and have to raise the limit. But if the limit is already high, you may be leaking file handles (not closing them quickly enough). In garbage-collected languages like Python, the finalizer does not always close files fast enough, which is why you should be careful to use with blocks or other systems to close the files as soon as you are done with them.
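For example, a minimal sketch in Python (the file name and host below are placeholders, not anything from the question): relying on garbage collection leaves the descriptor open until the finalizer happens to run, while a with block releases it deterministically.

import socket

# Leaky pattern: the descriptor stays open until the garbage collector
# finalizes the object, which may be much later under load.
f = open("data.txt")
print(f.read())

# Deterministic cleanup: the descriptor is closed as soon as the block
# exits, even if an exception is raised inside it.
with open("data.txt") as f:
    print(f.read())

# The same applies to sockets, which count against the open-file limit.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(1024))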

Dietrich Epp

I wanted to build on @Dietrich Epp's answer. Setting ulimit -n changes the limit for the current shell session only. If you would like the change to persist across all sessions (for example on an EC2 instance), you need to edit:

vim /etc/security/limits.conf

and add soft and hard limits for the number of open descriptors per user. As an example, you can paste this snippet into the file above:

*         hard    nofile      500000
*         soft    nofile      500000
root      hard    nofile      500000
root      soft    nofile      500000

This will set the limit to 500000 for every new session. After editing, sign out and back in (or reboot if you can; a reboot is preferable). Afterwards, you can run ulimit -n to confirm that the limit has been applied.
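If you also want to confirm the limit from inside a running Python process (an illustrative check, not part of the original setup; the resource module is Unix-only), the standard library exposes it:

import resource

# Returns the (soft, hard) pair for the maximum number of open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")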

dave4jr
    why is the default value so low? Is there any drawback to setting a high value for ulimit? – wsdzbm Mar 11 '21 at 15:55
    @ddzzbbwwmm Honestly, it really depends on your application. Some applications rely on a high number of open connections to accomplish their task, while most are fine under the default. The drawback to setting a high ulimit is that it can cover up issues that would otherwise have been caught, such as a rogue process using a lot of resources; the limit acts as a kind of safeguard. As long as you understand your application and can monitor things, raising it can be useful. – dave4jr Mar 12 '21 at 08:44