3

I deployed a simple Node.js Express app to Elastic Beanstalk.
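
For reference, the app was essentially a bare-bones Express server along the lines of the sketch below (this is a hypothetical reconstruction, not the exact code; Elastic Beanstalk's Node.js platform supplies the port nginx proxies to via the PORT environment variable):

// app.js - minimal Express server, roughly equivalent to what was deployed (hypothetical)
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from Elastic Beanstalk');
});

// Elastic Beanstalk sets PORT to the port nginx forwards requests to
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}`));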

Using the loadtest npm package, I hit up the EC2 instance directly (bypassing the ELB) with 150 requests per second, and that's when things started getting hairy.

loadtest -c 10 --rps 150 http://{EC2-IP}/

The server would reset the connection immediately (ECONNRESET), rendering the app unreachable. It refused to support more than X concurrent connections.

Elad Nava

2 Answers

14

Upon diving into the EC2 server's logs, I found the following log message spammed across the file:

1024 worker_connections are not enough

I then understood that I need to tweak the worker_connections declaration in /etc/nginx/nginx.conf, as well as increase the OS file descriptor limit, since each connection requires 1 file descriptor.
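
For context, the setting lives in the events block of nginx.conf; after the tweak it should look roughly like this (the surrounding contents vary by Elastic Beanstalk platform version, so treat this as a sketch):

# /etc/nginx/nginx.conf (excerpt, after the change)
events {
    # default is 1024; each client connection consumes one slot
    # plus one file descriptor
    worker_connections  6144;
}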

You can check the OS file descriptor limit by running:

ulimit -n
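
You can also compare the shell's limits with what the running nginx master process actually got (this is a generic Linux check, nothing Elastic Beanstalk specific):

# soft and hard limits for the current shell
ulimit -n
ulimit -Hn

# limits applied to the running nginx master process
cat /proc/$(pgrep -o nginx)/limits | grep 'Max open files'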

However, since the instance is managed by Elastic Beanstalk, Amazon asks that we instead make our edits to /tmp/deployment/config/#etc#nginx#nginx.conf. That file contains the same content as /etc/nginx/nginx.conf, but Amazon copies it over to /etc/nginx/nginx.conf when the deployment completes and tells nginx to pick up the new configuration.

I created an .ebextensions file that increases the OS file descriptor limit as well as the worker_connections value. Raising these limits lets the server absorb sudden spikes of traffic instead of returning ECONNRESET.

Include it with your application when you deploy, at the following path relative to the root of your deployed project:

.ebextensions/nginx.config

files:
  # Raise the OS open file descriptor limit (each connection consumes one descriptor)
  "/etc/security/limits.conf":
    content: |
      *           soft    nofile          6144
      *           hard    nofile          6144

container_commands:
  # Rewrite worker_connections in the staged config before EB copies it to /etc/nginx/nginx.conf
  01-worker-connections:
    command: "/bin/sed -i 's/worker_connections  1024/worker_connections  6144/g' /tmp/deployment/config/#etc#nginx#nginx.conf"
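
After deploying, you can SSH into the instance (for example with eb ssh) and sanity-check that both changes took effect; something along these lines should do it, assuming the default config path:

# the sed rewrite should now be visible in the live config
grep worker_connections /etc/nginx/nginx.conf

# limits.conf applies to new login sessions, so a fresh SSH session should report 6144
ulimit -n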
Elad Nava
  • How exactly are you consuming 1024 connections while (apparently) testing with a concurrency of only 10? It seems like there's another problem here that increasing the limit might be masking, or a problem with `loadtest` itself. – Michael - sqlbot May 21 '16 at 23:29
  • Not sure, but running `loadtest` with --rps 100 instead of --rps 150 was enough to avoid the worker_connections error / connection resets. Also, it's possible that I had other load on the server from API clients when running this test, so they may have contributed to reaching that 1024 worker_connections limit. – Elad Nava May 22 '16 at 18:31
  • 1
    Thank you very much ! You just saved my ass – Konstantin Zolotarev Dec 22 '16 at 10:48
  • This approach failed for me on Dec 5, 2017 with the following error: `Command failed on instance. Return code: 2 Output: /bin/sed: can't read /tmp/deployment/config/#etc#nginx#nginx.conf: No such file or directory. container_command 01-worker-connections in .ebextensions/nginx.config failed. ` It seems AWS might work differently now, see [this SE answer](https://stackoverflow.com/questions/24860426/nginx-config-file-overwritten-during-elastic-beanstalk-deployment#answer-45155825) and [these AWS instructions](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-se-nginx.html). – ConvexMartian Dec 05 '17 at 15:04
  • 1
    @ConvexMartian interesting, they must have changed something in the recent EB Platform Versions. Thanks for the heads up and workaround. – Elad Nava Dec 07 '17 at 05:12
  • This still works on EC2 Linux instances. Funny enough, I was looking for hours for this exact solution and after I solved it through trial and error I came across this post, that exactly addresses the issue – Manuel Apr 21 '20 at 00:39
-1

Modify the original copy of the configuration file directly: that is, edit /etc/nginx/nginx.conf instead of /tmp/deployment/config/#etc#nginx#nginx.conf.