
I want to host my Rails 6.0.3 (Ruby 2.7.1) app on AWS Elastic Beanstalk using the Ruby 2.7 AL2 platform, version 3.1.1. I spent hours solving several issues and finally hit one I'm stuck on. When the app starts, I get the following error:

/var/log/puma/puma.log

[10222] Early termination of worker
[10258] + Gemfile in context: /var/app/current/Gemfile
[10258] Early termination of worker
[31408] - Gracefully shutting down workers...
=== puma startup: 2020-09-25 13:33:02 +0000 ===
=== puma startup: 2020-09-25 13:33:02 +0000 ===
[10501] + Gemfile in context: /var/app/current/Gemfile
[10501] Early termination of worker
[10504] + Gemfile in context: /var/app/current/Gemfile
[10504] Early termination of worker

On the other hand, /var/log/web.stdout.log seems to look fine...

Sep 25 13:33:02 ip-172-31-43-76 web: [10418] Puma starting in cluster mode...
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Version 4.3.5 (ruby 2.7.1-p83), codename: Mysterious Traveller
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Min threads: 8, max threads: 32
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Environment: staging
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Process workers: 1
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Phased restart available
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] * Listening on unix:///var/run/puma/my_app.sock
Sep 25 13:33:02 ip-172-31-43-76 web: [10418] Use Ctrl-C to stop

I use the same Puma version (4.3.5) as pointed out in the official docs.

My config/puma.rb looks like:

max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count

# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port        ENV.fetch("PORT") { 3000 }

# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }

# Specifies the `pidfile` that Puma will use.
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together,
# the concurrency of the application would be max `threads` * `workers.`
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 2 } # <------ uncomment this line

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
preload_app! # <------ uncomment this line

# Allow Puma to be restarted by the `Rails restart` command.
plugin :tmp_restart
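
As an aside (not part of the original config), the ENV.fetch calls above use Ruby's block form, where the block is evaluated only when the variable is unset. A minimal standalone sketch:

```ruby
# The block passed to ENV.fetch supplies a default only when the
# variable is not set; otherwise the environment value (a String) wins.
# The Integer() wrap is just for this standalone demo.
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS") { 5 })
min_threads = Integer(ENV.fetch("RAILS_MIN_THREADS") { max_threads })
puts [min_threads, max_threads].inspect
```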

How can I fix this so the app runs properly?

mike927
  • Have you checked this? https://stackoverflow.com/questions/59861277/puma-stuck-with-message-early-termination-of-worker-on-rails-6-api-only-projec or this https://stackoverflow.com/questions/61308587/what-does-early-termination-of-worker-puma-log-mean-and-why-is-it-happening – Sandip Subedi Oct 02 '20 at 02:48
  • Yeah, as I mentioned, I have exactly the same Puma version as AWS specifies. – mike927 Oct 02 '20 at 07:25
  • Temporarily I decided to abandon Beanstalk until I find a working solution. – mike927 Oct 02 '20 at 07:26
  • We encountered this same issue and it turned out that there was a conflict between 2 middleware gems we were using: NewRelic and Sqreen. Would you be able to provide your Gemfile and also do some testing on disabling any middleware gems you may be using? – Shawn Deprey Oct 05 '20 at 14:37
  • (Sqreen team here) Shawn, is it this one? https://github.com/newrelic/newrelic-ruby-agent/issues/461 – Lloeki Oct 05 '20 at 16:03

1 Answer


Please check the following:

  • Run production locally. This can reveal additional errors (for example Zeitwerk errors).
  • Check permissions under the vendor dir, where the gem files go. One of our dependencies didn't have read permission for others, which caused an error. You can fix this by adding a script to .ebextensions.
  • Check also this link: AWS elastic beanstalk is not getting the environment variables

In our case, we hit the jackpot: it was all of the above...
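
For the permissions point, a minimal .ebextensions sketch might look like the following. The filename and the `o+rX` fix are illustrative assumptions, not from the answer; adjust the path to wherever your affected gems live:

```yaml
# .ebextensions/01_fix_gem_permissions.config  (hypothetical filename)
# Grants "others" read permission on files and traverse permission on
# directories under the bundled gems, so Puma workers running as the
# non-root webapp user can load them.
container_commands:
  01_fix_vendor_permissions:
    command: "chmod -R o+rX vendor/bundle"
```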

wenzel