
I have the latest Nginx running with Passenger, SQLite and Rails 3.1. After Passenger has been running for a while, I start getting "502 Bad Gateway" errors when visiting my website.

Here is a snippet from my Nginx error log:

2011/06/27 08:55:33 [error] 20331#0: *11270 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xx.x, server: www.example.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "example.com"
2011/06/27 08:55:47 [info] 20331#0: *11273 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: xxx.xxx.xx.x, server: www.example.com, request: "GET / HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "example.com"

Here is my passenger-status --show=backtraces output:

Thread 'Client thread 7':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 10':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 11':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 12':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 13':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 14':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 15':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 16':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 17':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 18':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 19':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 20':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 21':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 22':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 23':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'Client thread 24':
 in 'Passenger::FileDescriptor Client::acceptConnection()' (HelperAgent.cpp:160)
 in 'void Client::threadMain()' (HelperAgent.cpp:603)

Thread 'MessageServer thread':
 in 'void Passenger::MessageServer::mainLoop()' (MessageServer.h:537)

Thread 'MessageServer client thread 35':
 in 'virtual bool Passenger::BacktracesServer::processMessage(Passenger::MessageServer::CommonClientContext&, boost::shared_ptr<Passenger::MessageServer::ClientContext>&, const std::vector<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&)' (BacktracesServer.h:47)
 in 'void Passenger::MessageServer::clientHandlingMainLoop(Passenger::FileDescriptor&)' (MessageServer.h:470)

This is what my passenger-memory-stats shows:

---------- Nginx processes ----------
PID    PPID   VMSize   Private  Name
-------------------------------------
16291  1      35.4 MB  0.1 MB   nginx: master process /home/apps/.nginx/sbin/nginx
16292  16291  36.0 MB  0.8 MB   nginx: worker process
16293  16291  35.8 MB  0.5 MB   nginx: worker process
16294  16291  35.8 MB  0.5 MB   nginx: worker process
16295  16291  35.8 MB  0.5 MB   nginx: worker process
### Processes: 5
### Total private dirty RSS: 2.46 MB


----- Passenger processes ------
PID    VMSize    Private   Name
--------------------------------
16251  87.0 MB   0.3 MB    PassengerWatchdog
16254  100.4 MB  1.3 MB    PassengerHelperAgent
16256  41.6 MB   5.7 MB    Passenger spawn server
16259  134.8 MB  0.8 MB    PassengerLoggingAgent
18390  770.4 MB  17.1 MB   Passenger ApplicationSpawner: /home/apps/manager/current
18415  853.3 MB  147.7 MB  Rack: /home/apps/manager/current
18424  790.5 MB  57.2 MB   Rack: /home/apps/manager/current
18431  774.7 MB  18.7 MB   Rack: /home/apps/manager/current
### Processes: 8
### Total private dirty RSS: 248.85 MB

Does this mean there is an issue with the communication between Passenger and Nginx?

Also, looking at the Rails logs, it is clear that the requests never reach Rails at all: there are no log entries for the visits that get the 502 error. So my initial suspicion that something was wrong in the Rack middleware can be ruled out.

– JeanMertz
  • Had it not been possibly somewhat related to memory management, I would have voted for a move to http://servefault.com or just flagged as "too specific" – conny Jun 08 '11 at 14:17

5 Answers


The "V" in VM is for Virtual. See also answers on other SO questions, e.g. Virtual Memory Usage from Java under Linux, too much memory used.

That top value of 147 MB does not hint at anything unusual whatsoever. Your 502 errors mean something else is wrong with the worker processes from Passenger's point of view. You should check your Rails and Nginx log files for clues, and perhaps passenger-status --show=backtraces.
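
For example, something along these lines (the log paths are illustrative; adjust them to your own setup):

# Watch the application and Nginx logs while reproducing a 502
tail -f /home/apps/manager/current/log/production.log /home/apps/.nginx/logs/error.log

# Dump the helper agent's per-thread backtraces
passenger-status --show=backtraces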

– conny
  • Hi @conny, sorry for the late reply, I've been extremely busy. I've updated my question following your suggestions. The nginx error log definitely shows something fishy going on between nginx and passenger. Do you have any more feedback about this? If needed, I can flag this to be moved to serverfault.com. Thank you. – JeanMertz Jun 27 '11 at 09:12
  • @bodacious I never solved the issue. I was considering completely creating the server from a clean ubuntu image again, but instead I jumped on the Heroku bandwagon, I really don't want to deal with these issues when I barely have enough time to develop my app and provide support for my customers. – JeanMertz Jul 23 '11 at 18:57
  • Before you rebuild your server, try setting `config.threadsafe!` – bodacious Jul 25 '11 at 11:11
  • I've opened a ticket for this: https://github.com/rtomayko/rack-cache/issues/23#issuecomment-1566974 – bodacious Jul 25 '11 at 11:11

I just hit this deadly "502 Bad Gateway" error reported by Nginx. The stack was Ubuntu 12.04 + Rails 3.2.9 + Passenger 3.0.18 + nginx 1.2.4, and it took me two hours to find the root cause:

My Rails application does not need database support, so I had removed gem 'sqlite3' from the Gemfile. That works fine in development mode, but leads to a 502 Bad Gateway in production mode.

After adding gem 'sqlite3' back to the Gemfile, the 502 Bad Gateway error disappeared.
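
In other words, the database driver has to stay in the Gemfile even if the application never touches the database. A minimal sketch (gem versions are illustrative, matching the stack above):

source 'https://rubygems.org'

gem 'rails', '3.2.9'
# Removing this line worked in development but caused
# 502 Bad Gateway errors under Passenger in production:
gem 'sqlite3'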

– Eric Guo

Try setting passenger_spawn_method conservative; apparently there are issues with Passenger's default forking spawn method and Rails 3.1.
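
A minimal sketch of where the directive goes in the Nginx configuration (the http block shown is only illustrative; the directive is also accepted in server and location blocks):

http {
  # Spawn each application process from scratch instead of forking it
  # from a preloaded ApplicationSpawner process.
  passenger_spawn_method conservative;
}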

– Billy

It was the same for me in Rails 4, but it was fixed by adding a secret_key_base in config/secrets.yml:

production:
  secret_key_base: # add yours here
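
If you need a value, Rails ships a rake task that generates a suitable one:

rake secret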
– Gediminas Šukys
  • Hi -- can you share what secret_key_base had to do with this problem? We are suddenly getting 502 Bad Gateways, and it also seems to be related to a change we made -- we updated to specify secret_key_base in secrets.yml instead of the older Rails convention of config/secret_token.rb. But we can't figure out why this would have an effect or what to do to fix the issue... – Jacob Mar 24 '16 at 22:04

I had the same problem and in my case it helped to increase the passenger_max_pool_size setting in the Nginx configuration file.
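
For reference, this directive lives in the http block of the Nginx configuration; the value below is only illustrative and should be tuned to how much memory your application processes use:

http {
  # Maximum number of application processes Passenger may keep running
  passenger_max_pool_size 6;
}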

There are other postings on this topic that also helped me find this solution.

– disco crazy