21

I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote DB) and then sends it to the browser as a download. Flask doesn't throw any errors, and uWSGI doesn't complain.
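For reference, a minimal sketch of the kind of endpoint described, with hypothetical names and a stubbed-out query in place of the real remote-database call. A slow query here is exactly the kind of thing that can exceed uWSGI/nginx timeouts and produce the error below:

```python
# Minimal sketch of a CSV-download endpoint (names and query are hypothetical).
import csv
import io

from flask import Flask, Response

app = Flask(__name__)

def rows_from_db():
    # Placeholder for the remote database query.
    return [("id", "name"), (1, "alice"), (2, "bob")]

def build_csv(rows):
    # Build the whole CSV in memory before responding.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

@app.route("/download/export.csv")
def export_csv():
    # Content-Disposition makes the browser treat the response as a download.
    return Response(
        build_csv(rows_from_db()),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=export.csv"},
    )
```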

But when I check nginx's error.log I see a lot of

2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely closed connection while reading response header from upstream, client: 34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer: "https://me.com/download/export.csv"

I deploy uWSGI like this:

uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app

My nginx config:

server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}

server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}

Is this an nginx or uwsgi issue, or both?

user299709
  • I once got the same error; it turned out to be that I forgot "include uwsgi_params". Or check your `uwsgi_params` file under the nginx confs – est Mar 02 '18 at 07:55

11 Answers

18

As mentioned by @mahdix, the error can be caused by Nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for http packets.

When in the Nginx config you have something like:

upstream org_app {
    server              10.0.9.79:9597;
}
location / {
    include         uwsgi_params;
    uwsgi_pass      org_app;
}

Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like (or its equivalent in the command line):

http-socket=:9597

uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.

A possible fix is to have instead:

socket=:9597

In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.

Side note: if Nginx and uwsgi are in the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
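A sketch of what that side note might look like (the socket path is illustrative):

```
# uwsgi.ini
[uwsgi]
socket = /tmp/uwsgi.sock
chmod-socket = 664

# nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}
```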

Ivan Ogai
10

Change nginx.conf to include

sendfile        on;
client_max_body_size 20M;
keepalive_timeout  0;

See the self-answer uwsgi upstart on amazon linux for a full example.

tourdownunder
5

In my case, the problem was that nginx was sending a request using the uwsgi protocol while uwsgi was listening on that port for HTTP packets. So I had to either change the way nginx connects to uwsgi, or change uwsgi to listen using the uwsgi protocol.
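In other words, the nginx directive and the uWSGI socket type must match. The two consistent pairings look roughly like this (port is illustrative):

```
# Option 1: uwsgi protocol on both sides
# nginx:       uwsgi_pass 127.0.0.1:9597;
# uwsgi.ini:   socket = :9597

# Option 2: plain HTTP on both sides
# nginx:       proxy_pass http://127.0.0.1:9597;
# uwsgi.ini:   http-socket = :9597
```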

Mahdi
    Could you please add the details of your solution to your answer? I believe you either need to use `uwsgi_pass` and then set the `socket` value to the port in the uwsgi .ini file OR `proxy_pass` in nginx along with setting `http-socket` to the port in uwsgi? – wuliwong Oct 01 '18 at 16:56
3

I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the environment's EC2 instance, the upstream configuration looks like:

upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}

With this default upstream simple load test like:

siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'

...on the EC2 instance led to ~70% availability. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.

The solution was either to remove the keepalive setting from the upstream configuration, or (easier and more reasonable) to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
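On the command line that might look like the following (flags other than --http-keepalive are illustrative):

```
uwsgi --http-socket :8080 --http-keepalive --module app --callable app
```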

saaj
2

Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or better use unix sockets.

jwalker
1

It seems many causes can stand behind this error message. I know you are using uwsgi_pass, but for those having the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
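For example, in uwsgi.ini (the value is illustrative):

```
[uwsgi]
http-timeout = 300
```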

krzychu
1

There are many potential causes and solutions for this problem. In my case, the back-end code was taking too long to run. Modifying these variables fixed it for me.

Nginx: proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout, fastcgi_send_timeout, fastcgi_read_timeout, keepalive_timeout, uwsgi_read_timeout, uwsgi_send_timeout, uwsgi_socket_keepalive.

uWSGI: limit-post.
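For a back-end that is slow rather than broken, raising the uwsgi-related timeouts in the nginx location block is the usual starting point. A sketch with illustrative values:

```
location @app {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:5002;
    # Allow slow back-end responses before nginx gives up.
    uwsgi_read_timeout 300s;
    uwsgi_send_timeout 300s;
    # Requires nginx 1.15.6+.
    uwsgi_socket_keepalive on;
}
```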

Raphael
0

I fixed this issue by passing socket-timeout = 65 (uwsgi.ini file) or --socket-timeout=65 (uwsgi command line) in uwsgi. You may need to try different values depending on your web traffic; socket-timeout = 65 in uwsgi.ini worked in my case.

Sathish
0

I fixed this by reverting to pip3 install uwsgi.

I was trying out the setup on Ubuntu and Amazon Linux side by side. I initially used a virtual environment and installed uWSGI with pip3 install uwsgi; both systems worked fine. Later, I continued the setup with the virtual env turned off. On Ubuntu I installed it with pip3 install uwsgi, and on Amazon Linux with yum install uwsgi -y. That was the source of the problem for me.

Ubuntu worked fine, but Amazon Linux did not.

The fix: yum remove uwsgi, then pip3 install uwsgi, restart, and it works fine.

tutug
0

This issue can also be caused by a mismatch between timeout values. I had this issue when nginx had a keepalive_timeout of 75s, while the upstream server's value was a few seconds.

This caused the upstream server to close the connection when its timeout was reached, and nginx logged Connection reset by peer errors.

When you see such abrupt "connection closed" errors, check that the upstream timeout values are higher than nginx's values (see Raphael's answer for a good list to check).
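A sketch of a consistent pairing (values are illustrative): keep nginx's idle timeout below the upstream's, so nginx closes idle keep-alive connections first.

```
# nginx: close idle keep-alive connections before the upstream does
keepalive_timeout 75s;

# upstream, e.g. uwsgi.ini: keep connections open longer than nginx
http-timeout = 120
```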

Eino Gourdin
0
[uwsgi]
chdir = /var/www/html/api
http-timeout = 600
http-socket-protocol = http/1.1
# http-socket-max-request = 69832
buffer-size = 32768
memory-report = 65536
master = true
processes = 2
threads = 2
http-socket = 0.0.0.0:8000
#virtualenv = ../venv
#socket = /var/run/uwsgi.sock
module = api.wsgi:application
chmod-socket = 664

When using nginx as a proxy in front of uWSGI, comment out or remove that http-socket-max-request line from the uWSGI config.