
Maybe someone knows what to do. I'm trying to upload files larger than 3 GB. There are no problems if I upload files up to 2 GB with the following configs:

Nginx:

client_max_body_size 5g;
client_body_in_file_only clean;
client_body_buffer_size 256K;

proxy_read_timeout 1200;
keepalive_timeout 30;
uwsgi_read_timeout 30m;

uWSGI options:

harakiri 60
harakiri 1800
socket-timeout 1800
chunked-input-timeout 1800
http-timeout 1800

When I upload a big (almost 4 GB) file, it uploads ~2-2.2 GB and stops with this error:

[uwsgi-body-read] Timeout reading 4096 bytes. Content-Length: 3763798089 consumed: 2147479552 left: 1616318537

Which params should I use?

Arthur Sult
  • Try to avoid backend processing while uploading huge files on a regular basis: https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend – Anatoly Oct 29 '15 at 19:48
  • The article is good, but I cannot use that method, because I need to upload a file to the server, for example a photo. Then I have to pass this photo to uWSGI for conversion (e.g., from GIF to PNG). I need to manipulate the uploaded files; this is the problem – Arthur Sult Nov 11 '15 at 07:28
  • To pass the request further you can use the proxy_pass directive, which is invoked once the file has been uploaded to the file system. The temporary file name is accessible via an Nginx variable (see the config sketch after this comment thread). – Anatoly Nov 12 '15 at 19:27
  • OK, translation: I tried this method with "client_body_in_file_only", but without success. First, nginx saves the entire uploaded file to e.g. /tmp/0000042 (it's 3.6 GB). Then my backend (uWSGI) starts copying this file to /tmp/0000043, and the copy takes 60 seconds. The machine cannot copy the whole file in 60 seconds, only 2.1 GB, so in the browser I finally get a "504 Gateway Time-out" error – Arthur Sult Nov 18 '15 at 08:37
  • Did you find any solution for that? – silviomoreto Jul 26 '16 at 16:19
  • @silvio, yes, I can successfully upload large movies now. [link](http://classny.ru/en/wall/add/) – here is my upload page (the main language is Russian). If you open the console and watch the network requests, you'll see that I a) use XHR and b) don't upload the file in one piece, but in small chunks. It was a long time ago and I don't remember the details, but the main idea is that you should split your file into chunks on the client side and send them one by one. – Arthur Sult Jul 27 '16 at 07:31
  • See also: https://stackoverflow.com/questions/35725438/sendfile-failed-32-broken-pipe-while-sending-request-to-upstream-nginx-502 – andrewdotn May 04 '21 at 22:55
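
A minimal sketch of the direct-upload approach described in the comments above, assuming a hypothetical /upload location and a hypothetical backend endpoint /upload_done; nginx writes the request body to a temporary file itself and hands only the file path to the backend:

location /upload {
    client_max_body_size     5g;
    client_body_temp_path    /tmp/;
    client_body_in_file_only clean;   # keep the body in a temp file, delete it after the request
    client_body_buffer_size  256k;

    proxy_set_header         X-FILE $request_body_file;   # pass the temp file path, not the body
    proxy_set_header         Content-Length "";
    proxy_pass_request_body  off;     # the backend opens the file from disk instead
    proxy_pass               http://backend/upload_done;  # hypothetical upstream
}

With clean, nginx deletes the temporary file as soon as the request finishes, so the backend must convert or move the file before it responds.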

2 Answers


What ended up solving my issue was setting:

uwsgi.ini

http-timeout = 1200
socket-timeout = 1200

nginx_site.conf

proxy_read_timeout 1200;
proxy_send_timeout 1200;
client_header_timeout 1200;
client_body_timeout 1200;
uwsgi_read_timeout 20m;

After stumbling upon a similar issue with large files (>1 GB), I collected further info from a GitHub issue, a Stack Overflow thread, and several other sources. What ended up happening was that Python/uWSGI took too long to process the large file, and nginx stopped listening to uWSGI, leading to a 504 error. So increasing the timeouts for HTTP and socket communication resolved it.
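
For context, a minimal sketch of where these settings live, assuming a conventional nginx-to-uWSGI setup (the socket path and 5g body limit are assumptions, not from the answer):

server {
    client_max_body_size   5g;
    client_header_timeout  1200;
    client_body_timeout    1200;

    location / {
        include            uwsgi_params;
        uwsgi_pass         unix:/run/app.sock;   # assumed socket path
        uwsgi_read_timeout 20m;

        # the proxy_* timeouts only apply if you proxy_pass over HTTP instead
        proxy_read_timeout 1200;
        proxy_send_timeout 1200;
    }
}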

philmaweb

I had similar problems with nginx and uWSGI, with the same limit at about 2-2.2 GB file size. nginx accepts the POST request properly, but when it forwards the request to uWSGI, uWSGI just stops processing the upload after about 18 seconds (zero CPU; lsof shows that the file in the uWSGI temp dir stops growing). Increasing any of the timeout values did not help.

What solved the issue for me was to disable request buffering in nginx (proxy_request_buffering off;) and to set up post-buffering in uWSGI with a 2 MB buffer:

post-buffering         =  2097152
post-buffering-bufsize =  2097152
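
For completeness, a sketch of how the two sides fit together, assuming nginx talks to uWSGI over HTTP (with uwsgi_pass, the equivalent nginx directive would be uwsgi_request_buffering off;); the addresses are assumptions:

location / {
    client_max_body_size    5g;
    proxy_request_buffering off;                    # stream the body to uWSGI instead of spooling it
    proxy_pass              http://127.0.0.1:8080;  # assumed uWSGI HTTP socket
}

[uwsgi]
http                   = 127.0.0.1:8080   # assumed
post-buffering         = 2097152          # bodies over 2 MB are buffered to disk by uWSGI
post-buffering-bufsize = 2097152          # read the body in 2 MB chunks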
Paul