
I have a multipart file upload in a form with a PHP backend. I've set max_execution_time and max_input_time in php.ini to 180 and confirmed during the file upload that these values are in effect, and I've set Timeout 180 in Apache. I've also set

RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]

When I upload a 250MB file on a fast connection it works fine. On a slower connection, or with a network link conditioner artificially slowing it down, the same file times out, and Chrome gives me net::ERR_CONNECTION_RESET after 1 minute (and 5 seconds), reliably. I've also tried other browsers with the same outcome, just different error messages.
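For reference, the slow connection can be simulated with a throttled curl upload instead of a link conditioner; the URL and file name below are placeholders, and the size/rate figures are only illustrative:

```shell
# At 100 KB/s a 20 MB upload needs well over three minutes,
# far past the suspected ~60 s cutoff.
FILE_MB=20
RATE_KB=100
echo "expected upload time: $(( FILE_MB * 1024 / RATE_KB )) s"
# Throttled upload against the endpoint under test (placeholder URL):
# curl --limit-rate ${RATE_KB}k -F "file=@test.bin" https://example.com/upload
```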

There is no indication of an error in any log, and I've tried both HTTP and HTTPS.

What would cause the upload connection to be reset after 1 minute?

EDIT

I've now also tried a simple upload form that bypasses any framework I'm using; it still times out at 1 minute.

I've also made a script that sleeps for two and a half minutes before responding, and that works: the page takes around 2.5 minutes to load, so I can't see how it's browser- or header-related.

I've also used a server with more RAM to ensure it's not related to that. I've tested on 3 different servers with different specs but all from the same CentOS 7 base.

I've now also upgraded to PHP 7.2 and updated the relevant fields again with no change in the problem.

EDIT 2

The tech stack for this isolated instance is:

  • Apache 2.4.6
  • PHP 5.6 / 7.2 (tried both), has OPCache
  • Redis 3.2.6 for session information and key / value storage (ElastiCache)
  • PostgreSQL 10.2 (RDS)

Everything else in my tech stack has been removed from this test area to try and isolate the problem. EFS is on the system but in my most isolated test it's just using EBS.

EDIT 3

Here are some logs from the Chrome network debugger:

{"params":{"net_error":-101,"os_error":32},"phase":0,"source":{"id":274043,"type":8},"time":"3332701830","type":69},
{"params":{"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":216,"net_error":-101,"ssl_error":1},"phase":0,"source":{"id":274043,"type":8},"time":"3332701830","type":56},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":159},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"error_lib":33,"error_reason":101,"file":"../../net/socket/socket_bio_adapter.cc","line":113,"net_error":-101,"ssl_error":1},"phase":0,"source":{"id":274043,"type":8},"time":"3332701830","type":55},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":287},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":164},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":97},
{"phase":1,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":105},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":38},
{"phase":2,"source":{"id":274043,"type":8},"time":"3332701830","type":34},
{"params":{"net_error":-101},"phase":2,"source":{"id":274038,"type":1},"time":"3332701830","type":2},
Rudiger
  • This may help: https://serverfault.com/questions/356508/tracking-down-connection-reset-errors-in-linux – treyBake Dec 14 '18 at 12:03
  • Had a look, no errors on the NIC. They are EC2s on a variety of hardware, so I can't see how it's related to hardware issues. I'll try and capture it in Wireshark, but I don't think it will help in my situation as it's 100% reliably dropped, which makes me believe it's config related. – Rudiger Dec 14 '18 at 20:24
  • I'm sure you did and this is just a stupid question, but sometimes it's the small details we overlook... you did restart Apache after making all the config changes, correct? Additionally, when you get the error, does it happen while the data is still being uploaded or after the upload finishes and the processing is being done? – Javier Larroulet Dec 17 '18 at 12:53
  • @JavierLarroulet yes, I did restart Apache. The upload gets to around 10% (this changes with file size, of course) and then fails. The UI updates with the issue, but it does give an HTTP code of 200 (I guess because it is succeeding and the headers are set before it fails). – Rudiger Dec 17 '18 at 19:52
  • @Rudiger please have look on this https://forums.aws.amazon.com/thread.jspa?threadID=46345 may be this could help. – Aabir Hussain Dec 18 '18 at 11:25
  • Increase upload_max_filesize, max_input_time, post_max_size and memory_limit in your php.ini. – UnP Dec 18 '18 at 14:36
  • @UnP I've updated all of those. The 250 MB file upload works fine if it can be done within a minute; if it can't, even a 5 MB one fails. – Rudiger Dec 18 '18 at 22:39
  • @AabirHussain Thanks for the link, I've looked through and I don't think it relates. I'm going directly to the EC2 and it reliably fails at 1 minute regardless of file size. – Rudiger Dec 18 '18 at 22:49
  • On a slow connection the server keeps waiting for more data while that big file uploads, so at some point it automatically closes and resets the connection, or starts afresh. That's why download managers are mostly used on slow connections: when the connection is restarted, the transfer resumes from where it stopped. –  Dec 18 '18 at 22:51
  • I don't know whether the same can be done when uploading a file, so that the upload resumes from where it stopped; I think that's what you need to figure out. –  Dec 18 '18 at 22:53
  • It's just a web frontend; while there are some resume functions in HTML5, they're not well enough supported to solve our problem. – Rudiger Dec 18 '18 at 23:22
  • @Rudiger what happens if you simply comment out `RewriteRule .* - [E=noabort:1]` and `RewriteRule .* - [E=noconntimeout:1]`, restart Apache, and try again? – UnP Dec 19 '18 at 10:53
  • @Rudiger, I guess you are not using LiteSpeed... but if you were, you could set a longer timeout in **WebAdmin CP > Configuration > Server > Tuning > Connection Timeout (secs)**. – UnP Dec 19 '18 at 10:55
  • Please provide a tech stack list for that server. – Rubinum Dec 19 '18 at 11:31
  • Check the Apache log; on CentOS it is usually here: /var/log/httpd-error.log – Leandro Ferrero Dec 19 '18 at 21:11
  • If the error in the Apache log is related to FastCGI, try changing FcgidIOTimeout to a higher number. You can find that setting in /etc/apache2/mods-available/fcgid.conf – Leandro Ferrero Dec 19 '18 at 21:32
  • @UnP No change unfortunately. Also I don't use Litespeed, Apache 2.4.6 – Rudiger Dec 19 '18 at 23:07
  • @Rubinum I've added my tech stack to the question – Rudiger Dec 19 '18 at 23:40
  • @LeandroFerrero No logs in either the HTTP log or the log defined by the Apache config. Looking at phpinfo() I don't have FastCGI installed, just CGI/1.1, if that affects anything. – Rudiger Dec 19 '18 at 23:42
  • Have you tried getting some network logs from Chrome? https://dev.chromium.org/for-testers/providing-network-details – Dave Dec 20 '18 at 06:55
  • @Dave thanks, never used that tool before. I've added the logging information to the question; I feel it captures the failure, but I'm not sure it shows what's wrong. – Rudiger Dec 20 '18 at 09:43
  • Did you use PostgreSQL somewhere in your upload PHP code? It could be a PostgreSQL request timeout raising an ERROR in your PHP. If yes, try 'SET statement_timeout TO 0;' – A. STEFANI Dec 21 '18 at 11:36
  • I think a timeout is killing the PHP script; with FCGI servers it is usually the value of FcgidIOTimeout. mod_cgi should have a timeout too... but I'm guessing now. Verify the value of [CGIDScriptTimeout](https://httpd.apache.org/docs/2.4/mod/mod_cgid.html); if it's something close to 60 seconds, it's probably the cause of the issue. – Leandro Ferrero Dec 21 '18 at 16:59
  • @LeandroFerrero mods not loaded, thanks though. – Rudiger Dec 22 '18 at 05:48
  • @ASTEFANI nah, I've isolated the php to just do a simple upload, no redis or postgres involved. – Rudiger Dec 22 '18 at 05:49
  • In my case it was memory exhaustion - even when I set `ini_set('memory_limit', '-1');` in my PHP script, PHP error logs would show memory limit exceeded (it was still using 128M limit). I added `php_value memory_limit -1` in `.htaccess` file and everything is fine now. – Jay Dadhania Jan 08 '21 at 17:38
  • This problem was caused by Kaspersky Internet Security in my case. I had to disable `network ports monitoring` and `Inject script into web traffic to interact with web pages` to make uploading large files work again. – Ben Mack Feb 15 '21 at 10:00

7 Answers


Original source here

ERR_CONNECTION_RESET usually means that the connection to the server has ceased without sending any response to the client. This means that the entire PHP process has died without being able to shut down properly.

This is usually not caused by something like an exceeded memory_limit. It could be some sort of Segmentation Fault or something like that. If you have access to error logs, check them. Otherwise, you might get support from your hosting company.

I would recommend you to try some of these things:

  1. Try cleaning the browser's cache. If you have already visited the page, it is possible for the cache to contain information that doesn’t match the current version of the website and so blocks the connection setup, making the ERR_CONNECTION_RESET message appear.

  2. Add the following to your settings:

    memory_limit = 1024M

    max_input_vars = 2000

    upload_max_filesize = 300M

    post_max_size = 300M

    max_execution_time = 990

  3. Try setting the following input in your form:

  4. In your processing script, increase the script's time limit:

    set_time_limit(200);

  5. You might need to tune up the SSL buffer size in your apache config file.

    SSLRenegBufferSize 10486000

The name and location of the conf file differ depending on the distribution.

In Debian you find the conf file in /etc/apache2/sites-available/default-ssl.conf

  6. Sometimes it is the mod_security module which prevents POSTs of large data (approximately 171 KB and above). Try adding/modifying the following in mod_security.conf:

    SecRequestBodyNoFilesLimit 10486000
    SecRequestBodyInMemoryLimit 10486000

I hope something might work out!

Svenmarim
Vikas Yadav
  • Unfortunately I've tried everything in your answer; it even fails with a 20MB file if the connection is slow enough, and it always fails at 1 minute and 5 seconds. It could be a segmentation fault, but absolutely no logs are produced. It's a VM in AWS, so while they can try and help, it's probably outside their scope. – Rudiger Dec 20 '18 at 09:17
  • Looking at logs would be very helpful; we might be able to look into the issue. – Vikas Yadav Dec 20 '18 at 09:22
  • try cleaning the cache as I mentioned in the updated answer. Do also check for firewalls or other antivirus programs running aside the application. – Vikas Yadav Dec 20 '18 at 09:27
  • I've tried browsers that I've never used before to see if a better error message comes from them, so I'm confident it's not cache. I did think ClamAV or fail2ban could be causing it, but both have been turned off and the issue is still present. – Rudiger Dec 20 '18 at 09:41
  • Also did set_time_limit(200); and no difference. – Rudiger Dec 20 '18 at 10:09
  • Tried points 5 and 6 with no luck. I did try to upload without SSL and ran into the same issue, so I'm confident it's not to do with SSL. – Rudiger Dec 20 '18 at 11:02
  • Nah, I'm going through and trying everything again. Still no joy. The thing that's perplexing is that if I lower it to 30 seconds, it still times out after 1 minute; either something is overriding it or it's just not being set properly, but I've changed PHP versions and ensured it's set both at runtime and through `phpinfo()`. I'm at a loss as to what's causing it. – Rudiger Dec 22 '18 at 00:41

I went through a similar problem; in my case it was related to mod_reqtimeout. Adding:

RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

to httpd.conf did the trick! You can check the documentation here.
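For context, a sketch of how this might look in the Apache config; the numbers are illustrative, and setting both values to 0 can be used to rule the module out while testing:

```
<IfModule reqtimeout_module>
    # Allow 20-40 s for request headers, extended by 1 s per 500 bytes
    # received; for the body, start at 20 s and extend at the same rate.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

    # While debugging, disable both timeouts to see if this module is at fault:
    # RequestReadTimeout header=0 body=0
</IfModule>
```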

Hope it helps!

Raul

In case anybody else runs into this: there is also a problem relating to PHP-FPM. If you don't set "ProxyTimeout" in your httpd.conf, the PHP-FPM proxy uses a default timeout of one minute. It took me several hours to figure out the problem, as I initially was thinking of all the normal settings like everyone else.
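A minimal sketch of the relevant httpd.conf lines, assuming PHP-FPM is reached via mod_proxy_fcgi (the socket path is a placeholder and varies by distribution):

```
# Without an explicit ProxyTimeout, proxied requests fall back to a short
# default and slow uploads get reset; raise it to cover long transfers.
ProxyTimeout 300

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>
```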

James F

I had the same problem. I used a resumable file upload method, where if the connection drops and comes back, the upload resumes from the same progress.

Check out the library https://packagist.org/packages/pion/laravel-chunk-upload

  1. Installation

composer require pion/laravel-chunk-upload

  2. Add the service provider

\Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider::class

  3. Publish the config

php artisan vendor:publish --provider="Pion\Laravel\ChunkUpload\Providers\ChunkUploadServiceProvider"
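Once installed, a controller along the lines of the library's documented `FileReceiver` usage ties the steps together. This is only a sketch based on the package's README; the class and method names may differ between versions, and the storage path is a placeholder:

```php
<?php
use Illuminate\Http\Request;
use Illuminate\Routing\Controller;
use Pion\Laravel\ChunkUpload\Exceptions\UploadMissingFileException;
use Pion\Laravel\ChunkUpload\Handler\HandlerFactory;
use Pion\Laravel\ChunkUpload\Receiver\FileReceiver;

class UploadController extends Controller
{
    public function upload(Request $request)
    {
        // Build a receiver for the "file" field, auto-detecting the
        // client uploader's chunk protocol from the request.
        $receiver = new FileReceiver('file', $request, HandlerFactory::classFromRequest($request));

        if (!$receiver->isUploaded()) {
            throw new UploadMissingFileException();
        }

        $save = $receiver->receive();

        if ($save->isFinished()) {
            // All chunks received: move the assembled file into storage.
            return $save->getFile()->move(storage_path('uploads'));
        }

        // Chunk stored; report progress so the client sends the next one.
        return response()->json(['done' => $save->handler()->getPercentageDone()]);
    }
}
```

Since each chunk is a small, fast request, none of them ever comes near the one-minute cutoff.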

murtuza hussain
  • Thanks, but I'm using a whole bunch of frameworks that make this integration hard. If I have to do development, I'll just do the upload straight to S3. – Rudiger Dec 22 '18 at 05:51
  • AWS has a PHP SDK which will help you out with this: https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-multipart-upload.html – murtuza hussain Dec 22 '18 at 13:39

In my opinion it may be related to one of these:

About the Apache config (/etc/httpd/conf or /etc/apache2/conf):

Timeout 300
max_execution_time = 300

About php config ('php.ini'):

upload_max_filesize = 2000M
post_max_size = 2000M
max_input_time = 300
memory_limit = 3092M
max_execution_time = 300

About PostgreSQL config (execute this request):

SET statement_timeout TO 0;

About a proxy (or Apache mod_proxy): it may also be due to the proxy timeout configuration.

A. STEFANI

In case anyone else has the same issue: in my setup the HTTP request had to go through a proxy server and a WAF. Small file uploads were fine, but with large files the TCP connection was automatically closed. To validate whether this is your case:

Simply change your hosts file so the domain points at the web server's IP address (or use Firefox with no proxy, if there is no WAF). If the problem goes away, it was caused by the proxy or the WAF sitting between your web server and the browser.
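One way to sketch that check, with 203.0.113.10 and example.com standing in as placeholders for the origin server's IP and your domain:

```
# /etc/hosts on the client machine: resolve the domain straight to the
# origin web server, skipping the proxy/WAF in between
203.0.113.10    example.com
```

A per-request alternative that avoids editing the hosts file is curl's `--resolve example.com:443:203.0.113.10` option.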

LIU YUE

A connection reset occurs when the PHP process dies without a proper error message.

Changing the Oracle client version from 19 to 12c, and then configuring it appropriately in php.ini, solved the connection reset issue for our team.