
In short, if I am sending an HTTP POST with a large-ish (20-30 MB) payload and the connection drops halfway through sending the request to the server, can I recover the 10 MB+ that was sent before the connection dropped?

In my testing of PHP on NGINX, if the connection drops during the upload, my PHP never seems to start. I have ignore_user_abort(1) at the top of the script, but that only seems to be relevant once a complete request has been received.

Is there a configuration setting somewhere that will allow me to see all of the request that was received, even if it wasn't received in full?

I'm sending these files mostly over intermittent connections, so I'd like to send as much as I can per request, and then just ask the server where to continue from. As things stand at the moment I have to send the files in pieces, reducing the size of the pieces if there are errors and increasing it if there haven't been any errors for a while. That's very slow and wasteful of bandwidth.
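To be concrete, this is roughly the shape of the "ask the server where to continue from" protocol I'm describing. It's only a sketch: the endpoint, the `X-Upload-Id` header, the `X-Received-Bytes` header and the storage path are made-up names for illustration, not anything I actually have running.

```php
<?php
// resume_upload.php -- illustrative sketch only.
// GET/HEAD: report how many bytes of this upload the server already holds.
// POST:     append the raw request body to the partial file.
ignore_user_abort(1);

$id = isset($_SERVER['HTTP_X_UPLOAD_ID'])
    ? preg_replace('/[^A-Za-z0-9_-]/', '', $_SERVER['HTTP_X_UPLOAD_ID'])
    : '';
$path = "/var/uploads/partial/$id";   // hypothetical storage location

if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    // Client asks: where should I continue from?
    header('X-Received-Bytes: ' . (file_exists($path) ? filesize($path) : 0));
    exit;
}

// Append this piece to whatever has already been received.
$in  = fopen('php://input', 'rb');
$out = fopen($path, 'ab');
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);

clearstatcache();
header('X-Received-Bytes: ' . filesize($path));
```

The sticking point is the POST branch: in my testing it never runs at all when the connection drops mid-request, which is exactly the behaviour I'm asking about.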

=======

I should clarify that what I'm after isn't so much uploading a large file in one go as being able to pick up from where I left off if the connection breaks. In all my testing, if the complete POST is not received, the whole request is junked and PHP is never notified, so I have to start from scratch.

I'll have to run some tests, but are you saying that if I used chunked transfer encoding for the request, PHP would get all the chunks received before disconnection? It's worth a try, and certainly better than making multiple smaller posts on the off chance that the connection will break.

Thanks for the suggestion.

SimonR
  • possible duplicate of [Upload very large files(>5GB)](http://stackoverflow.com/questions/13122218/upload-very-large-files5gb) – Jay Blanchard Jul 06 '15 at 17:02
  • Many chunking libraries handle situations like these so that you do not have to. I highly recommend using one, since it will allow bigger files as well as resumable uploads. – Rob Foley Jul 06 '15 at 17:02

1 Answer


Never process big file uploads through a scripting backend (Ruby, PHP). NGINX has built-in direct-upload functionality via the client_body_in_file_only directive; see my in-depth overview of it here: https://coderwall.com/p/swgfvw/nginx-direct-file-upload-without-passing-them-through-backend

The only limitation is that it doesn't work with multipart form data; it only works via AJAX or a direct POST from a mobile client or server to server.
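A minimal sketch of the NGINX side, assuming a PHP-FPM backend; the paths, the location name, the socket and the X_FILE parameter name are placeholders to adapt, not fixed values:

```nginx
location /upload {
    client_body_temp_path     /tmp/nginx_upload;  # where NGINX writes the raw body
    client_body_in_file_only  on;                 # always buffer the body to a file and keep it
    client_body_buffer_size   128K;
    client_max_body_size      100M;

    # Don't re-send the body to PHP; just hand over the temp file path.
    fastcgi_pass_request_body off;
    fastcgi_param             X_FILE $request_body_file;
    fastcgi_param             SCRIPT_FILENAME /var/www/handle_upload.php;
    include                   fastcgi_params;
    fastcgi_pass              unix:/run/php/php-fpm.sock;
}
```

The PHP script then reads the path from $_SERVER['X_FILE'] and moves or processes the file, instead of receiving the body itself.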

Anatoly
  • It's very good to know that this exists, but unfortunately it doesn't help me at the moment. In my testing I started transferring a large file, gave it time to get under way and then cut the connection. No files were left in the upload/temp folder. When I didn't break the connection, files uploaded fine (except only one of my custom headers seems to pass through, but that's probably a simple mistake on my part). – SimonR Jul 07 '15 at 18:50
  • The file is basically the body of the HTTP request dumped to disk. The request is either finished or interrupted, which causes the "issue", but that is expected behaviour. Try chunked requests, but I'm not sure whether it helps. – Anatoly Jul 07 '15 at 19:53
  • The investigation I've been able to do since has not been conclusive about reading a partial upload, which was my original question, but I've marked this as the answer because it helps a lot in moving things forward, and it is certainly the way to go if I wanted to send very large files in one go. Many thanks @mikhailov. – SimonR Jul 10 '15 at 10:23