
I'm wondering if the HTTP protocol has a limit on body size. For example, is it possible to send a 1 TB file from a server? This is a hypothetical situation; a file of that size is just an example.

My question is about restrictions imposed by the protocol, not the server. And I'm talking about the response body, not the request.

Max Koretskyi
  • Of course it is. It would take a while, though. Just increase the timeout – Alex Oct 12 '15 at 15:52
  • It ultimately depends on the server implementation - for example, IIS7 has a hard limit of 30MB by default. You *probably* wouldn't want to transfer a 1TB file through the HTTP protocol. You may find better luck using [rsync or FTP](http://stackoverflow.com/questions/9707900/what-is-the-fastest-way-to-transfer-files-over-a-network-ftp-http-rsync-etc). Your absolute best bet would definitely be to use some kind of streaming approach, though – Dan Oct 12 '15 at 15:53
  • @ARedHerring "IIS7 has a hard limit of 30MB" That's not true. The default value for the server is 30MB, but it can be changed in the config files (on [server](https://support.microsoft.com/en-us/kb/942074) or [site](https://msdn.microsoft.com/en-us/library/ms689462(v=vs.90).aspx) level) – Andreas Oct 12 '15 at 15:59
  • @Andreas You seem to have missed the words "by default" after the part you're quoting me on. – Dan Oct 12 '15 at 16:00
  • 1
    There is no limit defined in the protocol and it is very unlikely that any server has an issue delivering such a bigger payload. However I would expect most clients to have issues with that. – arkascha Oct 12 '15 at 16:01
  • @arkascha, thanks. Do you know any approaches where such big files are delivered with several HTTP responses? – Max Koretskyi Oct 12 '15 at 16:04
  • 1
    Certainly one could split a big payload into smaller chunks, but that is questionable. Especially since that does _not_ allow to use standard clients for the task, since the client would have to recombine the chunks into the original payload. So it would require special code. I am not aware that such approaches are implemented in mainstream. As others mentioned: it might make more sense to look for another, more suitable protocol. – arkascha Oct 12 '15 at 16:07
  • @arkascha, thanks a lot, you've answered my question perfectly. Post your comments as an answer, and I'll accept it. – Max Koretskyi Oct 12 '15 at 16:08
  • Your wish is my command :-) Have fun! – arkascha Oct 12 '15 at 16:09

2 Answers


There is no limit defined in the protocol, and it is very unlikely that any server would have an issue delivering such a big payload. However, I would expect most clients to have issues with it.

Certainly one could split a big payload into smaller chunks and use multiple HTTP requests, but that approach is questionable, especially since it does not allow standard clients to be used for the task: the client would have to recombine the chunks into the original payload, so it would require special code. I am not aware of such approaches being implemented in the mainstream. As others mentioned, it might make more sense to look for another, more suitable protocol.
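For what it's worth, if one did go the chunked route, HTTP's standard byte-range mechanism (the `Range` request header, RFC 7233) is the usual building block - download managers use it to fetch and resume large files in pieces. A minimal sketch of the bookkeeping involved, assuming a hypothetical 1 TB file fetched in 100 MB chunks (all sizes here are illustrative, not from the thread):

```python
def range_headers(total_size: int, chunk_size: int):
    """Yield the Range header value for each chunk of a file of total_size bytes."""
    for start in range(0, total_size, chunk_size):
        # The Range header uses inclusive byte offsets: bytes=start-end
        end = min(start + chunk_size, total_size) - 1
        yield f"bytes={start}-{end}"

one_tb = 1 << 40                      # 1 TiB in bytes
chunk = 100 * 1024 * 1024             # 100 MiB per request
headers = list(range_headers(one_tb, chunk))

print(len(headers))    # number of requests needed to cover the whole file
print(headers[0])      # first chunk: "bytes=0-104857599"
```

Each header value would be sent on its own GET request; a server that supports ranges replies with status 206 (Partial Content), and the client appends the chunks in order to rebuild the file.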

arkascha

There isn't a limit in the protocol itself. Individual servers can impose one, though: in Apache, for example, the LimitRequestBody directive caps the size of a request body, and it is set to zero (unlimited) by default.
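As an illustration (the path and size here are made up), the directive can be set in a server, virtual host, or directory context:

```apache
# Hypothetical example: cap request bodies at 100 MB for one upload directory.
<Directory "/var/www/uploads">
    # LimitRequestBody takes a byte count; 0 (the default) means unlimited.
    LimitRequestBody 104857600
</Directory>
```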

Andres