
I am providing a facility to my web users so that they can upload their profile image from a URL rather than uploading it from their computer. I can see that a mischievous user could provide the URL of a huge file, or a URL that streams from something like /dev/random (very unlikely, but it can happen). Is there a way I can determine the size of the file before fetching it completely to my server?

Shiv Deepak

2 Answers


Depending on what you are doing to grab that remote file, there are different things you can do.

  • While file_get_contents('http://foobar.com') is quite convenient, it gives you the least amount of control. There's no straightforward way to make it do a HEAD request to grab the Content-Length header up front.
  • fsockopen() will make you cry when dealing with HTTPS.
  • curl is, well, curl: as ugly as it is powerful. There are other options as well, like the pecl_http extension (which basically wraps curl). With curl, I'd go about it like this:

    1. Check whether the resource provides a Content-Length header by making a HEAD request. Some servers/services don't handle HEAD requests; in that case, make a GET request and abort the transfer as soon as you've received the response headers.
    2. If (1) yielded a result, check whether it's greater than your limit. If so, abort (see the first sketch after this list).
    3. Use curl to fetch the resource, and have a look at CURLOPT_WRITEFUNCTION so you can abort the download once the received volume exceeds your limit. Do this check even if (1) yielded a result, as that result might have been spoofed (see the second sketch below).
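
Here's a minimal sketch of steps 1 and 2, assuming the curl extension is available; the URL and the 2 MiB limit are placeholders for your own values:

```php
<?php
// Ask the server for the Content-Length of a remote resource via a
// HEAD request (steps 1 and 2).
function remoteContentLength($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_NOBODY         => true, // send a HEAD request, skip the body
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS      => 3,
    ));
    curl_exec($ch);
    $length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
    curl_close($ch);

    // curl reports -1 when the server sent no Content-Length header
    return $length > -1 ? (int) $length : null;
}

$limit  = 2 * 1024 * 1024; // 2 MiB, an arbitrary example limit
$length = remoteContentLength('http://foobar.com/avatar.png');

if ($length !== null && $length > $limit) {
    die('Remote file is too large.');
}
```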

In the very worst case you'll have made one HEAD and one GET request just to acquire the Content-Length, plus another GET request that downloads at most $yourLimit bytes.
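
And a sketch of step 3, enforcing the limit while downloading in case the header was missing or spoofed. Here $limit plays the role of $yourLimit, the URL is again a placeholder, and the closure syntax requires PHP 5.3+:

```php
<?php
// Download a remote resource but abort as soon as more than $limit
// bytes have arrived, regardless of what Content-Length claimed.
$limit    = 2 * 1024 * 1024; // placeholder limit
$received = 0;
$buffer   = '';

$ch = curl_init('http://foobar.com/avatar.png'); // placeholder URL
curl_setopt_array($ch, array(
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_MAXREDIRS      => 3,
    CURLOPT_WRITEFUNCTION  => function ($ch, $chunk) use (&$received, &$buffer, $limit) {
        $received += strlen($chunk);
        if ($received > $limit) {
            // Returning fewer bytes than we were handed makes curl
            // abort the transfer with a write error.
            return 0;
        }
        $buffer .= $chunk;
        return strlen($chunk);
    },
));
$ok = curl_exec($ch);
curl_close($ch);

if (!$ok && $received > $limit) {
    die('Remote file exceeded the limit; transfer aborted.');
}
// $buffer now holds the complete file (at most $limit bytes).
```

An alternative is CURLOPT_PROGRESSFUNCTION, whose callback can likewise abort the transfer by returning a non-zero value.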

rodneyrehm

Check for the Content-Length header in the response from the server.
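
A minimal sketch of that idea, using PHP's built-in get_headers() forced into a HEAD request so the body is never transferred (the URL and limit are placeholders):

```php
<?php
// Make get_headers() issue a HEAD request via the default stream context.
stream_context_set_default(array('http' => array('method' => 'HEAD')));

$headers = get_headers('http://foobar.com/avatar.png', 1);

// After redirects, a header can appear multiple times and is then
// returned as an array; take the last (final) value.
$length = isset($headers['Content-Length']) ? $headers['Content-Length'] : null;
if (is_array($length)) {
    $length = end($length);
}

if ($length !== null && (int) $length > 2 * 1024 * 1024) {
    die('Remote file is too large.');
}
```

Keep in mind that servers may omit or misreport Content-Length, so this check alone is not a hard guarantee.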

Burhan Khalid