
Question: If I have an untrusted, user-supplied URL to a file, how do I protect myself against server-side request forgery when I download that file? Are there tools in the .NET Framework (4.8) base class library that help me, or is there some canonical reference implementation for this use case?


Details: Our web application (an online product database) allows users to upload product images. We have the requirement that users should be allowed to supply the URL to a (self-hosted) image instead of uploading an image.

So far, so good. However, sometimes our web application will have to fetch the image from the (external, user-supplied) URL to do something with it (for example, to include the image in a PDF product data sheet).

This exposes my web application to the risk of Server-Side Request Forgery. The OWASP Cheat Sheet documents this use case as "Case 2" and suggests mitigations such as validating URLs and blacklisting known internal IP addresses.

This means that I cannot use the built-in classes for downloading files, such as WebClient or HttpWebRequest, since those classes take care of DNS resolution themselves, and I need to validate IP addresses after DNS resolution but before performing the HTTP request. I could perform DNS resolution myself and then create a web request with the (validated) IP address and a custom Host header, but that might mess up TLS certificate checking.
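For illustration, here is roughly what that manual pre-validation step would look like on .NET Framework 4.8 (the `SsrfGuard` class and the list of blocked ranges are just my own sketch, not something from the BCL):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

static class SsrfGuard // hypothetical helper, not part of the BCL
{
    // Resolve the host name and reject any address that points into
    // loopback, private or link-local ranges (incomplete list!).
    public static IPAddress[] ResolveAndValidate(Uri url)
    {
        if (url.Scheme != Uri.UriSchemeHttp && url.Scheme != Uri.UriSchemeHttps)
            throw new InvalidOperationException("Only http/https URLs are allowed.");

        IPAddress[] addresses = Dns.GetHostAddresses(url.DnsSafeHost);

        if (addresses.Any(IsPrivateOrReserved))
            throw new InvalidOperationException("URL resolves to an internal address.");

        return addresses;
    }

    private static bool IsPrivateOrReserved(IPAddress ip)
    {
        if (IPAddress.IsLoopback(ip)) return true;

        if (ip.AddressFamily == AddressFamily.InterNetwork)
        {
            byte[] b = ip.GetAddressBytes();
            return b[0] == 10                                // 10.0.0.0/8
                || (b[0] == 172 && b[1] >= 16 && b[1] <= 31) // 172.16.0.0/12
                || (b[0] == 192 && b[1] == 168)              // 192.168.0.0/16
                || (b[0] == 169 && b[1] == 254);             // 169.254.0.0/16 (link-local)
        }

        // IPv6: reject link-local and site-local addresses.
        return ip.IsIPv6LinkLocal || ip.IsIPv6SiteLocal;
    }
}
```

But even with a helper like this, the actual HttpWebRequest would resolve the host name again on its own, which is exactly the gap described above.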

To make a long story short, I feel like I am reinventing the wheel here, for something that sounds like a common-enough use case. (I am surely not the first web developer who has to fetch files from user-supplied URLs.) The .NET Framework has tools for protection against CSRF built-in, so I'm wondering if there are similar tools available for SSRF that I just haven't found.


Note: There are similar questions (such as this one) in the tag, but, contrary to those, my goal is not to "get rid of a warning" but to actually protect my system against SSRF.

Heinzi
  • Note: I'm not sure whether SO or softwareengineering.stackexchange.com is the better place for this question. I have chosen the former, since it is a concrete implementation problem in a specific technology, but if you think that the question is more suitable for the latter, feel free to suggest migration. – Heinzi Dec 16 '20 at 16:43

1 Answer

  1. Confirm the requirement with the business stakeholders. It's very possible they don't care how the file is obtained; they just want the user to be able to specify a URL rather than a local file. If that is the case, your application can use JavaScript to download the file in the browser and then upload it from there. This avoids the server-side problem completely.

  2. If you have to do it server-side, ask for budget for a dedicated server. Locate it in your DMZ (between the perimeter firewall and the firewall that isolates your web servers from the rest of your network). Use this server to run a program that downloads the URLs and puts the data where your main application can get it, e.g. in a database.

  3. If you have to host it on your existing hardware, use a dedicated process, running in a dedicated application pool with a dedicated user identity. The proper location for this service is on your web server (not application or database servers).

  4. Audit and monitor the security logs for the dedicated user.

  5. Revoke any permission to private keys or local resources such as the filesystem.

  6. Validate the protocol (http or https only).

  7. To the extent possible, validate the IP address, and maintain a blacklist.

  8. Validate the domain name to ensure it is a public URL and not something within your network. If possible, use a proxy server with public DNS (see the sketch below).
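As a rough sketch of points 6 and 8 (the class name and the proxy address below are placeholders, not something the framework provides for you):

```csharp
using System;
using System.Net;

class ImageFetcher // hypothetical helper; names and proxy address are placeholders
{
    public static byte[] Fetch(Uri url)
    {
        // Point 6: allow only http/https before doing anything else.
        if (url.Scheme != Uri.UriSchemeHttp && url.Scheme != Uri.UriSchemeHttps)
            throw new InvalidOperationException("Only http/https URLs are allowed.");

        using (var client = new WebClient())
        {
            // Point 8: route the request through an egress proxy that uses
            // public DNS and has no route into the internal network.
            client.Proxy = new WebProxy("http://egress-proxy.example:3128");
            return client.DownloadData(url);
        }
    }
}
```

Routing the download through such a proxy moves the decision about which addresses are reachable out of the application process entirely.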

John Wu
  • I think 2 is pretty much the best approach - I'd simply move that process out to a stateless Azure/AWS/Google Cloud function rather than a true dedicated server... making protection from rogue URLs completely someone else's problem. – Alexei Levenkov Dec 16 '20 at 18:58
  • Thanks, these are good points. I thought about 1 as well, but, unfortunately, [CORS exists to prevent exactly that](https://stackoverflow.com/q/39592752/87698). – Heinzi Dec 16 '20 at 21:09