
One of the most prevalent attacks today is cross-site scripting (XSS), which is more of an attack on your application's users than on the application itself, but it exploits server-side application vulnerabilities all the same. The results can be devastating and can lead to information disclosure, identity spoofing, and elevation of privilege.

Reading this document, I see many suggestions about sanitizing/validating input on the server side before handling it.

Well, as far as I know, using stored procedures (on the DB side) and .NET (to manage and get the responses), I'm quite safe.

Can you show me a scenario where both stored procedures and .NET could fail (without sanitizing/validation) and where I would be "unsafe"?

As I said, I mean "security", not persistence/accuracy of data! There I agree on sanitizing input...

markzzz

3 Answers


I know my answer references Java, but I felt it would provide at least some context (another reason: this response is too big for a comment) for why we need server- and client-side input sanitization.

From the document you referenced:

String Fields

To validate string fields, such as names, addresses, tax identification numbers, and so on, use regular expressions to do the following:

    Constrain the acceptable range of input characters.
    Apply formatting rules. For example, pattern-based fields, such as tax identification numbers, ZIP codes, or postal codes, require specific patterns of input characters.
    Check lengths.
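The guidance above can be sketched in Java (to match the rest of this answer); the specific patterns and the 50-character cap are illustrative assumptions, not values from the document:

```java
import java.util.regex.Pattern;

public class FieldValidator {
    // Illustrative pattern for a US ZIP code: five digits, optionally "-" plus four more.
    private static final Pattern ZIP = Pattern.compile("^\\d{5}(-\\d{4})?$");
    // Allow-list for a name field: letters, spaces, hyphens, apostrophes,
    // capped at 50 characters (constrains characters AND length in one pattern).
    private static final Pattern NAME = Pattern.compile("^[A-Za-z][A-Za-z '\\-]{0,49}$");

    public static boolean isValidZip(String input) {
        return input != null && ZIP.matcher(input).matches();
    }

    public static boolean isValidName(String input) {
        return input != null && NAME.matcher(input).matches();
    }
}
```

Note that a single anchored regex covers all three bullet points at once: character set, format, and length.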

If you haven't constrained/validated the length of this string, either on the client side or the server side, a sophisticated attacker may disrupt your system by providing very long input strings. Indeed this is an issue in Java (I'm not sure whether it applies to .NET/IIS; I assume it does because .NET uses hash codes for equality in its hash tables, but I may be wrong).
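The Java issue alluded to is the hash-collision denial of service: without a size limit, an attacker can post thousands of parameter names that all share one hash code, degrading hash-table lookups from O(1) to O(n). A minimal sketch of why colliding keys are cheap to produce in Java:

```java
public class HashCollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" collide: 31*'A' + 'a' == 31*'B' + 'B' == 2112.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // prints true
        // Because String.hashCode is built from that same recurrence, any
        // concatenation of such blocks collides too: "AaAa", "AaBB", "BBAa"
        // and "BBBB" share one hash code, giving 2^n colliding keys of length 2n.
        System.out.println("AaAa".hashCode() == "AaBB".hashCode()); // prints true
    }
}
```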

Here is an interesting case we had a couple of days ago here at SO.

If you constrain the string size to a limited number of characters, you can safely avoid these issues.

kosa

If you're passing your data to standard .NET Framework objects then these should handle their own sanitisation. You should think of the data that you need to sanitise as all the data that .NET does not know how to deal with, i.e. data where the .NET framework does not know what it will be used for.

For example, the .NET framework will not know that a string value is to be used as a social security number. If you're passing this to a 3rd party system, either directly or stored in a database and passed on at a later time, you're going to want to sanitise and validate the input to check that the social security number is in the expected format. Failing to do this could make your system vulnerable to attack via security vulnerabilities in the 3rd party system: e.g. a social security number containing certain characters may make the 3rd party system crash, and in turn this could create a denial of service in your system as it tries to communicate with a service that is down. This is just one of many possible scenarios, and it doesn't necessarily have to result in a DoS attack, but ultimately you want to validate and sanitise input to guard against the unknown.
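Sticking with the social security number example, a format check before the value ever reaches the 3rd party system might look like this sketch (in Java, matching the first answer; the dash-separated `ddd-dd-dddd` layout is an assumption for illustration):

```java
import java.util.regex.Pattern;

public class SsnValidator {
    // Assumed layout: three digits, two digits, four digits, dash-separated.
    private static final Pattern SSN = Pattern.compile("^\\d{3}-\\d{2}-\\d{4}$");

    public static boolean isValidSsn(String input) {
        return input != null && SSN.matcher(input).matches();
    }
}
```

Anything that fails the check is rejected at your boundary instead of being forwarded to a system whose failure modes you don't control.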

As you specifically mention XSS, this is a vulnerability that the standard .NET web controls are actually susceptible to unless the default option of Request Validation is active (see http://www.asp.net/whitepapers/request-validation). This is because the .NET web controls do not automatically HTML encode characters when setting the Text property. So if you're running your site without request validation on, you should make sure that all output is properly encoded (this is a form of output sanitisation).

Given the choice I would develop with the MVC framework rather than Web Forms, as it makes HTML output sanitisation easy (e.g. using <%: %> brackets will HTML encode output automatically). This enables your application to correctly handle malicious (and non-malicious) <script> tags entered as input without the need for validation. Because the output is properly sanitised, request validation can be disabled if your application is protected in this way, which is my preferred option, as then you're not manipulating user input unnecessarily. For example, if SO sanitised input and removed <script> tags, it would be impossible for me to include them in this message.
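What HTML output encoding does can be sketched as a minimal helper (in Java, to match the first answer; in ASP.NET you would use `HttpUtility.HtmlEncode` or the `<%: %>` syntax rather than rolling your own):

```java
public class HtmlEncoder {
    // Encodes the five characters that matter for HTML element content and
    // quoted attribute values. Real code should prefer a vetted encoder.
    public static String encode(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this in place, a `<script>` tag survives as visible text (`&lt;script&gt;`) instead of executing, which is exactly why the input itself never needs to be mutilated.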

Another type of sanitisation that is common in web apps is correctly formatting strings that will be injected into JavaScript (e.g. single and double quote characters). In a nutshell, you're guarding against a user inserting malicious JavaScript via an input that will be displayed to another user; if not properly sanitised it will execute, and when it executes the request runs as the other user, who may have a higher security level in the application, so all sorts of damage could occur.
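That JavaScript-string context needs its own escaping rules. A sketch (in Java; the escape set is the usual minimum, and `/` is escaped so user input can never form a closing `</script>` tag):

```java
public class JsStringEscaper {
    // Escapes a value for embedding inside a quoted JavaScript string literal.
    public static String escape(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\\\"); break; // escape character itself
                case '"':  sb.append("\\\""); break;
                case '\'': sb.append("\\'");  break;
                case '/':  sb.append("\\/");  break; // blocks "</script>" breakout
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```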

SilverlightFox
  • 1 - "If you're passing your data to standard .NET Framework objects then these should handle their own sanitisation" which kind of handle I should call? Or you mean "it call its own sanitation tools automatically? – markzzz Feb 15 '12 at 13:45
  • 2 - Yeah, I mean "sure" thinking about "Request Validation" active. In fact would be strange put it to false : its a nice feature, I don't see why disable it :) – markzzz Feb 15 '12 at 13:46
  • 3 - To be honest I don't know MVC on .NET, always used Web forms. But I also know that on .cs, .NET encode automatically fields, also when the strings are already encoded (which I hate; follow this post http://stackoverflow.com/questions/9035736/why-i-get-a-different-result-with-the-same-htmldecode-function) – markzzz Feb 15 '12 at 13:48
  • 4 - I think .NET manage both single and double quote...escaping them correctly. Didnt find any issues with this at the moment! (I mean, if on a form I insert the string `Hello my name "is" Marco`, and in the response I make `console.log("<%=string%>")` it won't be broken! So the escaping is automatically managed by .NET) – markzzz Feb 15 '12 at 13:50
  • @markzzz Why would one ever enable "Request Validation"? Seems like a misguided feature to me. It's in the wrong place, and doesn't know enough about context to reliably protect you, and it causes false positives. – CodesInChaos Feb 15 '12 at 14:07
  • Oh! I never think about this! Can you show to me "false positives"? Just an example... to improve my understanding (or misunderstanding) about this feature... – markzzz Feb 15 '12 at 14:19
  • IIRC request validation will trigger with certain combinations of less than and greater than characters, whether or not these are script or HTML tags. – SilverlightFox Feb 15 '12 at 17:07
  • Re: 3, it doesn't do this for text based controls (e.g. Label). – SilverlightFox Feb 15 '12 at 17:08
  • Re: 4, .NET doesn't know the context, so it won't be encoding `string` properly. – SilverlightFox Feb 15 '12 at 17:09

Sanitizing input is rarely the correct choice. You should sanitize or encode where the data is used, because only there do you know what needs to be encoded, escaped, or removed.

In most cases manual sanitizing isn't necessary when you use well-designed APIs. But in some cases you still need to encode or validate manually, because you know more than the API does. For example, automatically HTML-encoding output doesn't protect you if the string is used inside a piece of JavaScript embedded in the HTML page:

<script>var text="@Model.UserControlledData";</script>

The automatic encoding rules fit HTML, not JavaScript strings, so this would be insecure.
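A sketch of the mismatch (in Java, matching the first answer; the encoder here is a minimal stand-in for automatic HTML encoding, not Razor's actual implementation): an HTML encoder never touches the backslash, even though backslash is the escape character inside a JavaScript string literal, and conversely the `&quot;` entities it emits are not decoded inside a classic `<script>` element.

```java
public class ContextMismatchDemo {
    // Minimal stand-in for automatic HTML encoding (illustrative only).
    public static String htmlEncode(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        // The backslash passes through untouched: HTML encoding knows
        // nothing about JavaScript string-literal escaping.
        System.out.println(htmlEncode("\\"));           // prints \
        System.out.println(htmlEncode("\";alert(1)"));  // prints &quot;;alert(1)
    }
}
```

So the value has to be escaped for the JavaScript context first (or instead), as the answer says: encode at the point of use, for the context of use.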

CodesInChaos