We have a hybrid WebForms/ASP.NET MVC application that does a lot of partial-page updates from JavaScript using jQuery.
The typical (unsafe) pattern in our application's JavaScript responds to a user request by rewriting part of the page with something like this:
$.ajax({
    type: "GET",
    url: urlVariableHere,
    success: function (data) {
        $("#elementIdHere").html(data);
    },
    error: function (XMLHttpRequest, ajaxOptions, ex) {
        errorHandlerFunction(XMLHttpRequest);
    }
});
"urlVariableHere" points to an MVC Controller method that returns a rendered MVC view. In other words, the Controller method returns a blob of raw HTML.
This pattern is unsafe because of the call to jQuery's html() method, which inserts the response as raw, unencoded markup and is therefore a cross-site scripting (XSS) sink. We now need this application to pass a Veracode static analysis, and this unsafe pattern is repeated several hundred times.
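To make the risk concrete, here is a minimal illustration (the payload and element id are hypothetical): if any part of the returned HTML can be influenced by an attacker, html() inserts it unencoded and inline event handlers run in the user's session.

    // Hypothetical attacker-influenced response body.
    var maliciousResponse = '<img src="x" onerror="alert(document.cookie)">';

    // html() parses and inserts the markup without encoding,
    // so the onerror handler fires: a DOM-based XSS sink.
    $("#elementIdHere").html(maliciousResponse);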
Hooman pointed out in his answer that if the Controller method renders a View that does not use Html.Raw, Razor HTML-encodes the output and we are safe from an XSS attack. The problem is that we still need to pass a Veracode static scan, and for internal reasons we cannot mark these flaws as "mitigated"; the application must pass the scan with zero mitigations.
What is the best (i.e., most time-economical) way to make this application safe while keeping the ability to do partial-page updates from JavaScript? Right now I see only three alternatives, all of them huge efforts:
- Change every partial-page postback to a full-page postback.
- Change every AJAX call to fetch JSON instead of HTML, and then build DOM elements from the JSON using safe methods such as document.createElement(), element.setAttribute(), and element.appendChild() (see the sketch after this list).
- Rewrite the application to use a JavaScript framework (Angular, Vue) or library (React).
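For the second alternative, here is a minimal sketch of what each call site would turn into. The JSON endpoint, response shape ({ "title": ..., "body": ... }), and element id are hypothetical:

    $.ajax({
        type: "GET",
        url: jsonUrlVariableHere,
        dataType: "json",
        success: function (data) {
            var container = document.getElementById("elementIdHere");
            container.textContent = "";          // clear previous content

            var heading = document.createElement("h2");
            heading.textContent = data.title;    // assigned as text, never parsed as markup

            var paragraph = document.createElement("p");
            paragraph.textContent = data.body;

            container.appendChild(heading);
            container.appendChild(paragraph);
        },
        error: function (XMLHttpRequest, ajaxOptions, ex) {
            errorHandlerFunction(XMLHttpRequest);
        }
    });

Because every server-supplied value is assigned via textContent, nothing from the response is ever parsed as HTML, but it does mean hand-building the DOM structure that the rendered partial view used to provide.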
Am I missing an easier solution?