
Every now and then I get myself into a position where I need to send quite large AJAX GET requests from my JavaScript client to my application (running on IIS 7). If the URL is longer than 2048 characters, you get an exception by default. The easy workaround has been to increase maxQueryStringLength in web.config.
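For reference, this is roughly the web.config change I mean (a minimal sketch with illustrative values; on IIS 7 with ASP.NET 4 the request filtering limit usually has to be raised alongside maxQueryStringLength):

    <configuration>
      <system.web>
        <!-- maxQueryStringLength is measured in characters; 2048 is the default -->
        <httpRuntime maxQueryStringLength="8192" maxUrlLength="8192" />
      </system.web>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- the request filtering limit is measured in bytes -->
            <requestLimits maxQueryString="8192" maxUrl="8192" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>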

My question is whether there are any good reasons why you should NOT go down this path, and whether it is in fact considered a hack. I have read something about different browsers limiting the number of characters in the address field, but if you're only using AJAX, that may not be a problem worth considering?

I know that in many cases you should consider using POST instead when you want to pass large amounts of data in the request, but sometimes that is not an option, for instance when the URL returns a file for the user to download.

One specific example of where I have had to increase maxQueryStringLength: the user requests locations on a map that are restricted to a polygon. If you send that polygon in the URL, a few dozen coordinate pairs will easily exceed the maximum URL length.

Imad Alazani
Knut Marius
    Also see http://security.stackexchange.com/questions/20637/is-it-harmful-to-allow-http-requests-with-very-long-querystrings-in-iis – Spongeboy Oct 21 '13 at 22:54

3 Answers


Among other things it is a security measure...

Another point is that not all clients (i.e. browsers) support lengths above 2048.

For a very detailed explanation see https://stackoverflow.com/a/417184/847363

IF you are in an intranet situation AND have control over both the clients (browsers + versions) and the server, THEN it might be OK... for an application "in the wild" I would strongly recommend using POST instead.

Yahia

maxQueryStringLength is (probably) being used as a safeguard against DDoS/buffer-exhaustion attacks.

juhan_h

I don't see how this would immediately compromise security in any big way. Why should 2047 be safe and 2049 unsafe? IIS and ASP.NET are of course programmed not to overrun their memory buffers, because that would be a security problem. Managed code is also immune to buffer overruns.

As most applications don't need such large URLs, 2048 is a wise default in my opinion.

You can probably increase the limit without consequences.

usr
  • This is the feeling I also had before. I felt it was one of those things that couldn't do much harm if used sensibly, but some people always want to make you feel like you're doing something horrible as long as you're not doing it exactly by the book (in this case, using POST instead of GET). This particular application I am working on now is for intranet use only, where we can expect only newer browsers and where the risk of DDoS attacks can be considered minimal. – Knut Marius Aug 12 '13 at 12:03
  • DDoS'ing an application with long URLs is way harder than bringing it down with application-level problems like repeatedly requesting an expensive page. It is so hard to be practically DDoS safe that you can just give up on that. – usr Aug 12 '13 at 12:30