
Similar questions have been asked about when to use POST and when to use GET in an AJAX request:

Here: What are the advantages of using a GET request over a POST request?

and here: GET vs. POST ajax requests: When and how to use either?

However, I want to make it clear that this is not exactly what I am asking. I understand idempotence, sensitive data, browsers' ability to retry in the event of an error, and the browser's ability to cache query string data.

My real scenario is that I want to prevent my users from simply typing the URL of my "Compute.cshtml" file into the address bar (i.e. the file on the server that my jQuery $.ajax function posts to).

I am in a WebMatrix C#.net Web Pages environment, and I have tried preceding the file name with an underscore (_), but apparently an AJAX request counts as exactly the kind of direct request the underscore is designed to block, so it, of course, breaks the request.

So if I use POST I can simply use this logic:

if (!IsPost)  //if this is not a post...
{
    Response.Redirect("~/"); //...redirect back to home page.
}

If I use GET, I suppose I can send additional data, like a string containing the value "AccessGranted", and check on the other side whether it equals that value, redirecting if not; but this could be easily duplicated by typing it into the address bar (not that the data on the other side is sensitive, but still).
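For illustration, that check might look something like this on the receiving page (a sketch only; the "access" parameter name and the "AccessGranted" value are made-up placeholders):

@{
    //Sketch of the GET "token" check described above. Anyone who reads the
    //page source can type the same query string by hand, so this is not
    //real protection.
    if (Request.QueryString["access"] != "AccessGranted")
    {
        Response.Redirect("~/"); //not our AJAX call: back to home page.
    }
}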

Anyway, I suppose I am asking whether it is okay to always use POST to handle this logic, or what the appropriate way to handle my situation is with regard to using GET or POST with AJAX in a WebMatrix C#.net Web Pages environment.

VoidKing

3 Answers


My advice is, don't try to stop them. It's harmless.

  • You won't have direct links to it, so it won't really come up. (You might want your robots.txt to exclude the whole /api directory, for Google's sake).
  • It is data they have access to anyway (otherwise you need server-side trimming), so you can't be exposing anything dangerous or sensitive.
  • The advantages of using GETs for GET-like requests are many, as the questions you linked to explain (caching, semantics, etc.).

So what's the harm in having that URL be accessible via direct browser entry? They can POST directly too, if they're crafty enough, using Fiddler's "compose" feature, for example. And having the GETs accessible via URL is useful for debugging.

EDIT: See sites like http://www.robotstxt.org/orig.html for lots of details, but a robots.txt that excluded search engines from your web services directory called /api would look like this:

User-agent: *
Disallow: /api/
Scott Stafford
  • Okay, I guess I just felt "out-of-my-element" here and wasn't sure if there was an easily usable predefined function or methodology for this type of thing already. So if they browse to this page and it displays a blank screen with a long string of concatenated values, then they just shouldn't do that. Thanks for the answer, I wasn't really sure what the best practices were for this sort of thing :) – VoidKing May 10 '13 at 14:35
  • I have to leave and it won't let me accept for another 8 minutes, so I will just accept when I get back. Thanks for the quick answer! – VoidKing May 10 '13 at 14:37
  • Oh, one more thing. I would absolutely LOVE an example of how to include your aforementioned exclusion in the robots.txt file. I've never done that before. – VoidKing May 10 '13 at 14:38
  • Added. Also, you can see the other answers that offer some possibilities to block it, and they work, some of the time. But IMO, it adds complexity for no purpose. There's lots of web services that accept direct browser GETs on the web, like my favorite: https://data.mtgox.com/api/2/BTCUSD/money/ticker – Scott Stafford May 10 '13 at 16:06
  • Sorry to ask so many questions but when you say, "web services directory called /api" do you mean that, let's say, I have a directory at the root of my site called "AJAX Pages" my inclusion in the robots.txt file would be: `User-agent: *` and on the second line: `Disallow: AJAX Pages/`? Did I get that right? – VoidKing May 10 '13 at 16:21
  • Mike Brind's answer below is also a very good answer and very good to know. – VoidKing May 10 '13 at 16:42
  • I believe so. Google 'robots.txt checker' to find a validator. – Scott Stafford May 10 '13 at 17:46

Similar to IsPost, you can use IsAjax to determine whether the request was initiated by the XMLHttpRequest object in most browsers.

if (!IsAjax)
{
    Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
}

It checks the request for an X-Requested-With header with the value XMLHttpRequest, or for an item in the Request object with the key X-Requested-With and a value of XMLHttpRequest.
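For reference, a manual check along those lines might look like this (a rough sketch of the documented behavior, not IsAjax's actual source):

@{
    //Look for X-Requested-With either as a header or as an item in the
    //Request collection, as described above.
    var requestedWith = Request.Headers["X-Requested-With"] ?? Request["X-Requested-With"];
    if (!"XMLHttpRequest".Equals(requestedWith, StringComparison.OrdinalIgnoreCase))
    {
        Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
    }
}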

Mike Brind
  • Really? I am aware of that method; however, I have explicitly tried it and it didn't work at all (I was using GET). Does it only work with POST? Either way, I will try again and let you know. – VoidKing May 10 '13 at 16:15
  • Awesome :) Now it is working (at least with POST). This is a good example of how I, personally testing code to find out how it works (especially when I don't fully understand the documentation), can cause myself problems... – VoidKing May 10 '13 at 16:23
  • The only other thing I can think of that was different about when I tested it before was that I was using straight JavaScript (you know, the whole `loadXMLDoc` function), but now I try to use only jQuery's `$.ajax` function (which I find to be far superior). I just love the way jQuery handles many compatibility issues for me (like `.css("opacity", "0.5")`, for example, which keeps me from writing a second call to `filter(alpha:opacity)` for IE). – VoidKing May 10 '13 at 16:30
  • Okay, well, I tested it with GET and it appears that the `IsAjax` method doesn't apply with GET. That's why it didn't work in my tests before. So if I use `IsAjax` it needs to be POST (saying nothing of their lesser-known cousins PUT and DELETE, which I won't even attempt to fry my brain inquiring about). – VoidKing May 10 '13 at 16:37
  • OHHHH Wait, I am wrong. It appears it DOES work with GET (I was testing with a variable that used `Request.Form`, obviously NOT compatible with GET). Does `IsAjax` not work with JavaScript's old `loadXMLDoc` function? – VoidKing May 10 '13 at 16:46
  • IsAjax doesn't care about the HTTP verb. If the implementation of XMLHttpRequest adds the appropriate headers, it will work. There is no standard loadXMLDoc function in JavaScript. – Mike Brind May 10 '13 at 18:12

One way to detect direct access (as opposed to an AJAX call) is to check for the presence of the HTTP Referer header. Directly typed URLs won't send a referrer, but you still won't be able to distinguish the AJAX call from a simple anchor link.

(Just keep in mind that some browsers don't generate the header for XHR requests.)
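A sketch of what that check could look like (Request.UrlReferrer is the standard ASP.NET property; the header's wire name is the famously misspelled "Referer"):

@{
    //Redirect when no referrer is present, i.e. the URL was typed directly.
    //As the caveats above note, this can't tell an AJAX call from a plain
    //anchor link, and some browsers strip the header for XHR requests.
    if (Request.UrlReferrer == null)
    {
        Response.Redirect("~/");
    }
}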

OnoSendai