Possible Duplicate:
REST API - why use PUT DELETE POST GET?

I asked this question before, but I still don't understand why we need to use the different HTTP methods DELETE/PUT/POST/GET in order to build a nice API.

Wouldn't it be a lot simpler to pass all the information in request parameters and have a SINGLE ENTRY POINT for your API?:

GET www.example.com/api?id=1&method=delete&returnformat=JSON
GET www.example.com/api?id=1&method=delete&returnformat=XML

or

POST www.example.com/api {post data: id=1&method=delete&returnformat=JSON}
POST www.example.com/api {post data: id=1&method=delete&returnformat=XML}

and then we can handle all methods and data internally, without the need for hundreds of URLs...

What would you call this type of API? Apparently it's not REST, and it's not SOAP. Then what is it?

UPDATE I'm not proposing any new standards here. I am merely asking a question in order to better understand why web services work the way they do.

UPDATE 2 Hmm. OK, after googling around for some time and looking at various APIs, it looks like this approach is closest to JSON-RPC. It looks interesting. It's implemented in Yahoo Mail, for example: yahoo mail json-rpc api
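For illustration, here is a minimal, hypothetical sketch of that JSON-RPC style of dispatch (the method names and payload shape are made up, not Yahoo's actual API): every operation arrives at one endpoint, and the method name lives in the JSON body rather than in the URL or the HTTP verb.

```python
import json

# Hypothetical handler for one operation.
def delete_item(item_id):
    return {"deleted": item_id}

# The single-endpoint dispatch table: method name -> handler.
HANDLERS = {"delete": delete_item}

def handle_request(raw_body):
    """Dispatch a JSON-RPC-style request. The server looks only at the
    payload; the URL and HTTP verb carry no routing information."""
    req = json.loads(raw_body)
    handler = HANDLERS[req["method"]]
    result = handler(*req.get("params", []))
    return json.dumps({"id": req.get("id"), "result": result})

# Every call is a POST to the same URL; only the body differs.
response = handle_request('{"method": "delete", "params": [1], "id": 7}')
```

Note that to any intermediary such a call is an opaque POST: a cache or proxy cannot tell a read from a delete, which is exactly the trade-off discussed in the answers.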

Stann
  • I would name this type of API or URL "ugly". You need to use different HTTP requests because this is what these requests were created and intended for. – Andrea Jan 01 '11 at 18:31
  • You're fooling yourself if you think that your system has 'fewer' URLs. You are trading a simple path for more complex query params. The rest is subjective ... – nate c Jan 01 '11 at 18:46
  • AFAIK, SOAP over HTTP uses the POST method for all web-service requests. But this works since SOAP is yet another protocol (and abstraction) which is independent of its underlying protocols, which certainly isn't the case with pure HTTP services (or RESTful services, as we should be calling them). – Sanjay T. Sharma Jan 01 '11 at 18:53
  • hmm. it looks like what I'm trying to describe here is best described as: json-rpc – Stann Jan 01 '11 at 21:44

3 Answers


The reason for URLs and the HTTP methods is to allow intermediaries to have a basic understanding of what the request is doing. The REST architectural style is a layered architecture that allows other components to sit in between the client and the origin server. These components could be proxies, caches, firewalls, load balancers, almost anything you want. The URL is a way to communicate to the intermediary what you are working on, and the HTTP method is a crude explanation to the intermediary of what you are doing.

Without the URL and the HTTP method, a cache like Squid or nginx just could not work. It would not know what resource a user is trying to access, and it would not know when to invalidate a cached entry.
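As a toy sketch of why this matters (this is assumed, simplified logic, not Squid's or nginx's actual implementation): a cache can store responses to safe GETs by URL and drop its copy when an unsafe method touches that URL. With a single entry point like GET /api?method=delete, the cache below could not make that distinction.

```python
# A toy HTTP cache. Safe GETs are stored by URL; unsafe methods
# (PUT/POST/DELETE) are forwarded and invalidate the stored copy.
class ToyCache:
    def __init__(self, origin):
        self.origin = origin      # callable (method, url) -> response body
        self.store = {}           # url -> cached body

    def request(self, method, url):
        if method == "GET":
            if url not in self.store:
                self.store[url] = self.origin(method, url)
            return self.store[url]
        # Unsafe method: forward it and drop any stale cached entry.
        self.store.pop(url, None)
        return self.origin(method, url)

# Count how many requests actually reach the origin server.
calls = []
def origin(method, url):
    calls.append((method, url))
    return f"{method} {url}"

cache = ToyCache(origin)
cache.request("GET", "/items/1")      # miss: hits origin
cache.request("GET", "/items/1")      # hit: served from cache
cache.request("DELETE", "/items/1")   # forwarded, entry invalidated
cache.request("GET", "/items/1")      # miss again: refetched
```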

If you have a system that has no intermediaries, then you could do exactly what you are describing with very few negative side effects. However, before you conclude you are not using any intermediaries, realize that on a Windows machine web requests are routed through the WinINet cache, which is an HTTP intermediary that lives on the client machine. I would be surprised if other operating systems did not have equivalent functionality.

The use of the layered component architecture is a commonly ignored part of REST, but when used to its potential it can be very valuable. Ask the Stack Overflow developers.

Another key issue to address is that you are, not surprisingly, making the assumption that REST is about creating APIs. REST is actually about building distributed systems. There is no limit to the number of logical servers that can participate in a REST system. If you consider the Stack Overflow site again, the images come from a different set of servers than the JavaScript libraries, which come from another set of servers than the actual site content.

Defining a single endpoint where all the data must come from seriously constrains your ability to partition the application's resources. RESTful clients should not be coupled to a single entry point into the system; they should be ignorant of the location of the resources and should simply follow URLs that have been provided to them by the server in previous responses. This allows a distributed system to evolve over time: initially it can be hosted at a single location, and as the requirements change it can be moved and split across many servers. You just can't do this if your client is tied to a single entry point.
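As an illustrative sketch (the link names and URLs here are entirely made up, not any real media type): a client that hardcodes only the entry point and discovers everything else from links the server provides.

```python
# A client that follows links supplied by the server instead of
# constructing URLs itself. Real hypermedia formats (HAL, Atom, ...)
# define their own link conventions; this is a bare-bones stand-in.
def follow(resource, rel):
    """Return the URL the server advertised for relation `rel`."""
    return resource["links"][rel]

# A response body from the (hypothetical) entry point. Note that the
# linked resources may live on entirely different hosts.
account = {
    "balance": 100,
    "links": {
        "orders": "https://orders.example.com/u/42",
        "avatar": "https://img.example.com/u/42.png",
    },
}

orders_url = follow(account, "orders")
# If the server later moves orders to another host, only the link in
# the response changes; this client code does not.
```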

Darrel Miller

I have had to work with applications that designed their APIs as you propose. I now write REST APIs because of my experiences with those older-style APIs. What you're proposing was pretty common practice about 10 years ago. The web has since learned and now knows better.

In the end, the way you propose writing the API is not easier. It's harder. For everyone. Manipulating long query strings and using nothing but GET requests is cumbersome to write, harder to debug, and doesn't actually buy you anything over using a REST model. Having a single entry point in an application of any complexity is not a win -- it's a loss. Ever try to sift logs of an application like that to find something meaningful? It can be done, but I'd rather just find a "DELETE" in my logs than 'method=delete'. In reality, doesn't 'method=delete' seem a little redundant when you know that HTTP already has a DELETE method? Why write code to implement something your web server MUST support in order to even claim it supports HTTP? That's just silly!

Writing a REST API, in my experience, has always meant less code, a more straightforward implementation, and one that is much easier both to test and to debug.

From the standpoint of the person writing code against your API, the same benefits apply. Less code, more straightforward, easier to test. When I work with coders writing against my API who are having issues, determining the source of the issue typically involves comparing the output of a 'curl -XDELETE' call with the output of their code. No, really -- that's it. If curl works and their code doesn't, it generally removes my API as the source of the problem.

There's also no messy parsing of information in the body of the HTTP response. In a lot of cases, the calling code can get the most important information from the headers. If you call a PUT or DELETE method, you mainly just want to know whether it succeeded, in which case you read the HTTP status code. This also has the side effect of making things faster, because in those cases there is no parsing to do beyond the headers.

If you've only ever written APIs the way you propose, I can kind of understand the hesitance, but you will find that proposal silly the first time you deploy a real, production application using REST.

In short, a single entry point isn't simpler, isn't more efficient, and has zero benefit (and only more problems) when compared to a REST API.

jonesy
  • Using PUT and DELETE from a browser is full of problems. Web servers often have those methods disabled. Firewalls often filter those methods. The constraints of resource identification and uniform interface have a lot more to do with visibility and self-descriptiveness than they do with ease of server-side development work. – Darrel Miller Jan 01 '11 at 18:55
  • They're examples. Also, you clearly won't deploy a REST service on a web server that disables the methods you need, behind a firewall that filters them. Also, REST APIs aren't just for browsers. – jonesy Jan 01 '11 at 19:11
  • Oh, and the bits about visibility and descriptiveness were covered in answers to the OP's original post, and he didn't get it. He appears to want more tangible, less theoretical reasons to go in that direction. – jonesy Jan 01 '11 at 19:12
  • @jonesy If you write a product to sell, you don't control the server it gets deployed on. If you deploy a service onto the web, you don't control the firewalls that sit between you and the client. I understand that web browsers are not the only user agents out there, but they are a big portion. – Darrel Miller Jan 01 '11 at 19:29
  • @jonesy When I talk about visibility and self-descriptiveness, I am not talking about the readability of URIs. URIs are opaque to system components. The issue of idempotent/safe requests is however definitely an important part of the self-descriptiveness of a request, but not the only part. – Darrel Miller Jan 01 '11 at 19:32
  • @jonesy - "visibility and descriptiveness"? APIs are by programmers, for programmers. I guess it's also subjective, but IMHO setting a "method:getUserById" parameter inside a payload sent to a single API endpoint is more visible and descriptive than using a specific URL with various request types. – Stann Jan 01 '11 at 22:13

Utilizing the HTTP verbs is part of what makes something RESTful. If you use query string parameters to specify the action you want to perform on a resource, it's no longer REST.

REST makes use of the HTTP verbs because they're there, they're standardised, and you don't have to figure out what the method name might be, which could vary from API to API. Of course you could make up a new ANDREful approach and specify that the method parameter must always be called method and can only consist of certain values.

Would it be easier? That's too subjective. What would you name it? Whatever you want.

blowdart
  • how is having a single API entry-point subjective? – Stann Jan 01 '11 at 18:38
  • I am hoping you meant to say "By using querystring parameters [to specify the HTTP method] it's no longer REST" as opposed to outlawing query string parameters completely. REST is perfectly happy to let you use query string parameters to identify resources. – Darrel Miller Jan 01 '11 at 18:57
  • Everything is subjective Andre. REST is just a way of doing things, so is SOAP. – blowdart Jan 01 '11 at 19:07
  • Oops, yup, thanks Darrell, updated. – blowdart Jan 01 '11 at 19:08