
I have a resource, say a @POST method, serving clients. It doesn't take any external parameters, not even the caller URL (we're leaving that to the firewall) or user authentication.

However, we don't want to handle user requests simultaneously. When request1 is being processed and the method hasn't yet returned, a request2 coming in should receive a response with status 409 Conflict (or whatever status code applies) and shouldn't get served.

Is there a way of doing this without getting into anything on the server back-end side like multithreading?

I'm using Tomcat 8. The application will be deployed on JBoss; however, this shouldn't affect the outcome(?). I used Jersey 1.19 for coding the resource.

This question is related to How to ignore multiple clicks from an impatient user?.

TIA.

Roam
  • Can you tell us a little more about why you want to reject simultaneous calls? I can think of many use cases for serializing processing of requests, that is, queuing up responses one behind another, but why is the requirement to reject requests while any request is processing? – Nate Vaughan Aug 20 '16 at 21:17
  • @jakeblues pls see Q pointed in this Q. there can be many other cases. such a thing would take the burden off of the backend. feels that a servlet container offers this option-- don't know how/where. – Roam Aug 20 '16 at 21:44
  • @jakeblues looking to find out more than anything else. i'm not that crafty at web services. – Roam Aug 20 '16 at 21:45
  • How is this different from your [other similar question](http://stackoverflow.com/questions/39057416/how-to-ignore-multiple-clicks-from-an-impatient-user)? – Christopher Schultz Aug 23 '16 at 22:08

1 Answer


Depending on what you want to achieve, yes, it is possible to reject additional requests while a service is "in use." I don't know whether it's possible at the servlet level; servlets are designed to handle as many concurrent requests as possible on separate threads so that, say, if one user requests something simple and another requests something difficult, the simple request can be handled while the difficult request is still processing.
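As a rough sketch of what rejecting in-use requests could look like at the application level (the class and method names below are my own, not from the question), an AtomicBoolean can serve as a busy flag. In a Jersey 1.19 resource, the @POST method would call tryAcquire() on a shared instance, map a false return to something like Response.status(409).build(), and release the flag in a finally block:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a busy flag that admits one request at a time.
public class BusyFlag {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Atomically claim the single slot; false means "already in use."
    // compareAndSet makes the check-and-claim a single atomic step,
    // so two simultaneous requests cannot both win.
    public boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    // Must be called (in a finally block) by the request that acquired,
    // or the service stays "busy" forever after one failure.
    public void release() {
        busy.set(false);
    }
}
```

The finally-block release is the important detail: if the handler throws and the flag is never cleared, every later request gets rejected.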

The primary reason you would probably NOT want to return an HTTP error code simply because a service is in use is that the service didn't error; it was simply in use. Imagine trying to use a restroom that someone else was using and instead of "in use" the restroom said "out of order."

Another reason to think twice about a service that rejects requests while it is processing any other request is that it will not scale. Period. You will have some users have their requests accepted and others have their requests rejected, seemingly at random, and the ratio will tilt toward more rejections the more users the service has. Think of calling into the radio station to try to be the 9th caller, getting a busy tone, and then calling back again and again until you get through. This works for trying to win free tickets to a concert, but would not work well for a business you were a customer of.

That said, here are some ways I might approach handling expensive, possibly duplicate, requests.

If you're trying to avoid multiple identical/simultaneous requests from an impatient user, you most likely have a UX problem (e.g. a web button doesn't seem to respond when clicked because of processing lag). I'd implement a loading mask or something similar to prevent multiple clicks and to communicate that the user's request has been received and is processing. Loading/processing masks have the added benefit of giving users an abstract feeling of ease and confidence that the service is indeed working as expected.

If there is some reason out of your control why multiple identical requests might get triggered from the same source, I'd opt for a cache that returns the processed result to all requests but only processes the first one, retrieving the response from the cache for all the others.
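A minimal sketch of that cache idea, assuming requests can be keyed by some identifier (the key scheme and names here are my own): ConcurrentHashMap.computeIfAbsent guarantees the expensive work runs at most once per key, even with concurrent callers, and every caller gets the same result.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: memoize results per request key so only the
// first request does the work; identical requests reuse the result.
public class RequestCache {
    private final Map<String, String> results = new ConcurrentHashMap<>();
    final AtomicInteger computations = new AtomicInteger(); // for demonstration only

    // computeIfAbsent runs the mapping function at most once per key,
    // even under concurrent callers (a ConcurrentHashMap guarantee).
    public String serve(String key) {
        return results.computeIfAbsent(key, k -> {
            computations.incrementAndGet();
            return "processed:" + k; // stand-in for the real processing
        });
    }
}
```

In a real service you would also evict or expire entries so the cache doesn't grow without bound.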

If you really, really want to return errors, implement a singleton service that keeps a cache of some number of recent requests, detects duplicates, and handles them appropriately.
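One way such a duplicate detector could look (this is my own sketch, not a standard API): a LinkedHashMap in access order with removeEldestEntry gives a simple bounded LRU window of recent request keys, wrapped in a synchronized map because servlet threads call it concurrently.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: remembers the last `capacity` request keys;
// a repeat within that window is flagged so the resource can answer
// with an error status instead of reprocessing.
public class DuplicateDetector {
    private final Set<String> recent;

    public DuplicateDetector(final int capacity) {
        // LinkedHashMap with removeEldestEntry is a simple LRU;
        // synchronizedMap guards against concurrent request threads.
        this.recent = Collections.newSetFromMap(Collections.synchronizedMap(
            new LinkedHashMap<String, Boolean>(capacity, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                    return size() > capacity;
                }
            }));
    }

    // Returns true if this key was seen recently, i.e. a duplicate.
    public boolean isDuplicate(String key) {
        return !recent.add(key);
    }
}
```

The resource's @POST method would check isDuplicate(key) first and return the chosen error status when it reports true.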

Remember that if your use case is indeed multiple clicks from a browser, you likely want to respond to the last request sent, not the first. If a user has clicked twice, the browser will register the error response first, since it comes back immediately as the response to the last click. This further undermines the UX: a single click results in a delay, but two clicks result in an error.

But before implementing a service that returns an error, consider the following: what if two different users request the same resource at the same time? Should one really get an error response? What if the number of requests increases at certain times? Do you really want to return errors to what amounts to a random selection of the service's consumers?

Nate Vaughan