
The scope/context of this question: I am to develop a Java/Java EE based distributed server-side application that is scalable (scale-out by adding service instances, rather than scale-up).

My application comprises servlets that use multiple instances of distributed back-end services to process client requests. If I need more throughput, I want to be able to simply add more instances of these distributed services (JVMs on the same or another machine) and (expect to) see an increase in throughput.

To achieve this, I was thinking of a loosely coupled asynchronous system. I thought I would use async servlets (Servlet 3.0) and an application-managed thread pool that places client requests on JMS queues, which would then be picked up by one of the distributed service instances and processed. The responses can be relayed back to the client over JMS, from the service instances to a response thread in the servlet container.
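Roughly, the flow I have in mind looks like this. This is a single-JVM stdlib sketch, not the real thing: a BlockingQueue stands in for the JMS request queue, a CompletableFuture keyed by a correlation ID stands in for the response relay back to the waiting servlet, and all names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.*;

// Single-JVM sketch of the proposed flow. A BlockingQueue stands in for the
// JMS request queue; a CompletableFuture (keyed by correlation ID) stands in
// for the response relay back to the servlet side.
public class QueuedDispatchSketch {
    record Request(String correlationId, String payload) {}

    private final BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Worker loop: plays the role of a distributed service instance.
    void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Request req = requestQueue.take();
                    String result = "processed:" + req.payload(); // the real work
                    pending.remove(req.correlationId()).complete(result);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the servlet side: enqueue the request, return a future for the response.
    CompletableFuture<String> submit(String correlationId, String payload) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        requestQueue.add(new Request(correlationId, payload));
        return future;
    }

    public static void main(String[] args) throws Exception {
        QueuedDispatchSketch sketch = new QueuedDispatchSketch();
        sketch.startWorker();
        System.out.println(sketch.submit("req-1", "hello").get(5, TimeUnit.SECONDS));
    }
}
```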

However, an asynchronous system is (obviously) more complex than a synchronous one (e.g., error handling, relaying errors back to the client, request tracking, etc.). I am also worried about the future maintainability of the design/code.

So the question arises: does it make sense to do this synchronously, while still remaining distributed, scalable, and loosely coupled? If the answer is yes, then please also share possible ways of achieving this (while remaining 'constructive').

If I can do this well synchronously, it will simplify the entire system. I don't want to add complexity to the system unnecessarily.

(Assuming it makes sense) One possible implementation I could think of is using RMI. For example: a service registry where the distributed service instances register themselves, plus a load balancer that distributes the RMI calls across all available instances. But it feels like an older-generation solution. Are there better options available?
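In code, the registry idea might look roughly like this. This is a single-JVM sketch using plain `java.rmi`; in the real system each service instance would run in its own JVM and a load balancer would pick among the registered instances. All names are illustrative.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Single-JVM sketch of the registry idea: a service instance registers itself
// under a name, and a caller looks it up and invokes it synchronously.
public class RmiRegistrySketch {
    public interface EchoService extends Remote {
        String process(String input) throws RemoteException;
    }

    static class EchoServiceImpl implements EchoService {
        public String process(String input) {
            return "processed:" + input; // the real work
        }
    }

    static String demo() throws Exception {
        // Service side: create a registry and register an instance.
        Registry registry = LocateRegistry.createRegistry(1099);
        EchoServiceImpl impl = new EchoServiceImpl();
        EchoService stub = (EchoService) UnicastRemoteObject.exportObject(impl, 0);
        registry.rebind("echo-1", stub);

        // Client side: look up an instance and call it synchronously.
        EchoService service = (EchoService) registry.lookup("echo-1");
        String result = service.process("hello");

        // Clean up so the JVM can exit.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```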

Edit: Other details about the scope of this question:

  • The client side is browser-based and does not demand an asynchronous server side.
  • I don't need server push.
  • At any time, I won't have more outstanding requests than the max worker threads of the popular web servers (even Apache).
  • For the above reasons, the use cases mentioned in a related question don't seem to apply to my scenario.
    What's the actual function of the application besides including all kinds of buzzwords in the question? How many queries per second are we talking about? Do the backend services already exist? If so, what protocols are they speaking? – Philipp Reichart Apr 17 '13 at 19:55
  • I didn't want to taint the question with the application/domain, hence I left it out. As I mentioned in the question, at any time I won't have more outstanding requests than the max worker threads of the popular web servers (even Apache). The back-end services are Java-based and are yet to be developed, so I am free to choose what they speak. Thanks! – 2020 Apr 17 '13 at 20:00

1 Answer


Loose coupling and distribution are independent of whether processing is synchronous or asynchronous.

With scalability, the matter is more complex. In a synchronous model, you need one thread per pending request. If you need to scale to really high load (say, thousands of concurrent requests per server), an asynchronous model may scale better. To reap that benefit, however, the entire processing, starting from the handling of incoming connections, needs to be done asynchronously. There is little point in having a synchronous request-processing thread delegate to an asynchronous thread pool and block until that pool has computed the result; after all, the request thread could just as well have done the work itself.
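To make that last point concrete, here is a minimal stdlib sketch of the pattern being warned against (names are illustrative): the request thread submits work to a pool and then blocks on the `Future`, so one thread is occupied for the whole duration of the request either way.

```java
import java.util.concurrent.*;

// Sketch of the anti-pattern described above: the "request thread" hands the
// work to a pool but then blocks on the Future, so one thread is still tied
// up for the full duration of the request. Nothing is gained over doing the
// work in the request thread directly.
public class BlockingDelegationSketch {
    static final ExecutorService pool = Executors.newFixedThreadPool(4);

    static String handleRequest(String payload) throws Exception {
        Future<String> future = pool.submit(() -> "processed:" + payload);
        // The request thread now blocks until the pool is done. It could just
        // as well have computed the result itself.
        return future.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleRequest("hello"));
        pool.shutdown();
    }
}
```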

If you need to return a response, I'd therefore go for synchronous request processing whenever scalability permits (which it usually does).

Edit: There are numerous ways to talk to the distributed back-end servers. You might simply use EJB (which, if I recall correctly, uses RMI under the hood). Or you might use web services behind a load balancer.
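For illustration, here is the load-balancing idea in miniature as a plain-Java sketch. The functions stand in for EJB remote proxies or web-service client stubs; a client-side round-robin picks the next instance for each call. All names are illustrative.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.UnaryOperator;

// Client-side round-robin over interchangeable service instances. In practice
// the UnaryOperators would be EJB remote proxies or web-service client stubs.
public class RoundRobinSketch {
    private final List<UnaryOperator<String>> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinSketch(List<UnaryOperator<String>> instances) {
        this.instances = instances;
    }

    // Dispatch the call to the next instance in round-robin order.
    String call(String payload) {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i).apply(payload);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(List.<UnaryOperator<String>>of(
                p -> "instance-A:" + p,
                p -> "instance-B:" + p));
        System.out.println(lb.call("req1"));
        System.out.println(lb.call("req2"));
    }
}
```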

meriton
  • Yes. I agree that if I were to go the async way, it should start at servlet processing. That's why I explicitly mentioned servlet processing in the title itself. My question is HOW to achieve this synchronously while keeping the other benefits. Maybe you left out the HOW because there are tons of ways to achieve it. Even then, please state a couple of options to guide me. Thanks! – 2020 Apr 17 '13 at 20:08
  • Thanks! I was thinking there may be nicer ways to achieve this now. I still feel that there may be some new developments in this area. – 2020 Apr 18 '13 at 17:21
  • "There is little point to have a synchronous request processing thread delegate to an asynchronous thread pool, and blocking until that thread pool has computed the result - after all, the request thread could just as well have done the work itself.": Doing all the processing in the request thread would result in a monolithic design, wouldn't it? – 2020 May 01 '13 at 17:57
  • Not knowing how you define "monolithic design" I can neither answer that nor assess whether a "monolithic design" would be bad in this case. – meriton May 01 '13 at 18:25
  • Yes. I realized that this is a subjective thing and requires a long discussion to understand both sides. No worries! You did answer my main question anyway, so I'm accepting it. – 2020 May 01 '13 at 18:35