
I have a web app where, when the user submits a request, we send a JMS message to a remote service and then wait for the reply. (There are also async requests, and we have various niceties set up for message replay, etc., so we'd prefer to stick with JMS instead of, say, HTTP.)

In "How should I implement request response with JMS?", the ActiveMQ documentation seems to discourage the idea of either temporary queues per request or temporary consumers with selectors on the JMSCorrelationID, due to the overhead involved in spinning them up.

However, if I use pooled consumers for the replies, how do I dispatch from the reply consumer back to the original requesting thread?

I could certainly write my own thread-safe callback-registration/dispatch, but I hate writing code I suspect has already been written by someone who knows better than I do.
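For concreteness, this is roughly the kind of thing I'd be hand-rolling: a single listener behind the pooled reply consumers that dispatches each reply to a callback registered under its JMSCorrelationID. A sketch only; the class and method names here are made up.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Attached to the pooled reply consumers; routes each reply to whichever
    // requesting thread registered a callback for its correlation ID.
    public class ReplyDispatcher implements MessageListener {

        private final Map<String, Consumer<Message>> callbacks = new ConcurrentHashMap<>();

        // The requesting thread registers a callback before sending the request.
        public void register(String correlationId, Consumer<Message> callback) {
            callbacks.put(correlationId, callback);
        }

        // Runs on the JMS consumer thread for every reply message.
        @Override
        public void onMessage(Message reply) {
            try {
                Consumer<Message> callback = callbacks.remove(reply.getJMSCorrelationID());
                if (callback != null) {
                    callback.accept(reply); // e.g. completes a latch/future the requester is waiting on
                }
            } catch (JMSException e) {
                // uncorrelatable reply: log and drop it
            }
        }
    }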

That ActiveMQ page recommends Lingo, which hasn't been updated since 2006, and Camel Spring Remoting, which has been hellbanned by my team for its many gotcha bugs.

Is there a better solution, in the form of a library implementing this pattern, or in the form of a different pattern for simulating synchronous request-reply over JMS?


joshwa

5 Answers


In a past project we had a similar situation, where a sync WS request was handled with a pair of async request/response JMS messages. We were using the JBoss JMS implementation at the time, and temporary destinations were a big overhead.

We ended up writing a thread-safe dispatcher, leaving the WS waiting until the JMS response came in. We used the CorrelationID to map the response back to the request.

That solution was all home grown, but I've come across a nice blocking map impl that solves the problem of matching a response to a request.

BlockingMap
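The idea behind it can be sketched with plain JDK classes (this is not the linked library's actual API, just an illustration of the pattern): a map of per-correlation-ID slots, where the requesting thread blocks on get() until the JMS consumer thread puts the matching reply.

    import java.util.Map;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    // "Blocking map" idea: get(key) blocks until another thread puts a value
    // under the same key (here, the JMSCorrelationID).
    public class CorrelationMap<V> {

        private final Map<String, BlockingQueue<V>> slots = new ConcurrentHashMap<>();

        private BlockingQueue<V> slot(String correlationId) {
            return slots.computeIfAbsent(correlationId, id -> new ArrayBlockingQueue<>(1));
        }

        // Called on the JMS consumer thread when a reply arrives.
        public void put(String correlationId, V reply) {
            slot(correlationId).offer(reply);
        }

        // Called on the requesting (WS) thread; blocks until the reply arrives or the timeout expires.
        public V get(String correlationId, long timeout, TimeUnit unit) throws InterruptedException {
            try {
                return slot(correlationId).poll(timeout, unit);
            } finally {
                slots.remove(correlationId); // clean up whether we got a reply or timed out
            }
        }
    }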

If your solution is clustered, you need to take care that response messages are dispatched to the right node in the cluster. I don't know ActiveMQ, but I remember JBoss Messaging having some glitches under the hood with its clusterable destinations.

maasg
  • Clustering shouldn't be an issue with ActiveMQ in this case, since all queues by default are clustered and "global" within the cluster. There is no need to manually forward messages to the correct node. – Petter Nordlander Aug 01 '12 at 19:52
  • @Petter I think the problem arises if the *requesting web service* is clustered, not the MQ. I guess you'd have to have a global response queue, and all the nodes use the selector on the JMSCorrelationID? – joshwa Aug 01 '12 at 22:21

I would still think about using Camel and letting it handle the threading, perhaps without Spring Remoting, just raw ProducerTemplates.

Camel has some nice documentation about the topic and works very well with ActiveMQ. http://camel.apache.org/jms#JMS-RequestreplyoverJMS
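For example, a raw ProducerTemplate request/reply over JMS looks roughly like this; the broker URL, queue name and timeout are made-up values, and Camel takes care of the reply consumer and correlation ID behind the scenes:

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.camel.CamelContext;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.component.jms.JmsComponent;
    import org.apache.camel.impl.DefaultCamelContext;

    public class CamelRequestReply {
        public static void main(String[] args) throws Exception {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

            CamelContext context = new DefaultCamelContext();
            context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(cf));
            context.start();

            ProducerTemplate template = context.createProducerTemplate();

            // InOut exchange: Camel sets JMSReplyTo/JMSCorrelationID, manages the
            // reply consumer and blocks this thread until the reply (or timeout).
            String reply = template.requestBody(
                    "jms:queue:service.requests?requestTimeout=20000",
                    "my request payload", String.class);

            System.out.println(reply);
            context.stop();
        }
    }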

For your question about spinning up a selector-based consumer and the overhead: what the ActiveMQ docs actually state is that it requires a round trip to the ActiveMQ broker, which might be on the other side of the globe or on a high-delay network. The overhead in this case is the TCP/IP round-trip time to the AMQ broker. I would consider this an option; I have used it multiple times with success.

Petter Nordlander

A colleague suggested a potential solution: one response queue/consumer per webapp thread, where we set the reply-to address to the response queue owned by that particular thread. Since these threads are typically long-lived (and are re-used for subsequent web requests), we only have to suffer the overhead at the time the thread is spawned by the pool.
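Roughly what I have in mind, with each worker thread lazily creating its own session, reply queue and consumer and keeping them in a ThreadLocal. This is only a sketch with made-up names; the temporary queue could just as well be a named per-thread queue.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TemporaryQueue;
    import javax.jms.TextMessage;

    // One reply queue and consumer per worker thread, created the first time a
    // thread sends a request and reused for the rest of the thread's life.
    public class PerThreadRequester {

        private static final class ReplyChannel {
            final Session session;
            final TemporaryQueue replyQueue;
            final MessageConsumer consumer;
            final MessageProducer producer;

            ReplyChannel(Connection connection, Queue requestQueue) throws JMSException {
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                replyQueue = session.createTemporaryQueue();
                consumer = session.createConsumer(replyQueue);
                producer = session.createProducer(requestQueue);
            }
        }

        private final Connection connection;   // shared and already started
        private final Queue requestQueue;
        private final ThreadLocal<ReplyChannel> channel = new ThreadLocal<>();

        public PerThreadRequester(Connection connection, Queue requestQueue) {
            this.connection = connection;
            this.requestQueue = requestQueue;
        }

        public Message request(String payload, long timeoutMillis) throws JMSException {
            ReplyChannel ch = channel.get();
            if (ch == null) {                      // pay the setup cost once per thread
                ch = new ReplyChannel(connection, requestQueue);
                channel.set(ch);
            }
            TextMessage msg = ch.session.createTextMessage(payload);
            msg.setJMSReplyTo(ch.replyQueue);      // the responder sends its reply here
            ch.producer.send(msg);
            return ch.consumer.receive(timeoutMillis);  // blocks only this thread
        }
    }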

That said, this whole exercise is making me rethink JMS vs HTTP... :)

joshwa
  • Hi @joshwa. I am facing the same problem, but I didn't understand how you solved it. Would you enlighten me? – Mohsen Heydari Sep 04 '13 at 16:08
  • @M.Heydari we ended up using an asynchronous callback model with shared persistent state. I still like the idea of one response queue per thread, keeping the state in a ThreadLocal, but we ended up going a different route. – joshwa Sep 04 '13 at 19:54
  • @joshwa Hi Joshwa, I'm interested in using the async callback model to make JMS work in a synchronous way. Could you please share a bit more detail on this? Thanks. – Liping Huang Oct 17 '14 at 03:19

I have always used the CorrelationID for request/response and never suffered any performance issues. I can't imagine why it would be a performance issue at all; it should be super fast for any messaging system to implement, and it's quite an important feature to implement well.

http://www.eaipatterns.com/RequestReplyJmsExample.html shows the two mainstream solutions, using a reply-to queue or a correlation ID.
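For reference, the correlation-ID variant from that page boils down to something like the sketch below (plain JMS, made-up names). Note that it still creates a selector-based consumer per request on a shared reply queue, which is exactly the overhead being debated here.

    import java.util.UUID;
    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    // Requester side: one shared, fixed reply queue, with a selector so each
    // requester only receives the reply carrying its own correlation ID.
    public class CorrelationIdRequester {

        public static Message request(Connection connection, Queue requestQueue,
                                      Queue replyQueue, String payload,
                                      long timeoutMillis) throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            try {
                String correlationId = UUID.randomUUID().toString();

                TextMessage msg = session.createTextMessage(payload);
                msg.setJMSCorrelationID(correlationId);
                msg.setJMSReplyTo(replyQueue);
                session.createProducer(requestQueue).send(msg);

                // Selector-based consumer: the broker only delivers the matching reply.
                MessageConsumer consumer = session.createConsumer(
                        replyQueue, "JMSCorrelationID = '" + correlationId + "'");
                return consumer.receive(timeoutMillis);
            } finally {
                session.close();   // also closes the producer and consumer
            }
        }
    }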

ams
  • The problem isn't using the CorrelationID (which is fundamental), it's the overhead of spinning up the temporary destination or consumer. – joshwa Aug 01 '12 at 22:22
  • Is there a reason why you can't use a single response queue with multiple consumers? – ams Aug 02 '12 at 08:41
  • 1
    @ams the only downside of a using single response queue with filters on co-relation id, is that any rogue consumer can potentially bring down the whole architecture. – demesne Feb 07 '14 at 09:55

It's an old question, but I've landed here searching for something else and actually have some insights (hopefully they will be helpful to someone).

We implemented a very similar use case with Hazelcast as the chassis for the cluster's internode communication. The essence is two data sets: one distributed map for responses, and one 'local' map of response awaiters (on each node in the cluster).

  • each request (getting its own thread from Jetty) creates an entry in the map of local awaiters; the entry contains the correlation UID and an object that will serve as a semaphore
  • the request is then dispatched to the remote service (REST/JMS), with the UID as part of the request, and the original thread starts waiting on the semaphore
  • the remote service returns the response and writes it into the responses map under the correlated UID
  • the responses map is listened to; if the UID of a newly arrived response is found in the map of local awaiters, its semaphore is notified, and the original request's thread is released, picks up the response from the responses map and returns it to the client

This is a general description; I can update the answer with a few of the optimizations we made, if there is any interest.
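A minimal sketch of the pattern, with a CompletableFuture standing in for the semaphore and Hazelcast 4.x-style package names; all identifiers here are illustrative, and error handling/cleanup is omitted:

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import com.hazelcast.map.listener.EntryAddedListener;

    public class ClusteredRequestReply {

        private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Distributed map: any node in the cluster can write a response into it.
        private final IMap<String, String> responses = hz.getMap("responses");

        // Local awaiters: only the node that owns the original request thread has an entry.
        private final Map<String, CompletableFuture<String>> awaiters = new ConcurrentHashMap<>();

        public ClusteredRequestReply() {
            // Listen for responses arriving anywhere in the cluster; if the UID matches
            // one of our local awaiters, release the waiting request thread.
            responses.addEntryListener((EntryAddedListener<String, String>) event -> {
                CompletableFuture<String> awaiter = awaiters.remove(event.getKey());
                if (awaiter != null) {
                    awaiter.complete(event.getValue());
                    responses.delete(event.getKey());   // clean up the distributed map
                }
            }, true);
        }

        // Called on the Jetty request thread.
        public String request(String uid, String payload) throws Exception {
            CompletableFuture<String> awaiter = new CompletableFuture<>();
            awaiters.put(uid, awaiter);
            dispatchToRemote(uid, payload);             // REST/JMS call carrying the UID
            return awaiter.get(30, TimeUnit.SECONDS);   // blocks until the listener fires
        }

        private void dispatchToRemote(String uid, String payload) {
            // Send the request over REST/JMS; the responder (on whatever node handles
            // the reply) eventually does responses.put(uid, responseBody).
        }
    }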

GullerYA