
I need to run a component using Apache Camel (or Spring Integration) under a WAS ND 8.0 cluster. Both frameworks start some threads on startup and stop them cleanly on shutdown, and supplying a WAS-managed thread pool is not a problem. However, those threads must run on only a single node of the cluster at a time. Moreover, the setup must be highly available, i.e. switch to another node when the active node fails.

The solution I found is the WAS Partitioning Facility, but it requires an additional Extended Deployment license. Is that the only way, or is there a way to implement this using the Network Deployment license only?

Thanks in advance.

karadeniz
  • Very good question. Running Camel in a load-balanced WAS Network Deployment setup has a few quirks in terms of transaction handling, class loading and managed threads, but is rather straightforward. Having a single Camel instance fail over in a Network Deployment setup is hard. You can use Camel route policies to keep multiple contexts alive but have certain routes started only on a single server (see the sketch after these comments). – Petter Nordlander Feb 02 '13 at 19:05
  • Really, forgot to mention: WAS 8.0 – karadeniz Feb 03 '13 at 12:04
  • Couldn't you elaborate a little about what those components do? (I am trying to figure out if a JCA resource adapter is the right thing for you) – Aviram Segal Feb 04 '13 at 07:42
  • A great few-sentence description of Camel is here: http://stackoverflow.com/a/10836773/1871980. Roughly, Spring Integration is just another implementation of the same idea. Both of them need to launch threads to, say, periodically check for new files on SFTP, etc. Adapting to WAS scheduling could solve the problem in that part, but involves some fundamental patching. Another issue is listening to JMS and routing messages to a directory (or SFTP again) - it must be done only once for each message, not on every node. – karadeniz Feb 04 '13 at 11:45
  • For JMS, only one thread on one node will handle a specific message. – Aviram Segal Feb 04 '13 at 14:08
  • With WebSphere AS many of the EJB types can be configured to run only on one cluster node. Just make sure you deploy to the cluster, and after that take a closer look at the console. The default setting is to direct the queries to local EJBs, but if you change the settings they should run only on one node. This is probably done under the container settings, or the deployed EJB application's settings. Timers behave that way naturally, as apparently do JMS handlers. I also believe session beans and WebSphere's work queue and timer features behave that way. – user918176 Feb 09 '13 at 22:18
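
To illustrate the idea from the first comment, here is a minimal sketch using the Camel 2.x Java DSL. It is a simpler variant of a route policy: the route is defined on every cluster member with autoStartup(false) and only started on the node that currently holds the active role. The endpoint URIs, the route id and the onBecomeActive/onBecomePassive hooks are illustrative assumptions; the election mechanism itself (for example the MDB trick in the answer below) is not shown.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

// Sketch only: the route is deployed on every node but started on exactly one.
public class SingletonRouteStarter {

    // Register the route on every cluster member, but do not start it.
    public void addRoutes(CamelContext context) throws Exception {
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("sftp://user@host/inbox?delay=60000")   // illustrative endpoint
                    .routeId("sftp-poller")
                    .autoStartup(false)                       // defined everywhere, started nowhere by default
                    .to("jms:inbound");
            }
        });
    }

    // Invoke from whatever signals that this JVM is now the active node.
    public void onBecomeActive(CamelContext context) throws Exception {
        if (!context.getRouteStatus("sftp-poller").isStarted()) {
            context.startRoute("sftp-poller");
        }
    }

    // Invoke when this JVM loses the active role.
    public void onBecomePassive(CamelContext context) throws Exception {
        context.stopRoute("sftp-poller");
    }
}
```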

1 Answer


I don't think there is a built-in feature that addresses this interesting requirement, but I can imagine a "trick":

  1. A Timer EJB sends a message to a queue (say, one per minute).
  2. Configure a Service Integration Bus (SIB) with high availability and no scalability, so the HA Manager ensures that only one messaging engine (ME) is alive.
  3. Create a non-reliable queue for high performance and low resource consumption.
  4. Configure the activation specification to listen to the local ME only.
  5. An MDB implements the following logic: when the message arrives, it checks whether the singleton thread is alive and, if not, starts it (see the sketch below).
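
A minimal sketch of steps 1 and 5 in EJB 3.1 terms (supported by WAS 8.0). The JNDI names, the SingletonComponent helper and the activation-spec binding are assumptions for illustration; in a real module the two beans would live in separate source files, and the MDB would be bound to the local-only activation spec through the WAS binding files.

```java
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Step 1: runs on every cluster member and posts a heartbeat once a minute.
@Stateless
public class HeartbeatTimerBean {

    @Resource(lookup = "jms/HeartbeatCF")     // assumed JNDI names
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/HeartbeatQueue")  // the non-reliable SIB queue from step 3
    private Queue heartbeatQueue;

    @Schedule(minute = "*", hour = "*", persistent = false)
    public void tick() throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(heartbeatQueue);
            producer.send(session.createTextMessage("tick"));
        } finally {
            connection.close();
        }
    }
}

// --- separate source file: SingletonGuardMdb.java ---
// Step 5: bound (via WAS binding files) to an activation spec that listens to the
// local ME only. Since the HA Manager keeps exactly one ME alive, only one node
// consumes the heartbeat and therefore runs the singleton component.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class SingletonGuardMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // SingletonComponent is a placeholder for the Camel / Spring Integration
        // context that must run on exactly one node at a time.
        if (!SingletonComponent.isRunning()) {
            SingletonComponent.start();
        }
    }
}
```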

Does it make sense?

dmarrazzo