4

I have seen the question: Communication between two separate Java desktop applications (answer: JGroups), and I'm thinking about implementing something with JavaGroups or straight RMI, but speed is of the essence. I'm not sending large amounts of data around (the content of MIDI messages, so 3 bytes each, no more than say two messages every three milliseconds), and this will all be on the same machine. Is it daft to think that RMI/JGroups on the same physical machine will be slow?

(My thought is that I can't afford more than 1ms of latency, since I've already got some, but I'm not sure how to best talk about speed in this context.)

I guess my real question is: are there any options for interapp communication in Java that go through something FASTER than TCP/IP? I mean things already implemented in Java, not JNI possibilities that I'd need to implement :)

I know, don't optimize early and all that, but also better safe than sorry.

Dan Rosenstark
  • What is daft here is even contemplating anything other than `Socket.getOutputStream().write(byte[])` for data consisting of three bytes. You don't have objects, and you don't have any apparent RPC semantics, so there is no reason to even consider anything else. Possibly you should be considering UDP multicast. – user207421 Jul 17 '18 at 02:28
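By way of illustration, a minimal sketch of what that comment suggests: writing raw 3-byte MIDI messages straight to a local TCP socket, with Nagle's algorithm disabled so tiny writes are not buffered. The class name and port are made up for the example.

```java
import java.io.OutputStream;
import java.net.Socket;

// Minimal sketch: send a 3-byte MIDI message to a receiver on the same machine.
// The port (9300) and the one-socket-per-sender design are assumptions,
// not anything from the original post.
public class MidiSender implements AutoCloseable {
    private final Socket socket;
    private final OutputStream out;

    public MidiSender(int port) throws Exception {
        socket = new Socket("127.0.0.1", port);
        socket.setTcpNoDelay(true);   // disable Nagle so tiny writes go out immediately
        out = socket.getOutputStream();
    }

    public void send(byte status, byte data1, byte data2) throws Exception {
        out.write(new byte[] { status, data1, data2 });
        out.flush();
    }

    @Override
    public void close() throws Exception {
        socket.close();
    }
}
```

The receiving side would simply read three bytes at a time from the accepted socket's `InputStream`.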

4 Answers

6

are there any options for interapp communication in Java that go through something FASTER than TCP/IP?

Not significantly ... AFAIK.

But I think you are thinking about this the wrong way. Assuming that you are moving small messages, the main performance killer will be the overheads of making a call rather than the speed at which bytes are moved. These overheads include such things as the time taken to make system calls, to switch process contexts on the client and server side, to process message packet headers within the kernel, and to route packets. And any synchronous RPC-like interaction entails making a call and waiting for the reply; i.e. the app -> server -> app round-trip time.

The way to get greater throughput is to focus on the following:

  • reducing the number of RPCs that the application requires; e.g. by combining them to be more coarse-grained, and

  • looking at ways to turn synchronous interactions into asynchronous interactions; e.g. using message-based rather than RPC-based technologies (a sketch follows below).
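
By way of illustration of that second point, a rough sketch of decoupling the caller from the wire: callers drop messages onto a queue and return immediately, while a single background thread drains the queue onto the stream. The class name and queue size are assumptions for the example, not anything prescribed by the answer.

```java
import java.io.OutputStream;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of turning a synchronous send into an asynchronous one:
// callers enqueue and return immediately; a single writer thread
// drains the queue onto the output stream.
public class AsyncMessageSender {
    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);

    public AsyncMessageSender(OutputStream out) {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    byte[] msg = queue.take();   // blocks until a message is available
                    out.write(msg);
                    out.flush();
                }
            } catch (Exception e) {
                // real code would handle shutdown and I/O errors properly
            }
        }, "message-writer");
        writer.setDaemon(true);
        writer.start();
    }

    // Returns immediately; the caller never waits for a network round trip.
    public void send(byte[] message) throws InterruptedException {
        queue.put(message);
    }
}
```

The caller's latency is then just the cost of an in-memory enqueue; the actual I/O happens on the writer thread.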

Stephen C
  • Thanks Stephen, I think this is right on and a great answer. I wouldn't actually be talking bi-directionally: one program would use the other program to enter into an application context (the VST host, sorry about the made-up terminology) from which it itself was not run. Also, while in my particular case only a tiny bit of aggregation is possible (messages are needed as they come in), I think you're right about aggregation and about making synchronous interactions asynchronous. – Dan Rosenstark Jan 15 '10 at 13:08
2

If speed is of the essence, you should make the call in the same thread. You won't get anywhere near that speed over a network.

However, assuming speed is not quite that important, you can perform Java RMI calls in about 500 micro-seconds, and using custom-coded RPC you can make calls over loopback in about 24 micro-seconds. Even passing data between threads in the same JVM can take 8 micro-seconds.

You need to decide how much time you are willing to allow for a network call. You also need to decide whether it is the time to start the call that is critical, or the time to get a result back. (Often the latter has double the overhead.)

Note: I am talking micro-second here, not milli-seconds. I would ignore any options which take multiple milliseconds for your purposes.
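
If you want numbers for your own machine, a rough loopback round-trip benchmark can be put together with plain sockets along the following lines. The port, message contents, and iteration count are arbitrary, and a serious benchmark would also need warm-up runs and percentile reporting rather than a single average.

```java
import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Rough sketch of a loopback round-trip latency test: an echo server thread
// and a client that times N request/response cycles of a 3-byte message.
public class LoopbackLatencyTest {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9301);

        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                s.setTcpNoDelay(true);
                DataInputStream in = new DataInputStream(s.getInputStream());
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[3];
                while (true) {                // echo each 3-byte message back
                    in.readFully(buf);
                    out.write(buf);
                    out.flush();
                }
            } catch (Exception ignored) {
                // client closed the connection
            }
        });
        echo.start();

        try (Socket client = new Socket("127.0.0.1", 9301)) {
            client.setTcpNoDelay(true);
            DataInputStream in = new DataInputStream(client.getInputStream());
            OutputStream out = client.getOutputStream();
            byte[] msg = { (byte) 0x90, 60, 100 };   // e.g. a MIDI note-on
            byte[] reply = new byte[3];

            int iterations = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                out.write(msg);
                out.flush();
                in.readFully(reply);
            }
            long avgNanos = (System.nanoTime() - start) / iterations;
            System.out.println("average round trip: " + (avgNanos / 1000.0) + " micro-seconds");
        }
        server.close();
    }
}
```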

Peter Lawrey
  • Wow, not sure how I missed this answer back then. So RMI is basically blazing fast. – Dan Rosenstark Dec 10 '11 at 17:40
  • @Yar It's all relative. 0.5 ms may be more than fast enough, in which case RMI is likely to be the best option. With tuning of the system and the JVM you can get the round-trip latency down to 6 micro-seconds over a loopback socket, and with shared memory you can get it down to 200 nano-seconds. – Peter Lawrey Dec 10 '11 at 20:07
  • True that it's all relative, but adding that kind of latency just ONCE to a system in which a user is interacting with a music program, say, is totally fine, assuming it's just knobs and buttons. Even for drums, as long as the latency is relatively consistent, I don't think 25 microseconds is noticeable. – Dan Rosenstark Dec 11 '11 at 20:12
  • To a human, anything shorter than 1/50th of a second (20 ms) isn't noticeable. TV screens in some countries update at 50 Hz and some cinemas update at 42 Hz, and it's not noticeable. – Peter Lawrey Dec 11 '11 at 21:50
  • @Peter Lawrey - visual contiguity is very different, however, from audio imaging. According to http://www.silcom.com/~aludwig/EARS.htm, something slower than 200 Hz can cause apparent echoing. No idea what this application will ultimately produce, but audio accuracy is extremely important. – Jé Queue Feb 16 '12 at 00:03
1

This benchmark is about two years old, but it shows that the only popular Java remoting solution faster than RMI is Hessian 2 (which is still in beta, I believe).

However, if your messages are only a few bytes each, using any remoting solution seems like overkill, especially if the processes are on the same machine. I'd recommend consolidating them into a single process if possible. You could also consider just using plain old Java sockets.
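
If TCP's connection setup and ordering guarantees aren't needed for this fire-and-forget style of message, UDP datagrams over the loopback interface are another plain-JDK option. A minimal, hypothetical sketch (the port is made up):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch: fire-and-forget 3-byte messages over UDP on the loopback interface.
// There is no connection setup or acknowledgement; datagrams can in principle
// be dropped or reordered, though that is rare on loopback.
public class UdpMidiSender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress local = InetAddress.getLoopbackAddress();
            byte[] msg = { (byte) 0x90, 60, 100 };   // e.g. a MIDI note-on
            DatagramPacket packet = new DatagramPacket(msg, msg.length, local, 9302);
            socket.send(packet);
        }
    }
}
```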

Jason Gritman
  • Definitely right on, plain old Java sockets might do it. Still, I'm worried about the stack underneath (TCP/IP?)... how much latency is added by having one app become two apps talking through, say, plain old Java sockets? – Dan Rosenstark Jan 15 '10 at 02:24
  • Sorry, didn't mention: I would combine them into a single process if I could. But I can't, since one will be a standalone app and the other a VST plugin. – Dan Rosenstark Jan 15 '10 at 02:25
  • 1
    I really couldn't say how much latency using sockets will add. It's not going to be much, but I can't say exactly how much to expect when we're talking such small units of time. Using some sort of shared memory model to communicate should be faster, but I'm not sure how to do that outside of JNI. That's not to say that there's something out there that would work though. However, it seems like it's a design flaw if your application is that susceptible to latency. You may want to consider using some of the buffering capabilities of the java.nio classes to help mitigate that. – Jason Gritman Jan 15 '10 at 03:09
  • Excellent comment, thanks for the dialogue. I'm also guessing that it would be very fast. Not sure how it works now, but back in the day the EJB container talked to the servlet container with RMI, so the network latency on the same box should not be much. – Dan Rosenstark Jan 15 '10 at 13:00
0

Is it daft to think that RMI/JGroups on the same physical machine will be slow?

If your machine is decent, probably yes :) If you're running on a machine with tons of processes eating CPU, then things might be different. As always, the best way to find out whether you would experience the same thing as me is to test it.

The following are the times in milliseconds, measured with System.nanoTime() in the same JVM, taken to send the string "123" over RMI, have the server concatenate it with "abc" to get "123abc", and return it.

Cold JVM: Approximately 0.25 millisecond latency

0.250344
0.262695
0.241540
0.282461
0.301057
0.307938
0.282102

Warm JVM: Approx 0.07 millisecond latency.

0.087916
0.072474
0.073399
0.064692
0.062488
0.059958
0.059814
0.066389

So you would be well within 1 millisecond if the RMI server and client are running locally.
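
For reference, a timing loop of the kind described above might be wired up as follows. The remote interface, names, and registry setup are assumptions for illustration, not the original test code, and the measured times will of course vary by machine.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Sketch of an RMI round-trip timing loop along the lines described above.
public class RmiLatencyTest {

    // Hypothetical remote interface: the server appends "abc" to its argument.
    public interface Concat extends Remote {
        String concat(String s) throws RemoteException;
    }

    static class ConcatImpl implements Concat {
        public String concat(String s) { return s + "abc"; }
    }

    public static void main(String[] args) throws Exception {
        // Export the implementation and register it in a local RMI registry.
        ConcatImpl impl = new ConcatImpl();
        Concat stub = (Concat) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("concat", stub);

        Concat remote = (Concat) registry.lookup("concat");
        for (int i = 0; i < 10; i++) {
            long start = System.nanoTime();
            String result = remote.concat("123");           // expect "123abc"
            double millis = (System.nanoTime() - start) / 1_000_000.0;
            System.out.println(result + " took " + millis + " ms");
        }
        UnicastRemoteObject.unexportObject(impl, true);
    }
}
```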

Harry