I need to update multiple processes with several different pieces of data, at varying rates, but as fast as 10 Hz. I don't want the receiving processes to have to actively fetch this data; I'd rather have it pushed to them, so that they only have to act on new data when there actually is some (no polling).
I'm probably only sending a few bytes of data to each process. The data being transmitted will likely not need to be stored permanently, at least not before being received and processed by the recipients. Also, no data is updated less frequently than once every few seconds, so receiver crashes are not a concern (once a crashed receiver recovers, it can just wait for the next update).
I've looked at unix domain sockets and UDP and a little bit at pipes and shared memory, but it seems that they don't quite fit what I'm trying to do:
- Domain sockets require the sender to send a separate message to each recipient (i.e., no broadcasting/multicasting)
- Shared memory has the disadvantage that the clients have to check whether the data has been updated (unless there's a mechanism I'm not familiar with that can notify them)
- UDP doesn't guarantee that the messages will arrive (though that's probably not much of a problem for communication on the same computer?), and I have some concern about the overhead from the network stack (which domain sockets don't have) — a rough sketch of the UDP multicast fan-out I have in mind is below
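To make the broadcast/multicast point concrete, here's a rough sketch of the kind of fan-out I have in mind with UDP multicast confined to the local host. The group address 239.0.0.1, port 5000, and the single-float payload are just placeholders I made up, and error handling is omitted:

```c
/* Sketch: the sender pushes each small update once to a multicast group;
 * every interested process joins the group and blocks in recvfrom()
 * until an update arrives (no polling). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define GROUP "239.0.0.1"   /* placeholder multicast group */
#define PORT  5000          /* placeholder port */

static void sender(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* TTL 0: datagrams never leave this host; local group members
     * still get a looped-back copy (loopback is on by default). */
    unsigned char ttl = 0;
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);

    struct sockaddr_in group = {0};
    group.sin_family = AF_INET;
    group.sin_port = htons(PORT);
    inet_pton(AF_INET, GROUP, &group.sin_addr);

    for (float value = 0.0f;; value += 1.0f) {   /* a few bytes, 10 Hz */
        sendto(fd, &value, sizeof value, 0,
               (struct sockaddr *)&group, sizeof group);
        usleep(100000);
    }
}

static void receiver(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    int yes = 1;   /* let several receivers bind the same port */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    struct ip_mreq mreq = {0};                   /* join the group */
    inet_pton(AF_INET, GROUP, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    for (;;) {
        float value;
        recvfrom(fd, &value, sizeof value, 0, NULL, NULL);  /* blocks until data */
        printf("got %f\n", value);
    }
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "send") == 0)
        sender();
    else
        receiver();
    return 0;
}
```

With something like this, each receiver just sleeps in recvfrom() until the next update, which is the push behavior I'm after, but I'm unsure whether the network-stack overhead and the lack of delivery guarantees make it a poor fit.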
My concern with TCP (and other protocols designed for communication between devices) is that they include functionality that isn't needed for interprocess communication on a single device, which could add unnecessary overhead.
Any suggestions, and pointers to references and resources, are appreciated.