
We are still in the design phase of our project, but we are thinking of having three separate processes on an embedded Linux kernel. One of the processes will be a communications module which handles all communications to and from the device through various mediums.

The other two processes will need to be able to send/receive messages through the communication process. I am trying to evaluate the IPC techniques that Linux provides; the messages the other processes will be sending will vary in size, from debug logs to streaming media at a ~5 Mbit/s rate. Also, the media could be streaming in and out simultaneously.

Which IPC technique would you suggest for this application? http://en.wikipedia.org/wiki/Inter-process_communication

Processor is running around 400-500 MHz if that changes anything. Does not need to be cross-platform, Linux only is fine. Implementation in C or C++ is required.

RishiD
  • The Linux kernel provides the following IPC mechanisms: Signals, Anonymous Pipes, Named Pipes or FIFOs, SysV Message Queues, POSIX Message Queues, SysV Shared Memory, POSIX Shared Memory, SysV Semaphores, POSIX Semaphores, FUTEX locks, File-backed and anonymous shared memory using mmap, UNIX Domain Sockets, Netlink Sockets, Network Sockets, Inotify mechanisms, FUSE subsystem, D-Bus subsystem. For most of my needs I use sockets. – enthusiasticgeek Nov 20 '12 at 18:57
  • @enthusiasticgeek D-Bus is done entirely in userspace. Some kernel guys are working on [kdbus](https://github.com/gregkh/kdbus) but it is still a work in progress. – new123456 May 06 '14 at 17:41
  • On an arm926ejs 200 MHz processor, a method call and reply with two uint32 arguments takes anywhere between 0 and 15 ms, 6 ms on average. What do other people see on other processors? – minghua Jul 31 '14 at 05:04
  • Possible duplicate of [Comparing Unix/Linux IPC](http://stackoverflow.com/questions/404604/comparing-unix-linux-ipc) This one may be too broad, and tends to degenerate to that one. – Ciro Santilli OurBigBook.com May 03 '16 at 14:36
  • For a review of "classic" Linux IPC mechanisms: see [here](http://tldp.org/LDP/lpg/node7.html) – Reblochon Masque Sep 20 '18 at 01:49

6 Answers


When selecting your IPC you should consider causes for performance differences including transfer buffer sizes, data transfer mechanisms, memory allocation schemes, locking mechanism implementations, and even code complexity.

Of the available IPC mechanisms, the choice for performance often comes down to Unix domain sockets or named pipes (FIFOs). I read a paper on Performance Analysis of Various Mechanisms for Inter-process Communication that indicates Unix domain sockets for IPC may provide the best performance. I have seen conflicting results elsewhere which indicate pipes may be better.

When sending small amounts of data, I prefer named pipes (FIFOs) for their simplicity. This requires a pair of named pipes for bi-directional communication. Unix domain sockets take a bit more overhead to set up (socket creation, initialization and connection), but are more flexible and may offer better performance (higher throughput).
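For illustration, a minimal sketch of the FIFO-pair approach (not from the original answer; the paths `/tmp/to_comm` and `/tmp/from_comm` are arbitrary examples and error handling is omitted for brevity):

```c
/* Minimal FIFO sketch: one named pipe per direction.
 * Paths are arbitrary examples; error handling is trimmed. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Create both FIFOs; EEXIST is harmless if the peer already made them. */
    mkfifo("/tmp/to_comm", 0666);
    mkfifo("/tmp/from_comm", 0666);

    /* This process writes requests on one FIFO and reads replies on the other.
     * open() on a FIFO blocks until the other end is opened by the peer. */
    int tx = open("/tmp/to_comm", O_WRONLY);
    int rx = open("/tmp/from_comm", O_RDONLY);

    const char msg[] = "hello from client";
    write(tx, msg, sizeof msg);

    char buf[256];
    ssize_t n = read(rx, buf, sizeof buf);
    if (n > 0)
        printf("got %zd bytes: %.*s\n", n, (int)n, buf);

    close(tx);
    close(rx);
    return 0;
}
```

The peer would open the same two FIFOs with the roles reversed, and in the complementary order (read end of `/tmp/to_comm` first), otherwise both processes can block inside open().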

You may need to run some benchmarks for your specific application/environment to determine what will work best for you. From the description provided, it sounds like Unix domain sockets may be the best fit.


Beej's Guide to Unix IPC is good for getting started with Linux/Unix IPC.

jschmier
  • See also: [Comparing Unix/Linux IPC](https://stackoverflow.com/questions/404604/comparing-unix-linux-ipc/404622#404622) and [6 Linux Interprocess Communications](https://tldp.org/LDP/lpg/node7.html). – Gabriel Staples Jul 27 '21 at 20:21
  • *"I read a [paper](https://osnet.cs.binghamton.edu/publications/TR-20070820.pdf) on Performance Analysis of Various Mechanisms for Inter-process Communication that indicates Unix domain sockets for IPC may provide the best performance"* - **very bad article**, in the absence of such an important IPC technique as shared memory (shmem), it literally concludes that "unix domain sockets are better than other IPC techniques" – NK-cell Jul 06 '23 at 15:45

I would go for Unix domain sockets: less overhead than IP sockets (since there is no inter-machine communication to support) but the same convenience otherwise.
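A minimal sketch of what the server side might look like, assuming a hypothetical socket path `/tmp/comm.sock` (error handling omitted); a client would create an `AF_UNIX` socket the same way and `connect()` to the same path:

```c
/* Minimal Unix domain socket server sketch. The path /tmp/comm.sock is an
 * arbitrary example; error handling is reduced to keep it short. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/comm.sock", sizeof addr.sun_path - 1);

    unlink("/tmp/comm.sock");                    /* remove a stale socket file */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 5);

    int client = accept(listener, NULL, NULL);   /* blocks until a peer connects */

    char buf[4096];
    ssize_t n;
    while ((n = read(client, buf, sizeof buf)) > 0)
        write(client, buf, n);                   /* echo back, just for demonstration */

    close(client);
    close(listener);
    return 0;
}
```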

jldupont
  • I am trying this out but I am getting mixed results. If a couple of processes are trying to talk to my ipc server (unix socket server) - would I need multiplexing? – User9102d82 Jul 13 '19 at 18:27
  • @User9102d82: Yes, use the `select()` (or `poll()`) functions to achieve that. These can watch multiple file descriptors (e.g. your client sockets), blocking until one has data available for reading. See https://notes.shichao.io/unp/ch6/ for a good overview. – sigma Nov 12 '19 at 17:44
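Following up on the comment above, a rough `poll()`-based sketch for serving several clients on one listening Unix domain socket (a hypothetical helper, not from the answer; error handling omitted):

```c
/* Sketch: multiplex several clients on one listening socket with poll().
 * `listener` is assumed to be an already bound, listening socket. */
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 16

void serve(int listener)
{
    struct pollfd fds[MAX_CLIENTS + 1];
    int nfds = 1;
    fds[0].fd = listener;
    fds[0].events = POLLIN;

    for (;;) {
        poll(fds, nfds, -1);                         /* block until any fd is ready */

        if ((fds[0].revents & POLLIN) && nfds < MAX_CLIENTS + 1) {
            fds[nfds].fd = accept(listener, NULL, NULL);   /* new client */
            fds[nfds].events = POLLIN;
            fds[nfds].revents = 0;                   /* not polled yet */
            nfds++;
        }

        for (int i = 1; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[4096];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {                        /* peer closed or error */
                    close(fds[i].fd);
                    fds[i--] = fds[--nfds];          /* compact, recheck moved slot */
                } else {
                    /* handle n bytes from buf here */
                }
            }
        }
    }
}
```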

Can't believe nobody has mentioned dbus.

http://www.freedesktop.org/wiki/Software/dbus

http://en.wikipedia.org/wiki/D-Bus

Might be a bit over the top if your application is architecturally simple, in which case - in a controlled embedded environment where performance is crucial - you can't beat shared memory.

Dipstick
  • Dbus has performance issues in an embedded environment. It creates a lot of context switching because you create a message via dbus, send it to the kernel, then send it back out to dbus. There is a patch that reduces these context switches using a new socket type, called AF_BUS, but Red Hat has not applied the patch for some reason. – jeremiah Oct 25 '12 at 10:18
  • This design of many context switches points to dbus' original goal of being a service discovery bus and not an IPC mechanism. – jeremiah Oct 25 '12 at 11:32
  • @jeremiah: any specifics about the _performance issues in an embedded environment_? I did some profiling and online research, and do not see a serious issue. See [here](http://stackoverflow.com/questions/25085727/what-dbus-performance-issue-could-prevent-it-from-embedded-system) – minghua Aug 14 '14 at 17:55
  • It depends on what type of performance you're looking for, of course. Things like bluez that use dbus apparently push lots of objects down the pipe when indexing audio, for example. This can generate a lot of traffic and you'll likely see a performance hit. As an IPC mechanism it gets slow, at least in comparison to other POSIX IPC mechanisms that are purpose-built for embedded. kdbus aims to address some of these performance issues, but it's still a new project. – jeremiah Aug 19 '14 at 11:06
  • @minghua D-bus is less than 50 stars. It's still a new project indeed. – John Jan 24 '22 at 03:23
  • @John what do you mean by "D-bus is less than 50 stars. It's still a new project indeed". What stars? How would that relate? – minghua Mar 12 '22 at 17:55

If performance really becomes a problem you can use shared memory, but it's a lot more complicated than the other methods: you'll need a signalling mechanism to indicate that data is ready (a semaphore, etc.) as well as locks to prevent concurrent access to structures while they're being modified.

The upside is that you can transfer a lot of data without having to copy it in memory, which will definitely improve performance in some cases.

Perhaps there are usable libraries which provide higher level primitives via shared memory.

Shared memory is generally obtained by mmap()ing the same file using MAP_SHARED (which can be on a tmpfs if you don't want it persisted); a lot of apps also use System V shared memory (IMHO for stupid historical reasons; it's a much less nice interface to the same thing).
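As an illustration only (not part of the answer), a rough producer-side sketch using POSIX `shm_open()`, which is backed by tmpfs under `/dev/shm`, plus a process-shared semaphore as the "data is ready" signal; the segment name, layout and sizes are arbitrary, and on glibc this links with `-lrt -pthread`:

```c
/* Sketch: writer side of a shared-memory channel with a semaphore for
 * signalling. Names, sizes and layout are arbitrary examples. */
#include <fcntl.h>
#include <semaphore.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct channel {
    sem_t  data_ready;         /* posted by the writer, waited on by the reader */
    size_t len;
    char   data[64 * 1024];
};

int main(void)
{
    /* Create (or open) the shared segment and size it to hold the channel. */
    int fd = shm_open("/ipc_demo", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(struct channel));

    struct channel *ch = mmap(NULL, sizeof *ch, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);

    /* Second argument 1 makes the semaphore shareable between processes;
     * only the creating process should call sem_init(). */
    sem_init(&ch->data_ready, 1, 0);

    const char msg[] = "frame or log data goes here";
    memcpy(ch->data, msg, sizeof msg);
    ch->len = sizeof msg;

    sem_post(&ch->data_ready);   /* tell the reader that data is available */

    munmap(ch, sizeof *ch);
    close(fd);
    return 0;
}
```

The reading process would `shm_open()`/`mmap()` the same name and call `sem_wait(&ch->data_ready)` before touching `ch->data`.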

MarkR

As of this writing (November 2014), Kdbus and Binder have left the staging branch of the Linux kernel. There is no guarantee at this point that either will make it in, but the outlook is somewhat positive for both. Binder is a lightweight IPC mechanism in Android; Kdbus is a dbus-like IPC mechanism in the kernel which reduces context switches, thus greatly speeding up messaging.

There is also "Transparent Inter-Process Communication", or TIPC, which is robust and useful for clustering and multi-node setups; http://tipc.sourceforge.net/

jeremiah

Unix domain sockets will address most of your IPC requirements. You don't really need a dedicated communication process in this case, since the kernel already provides this IPC facility. Also, look at POSIX message queues, which in my opinion are one of the most under-utilized IPC mechanisms in Linux but come in very handy in many cases where n:1 communication is needed.
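A minimal POSIX message queue sketch for the n:1 case (receiver side; the queue name `/comm_queue` and the sizes are arbitrary examples, error handling omitted). Writers simply `mq_open()` the same name with `O_WRONLY` and `mq_send()` into it:

```c
/* Minimal POSIX message queue sketch (receiver side). The queue name and
 * sizes are arbitrary examples. Link with -lrt on older glibc. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = {
        .mq_maxmsg  = 10,      /* queue depth */
        .mq_msgsize = 1024,    /* largest message we accept */
    };

    /* Several writers can mq_open() the same name and send into it,
     * which is what makes this convenient for n:1 communication. */
    mqd_t q = mq_open("/comm_queue", O_CREAT | O_RDONLY, 0666, &attr);

    char buf[1024];            /* must be at least mq_msgsize bytes */
    unsigned prio;
    ssize_t n = mq_receive(q, buf, sizeof buf, &prio);
    if (n >= 0)
        printf("received %zd bytes at priority %u\n", n, prio);

    mq_close(q);
    return 0;
}
```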

c0der