
I am not being flip; I really don't get it. I just read a whole bunch of material on them and I can't figure out the use case. I am not talking so much about the API, for which the advantages over things like signal() are clear enough. Rather, it seems RT signals are meant to be user-space generated, but to what end? The only use seems to be as a primitive IPC mechanism, but everything points to them being a lousy form of IPC (e.g. awkward, limited information, not particularly efficient, etc.).

So where and how are they used?

Sam Protsenko
ValenceElectron

4 Answers


First of all, note that Ben's answer is correct. As far as I can tell, the whole purpose of realtime signals in POSIX is as a realtime delivery mechanism for AIO, message queue notifications, timer expirations, and application-defined signals (both internal and inter-process).

With that said, signals in general are a really bad way to do things:

  • Signal handlers are asynchronous, and unless you ensure they do not interrupt an async-signal-unsafe function, they can only use async-signal-safe functions, which severely limits what they can do.
  • Signal handlers are global state. A library cannot use signals without a contract with the calling program regarding which signals it's allowed to use, whether it's allowed to make them syscall-interrupting, etc. And in general, global state is just A Bad Thing.
  • If you use sigwait (or the Linux signalfd extension) rather than signal handlers to process signals, they're no better than other IPC/notification mechanisms, and potentially still worse.

Asynchronous IO is much better accomplished by ignoring the ill-designed POSIX AIO API and just creating a thread to perform normal blocking IO and call pthread_cond_signal or sem_post when the operation finishes. Or, if you can afford a little bit of performance cost, you can even forward the just-read data back to yourself over a pipe or socketpair, and have the main thread process asynchronously-read regular files with select or poll just like you would sockets/pipes/ttys.

R.. GitHub STOP HELPING ICE
  • Although I don't like the POSIX aio nearly as much as Win32 overlapped I/O, it's still far preferable to spinning up a thread for each operation. Talk about throwing the baby out with the bathwater. – Ben Voigt Jun 14 '11 at 15:56
  • Not for each operation. For many uses, one thread per file is enough. If you need multiple readers/writers on the same file at once, one thread per "user" of the file (rather than per access) should still suffice. glibc's AIO implementation implements it something like this anyway; it just puts the hideous POSIX AIO API on top of it, rather than giving you the freedom to make a good API. – R.. GitHub STOP HELPING ICE Jun 14 '11 at 16:00
  • The signal stuff is the lowest level. There are higher-level abstractions that help you deal with it. Much better to use select() correctly and process the I/O as required. – Martin York Jun 14 '11 at 16:39
  • `select` is useless on ordinary files. It always indicates that they're ready for read and write. This is not a design flaw; even if select only showed them readable when data could be read immediately from cache (or writable when free cache memory was available), there would be a race condition: after `select` returned, the cache availability status could change and `read`/`write` could sleep in kernelspace. One alternative solution, of course, would be to `mmap` and `mlock` (and `fallocate`, if writing) the part of the file you want to access, then use ordinary IO. – R.. GitHub STOP HELPING ICE Jun 14 '11 at 16:44
  • @R: But `mmap` and `mlock` are blocking, and I don't think there's any asynchronous version of these syscalls. – Ben Voigt Jun 14 '11 at 17:45
  • @R.. If I were reading normal files I would not use select(), but neither would I worry about signal handlers; I would just use the files through the normal API. – Martin York Jun 14 '11 at 17:47
  • Reading them normally could cause long freezes in your program if the file is on slow media (e.g. an optical disc with scratches) or NFS. For interactive apps or servers that handle multiple clients from a single thread, this may be unacceptable. The problem with `mmap` and `mlock` sleeping in kernelspace can be solved using a dedicated thread (or even a separate process) to do the locking (if a shared mapping is locked from one process, it should always be swapped-in for other processes that map it too). – R.. GitHub STOP HELPING ICE Jun 14 '11 at 19:36

Asynchronous I/O.

Realtime signals are the mechanism by which the kernel informs your program that an asynchronous I/O operation has completed.

`struct aiocb` makes the connection between an async I/O request and a signal number.

Ben Voigt

It's an old question, but still.

POSIX threads on Linux in glibc (NPTL) are implemented using two realtime signals. They are hidden from the user (by adjusting the SIGRTMIN/SIGRTMAX constants). Any event where a library call must be propagated to all threads (such as setuid()) is handled via these signals: the calling thread sends the signal to all other threads, waits for acknowledgement, and continues.

Lapshin Dmitry

There are other reasons to use the real time signals. I have an app that interacts with a variety of external devices, and does so through a combination of means (serial port IO, even direct addressing of some cards older than most people you know). This is, by definition, a "real time" app -- it interacts with the real world, in real world time, not in "computer time".

Much of what it does is in a daemon process that's in a main loop: handling an event, reading info, writing out results to serial ports, storing things in the database, and so on, and then looping around for another event. Other processes on the machine (user processes) read the info from the DB, display it, and so on. The user in these other processes can send various signals to the daemon to alert it of various conditions: stop, changed input data, and so on. For example, the user process sends a "stop" signal, the daemon's signal handler routine has about 2 lines of code, setting a flag variable. When the daemon gets a chance, and it's convenient, it stops. The "interrupt" code is very simple, quick, and non-invasive. But it serves the purpose, doesn't require complex IPC structures, and works just fine.

So, yes, there are reasons for these signals. In real time applications. If handled appropriately, they work just fine, thank you.

CLWill
  • Note that the question was not about signals in general, but about POSIX "real-time" signals. The "stop" signal you mention is probably an ordinary signal, not a real-time one. – Alex D Jun 02 '15 at 19:11
  • The stop was but one example of the signals this app uses. As I noted, others are used to signal "data ready" and other conditions, in real time. – CLWill Jun 03 '15 at 18:58
  • OK, that is understood, but I still feel this answer does not relate to the OP's question. The point is not whether the signals are used "in real time", but whether they belong to a special class of signals called "POSIX real-time signals". These differ from ordinary signals in that they are never merged, are guaranteed to be delivered in the order they were raised, and can also be delivered with an extra argument (not just the signal number). If the signals used to control your daemon are of this type, then you could explain why. That is what the question is about. – Alex D Jun 03 '15 at 19:52
  • The OP asked what the use case was. I gave a use case. QED – CLWill Jun 03 '15 at 23:18