There are other reasons to use the real time signals. I have an app that interacts with a variety of external devices, and does so through a combination of means (serial port IO, even direct addressing of some cards older than most people you know). This is, by definition, a "real time" app -- it interacts with the real world, in real world time, not in "computer time".
Much of what it does happens in a daemon process running a main loop: handle an event, read info, write results out to serial ports, store things in the database, and so on, then loop around for the next event. Other processes on the machine (user processes) read the info from the DB, display it, and so on. The user in these other processes can send various signals to the daemon to alert it to various conditions: stop, changed input data, and so on. For example, when the user process sends a "stop" signal, the daemon's signal handler routine, about two lines of code, simply sets a flag variable. When the daemon gets a chance, and it's convenient, it stops. The "interrupt" code is very simple, quick, and non-invasive. But it serves the purpose, doesn't require complex IPC structures, and works just fine.
So, yes, there are reasons for these signals. In real time applications. If handled appropriately, they work just fine, thank you.