
I have the following code:

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define SIG_INT 0

// prints out <Text> with write() when SIG_INT arrives in the siginfo_t* payload
void handler(int, siginfo_t*, void*);

int main()
{
    pid_t ambulance1 = 0;
    pid_t ambulance2 = 0;

    struct sigaction sigact;
    sigact.sa_sigaction = handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR1, &sigact, NULL);

    ambulance1 = fork();
    if(ambulance1 > 0) {
        ambulance2 = fork();
        if(ambulance2 > 0) { // parent
            int status;
            waitpid(ambulance1, &status, 0);
            waitpid(ambulance2, &status, 0);

            printf("HQ went home!\n");
        }
    }

    if(ambulance1 == 0 || ambulance2 == 0) {
        union sigval signalValueInt;
        signalValueInt.sival_int = SIG_INT;

        sigqueue(getppid(), SIGUSR1, signalValueInt);

        printf("Ambulance[%d] ended.\n", getpid());
    }

    return 0;
}

What happens is that sometimes the signal sent by the second ambulance's sigqueue(getppid(), SIGUSR1, signalValueInt); is not received, and the output looks something like this:

  1. Ambulance[20050] ended. // main() prints out this
  2. Ambulance[20051] ended. // main() prints out this
  3. // handler() prints out this with write() ONLY ONCE!
  4. HQ went home! // main() prints out this

I know that the signal is lost because the two signals arrived too quickly after one another, and the operating system treats the second one as a duplicate and ignores it.

My question is:

Is there a way to tell the operating system not to do that?

I wouldn't like to use two different signals (e.g. SIGUSR1 and SIGUSR2) for the same purpose, and I also wouldn't like to add a delay to one of the child processes.
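
For reference, handler() is only declared in the code above; a minimal hypothetical body consistent with the description (the actual <Text> message is not shown in the post, so a placeholder is kept, and write() is used because it is async-signal-safe) might look like this:

#include <signal.h>   /* siginfo_t */
#include <unistd.h>   /* write, STDOUT_FILENO */

#define SIG_INT 0     /* same value as in the code above */

// Hypothetical body for the handler declared above; <Text> is left as a placeholder.
void handler(int sig, siginfo_t *info, void *ucontext)
{
    (void)sig;
    (void)ucontext;
    if (info->si_value.sival_int == SIG_INT) {
        static const char msg[] = "<Text>\n";       /* placeholder message */
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
    }
}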


1 Answer


The answer is in the manual page of signal(7). But briefly: if a standard signal (like SIGUSR1) arrives while the handler is running, it will get ignored by the operating system; if a real-time signal (like SIGRTMIN) arrives while the handler is running, it will be processed after the handler has finished running.
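
In other words, the problem in the question goes away if the program switches from SIGUSR1 to a real-time signal, because real-time signals sent with sigqueue() are queued rather than merged. A minimal sketch of that change (same structure as the program in the question; the handler message is a placeholder of mine) might look like this:

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical handler: the message text is a placeholder; write() is used
   because it is async-signal-safe. */
static void rt_handler(int sig, siginfo_t *info, void *ucontext)
{
    (void)sig; (void)info; (void)ucontext;
    static const char msg[] = "Ambulance reported in!\n";  /* placeholder */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    struct sigaction sigact = {0};
    sigact.sa_sigaction = rt_handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = SA_SIGINFO;
    sigaction(SIGRTMIN, &sigact, NULL);        /* real-time signal instead of SIGUSR1 */

    pid_t ambulance1 = fork();
    pid_t ambulance2 = (ambulance1 > 0) ? fork() : 0;

    if (ambulance1 == 0 || ambulance2 == 0) {  /* child */
        union sigval value;
        value.sival_int = 0;
        sigqueue(getppid(), SIGRTMIN, value);  /* queued, not merged */
        printf("Ambulance[%d] ended.\n", (int)getpid());
        return 0;
    }

    int status;
    waitpid(ambulance1, &status, 0);           /* handler can run during these waits */
    waitpid(ambulance2, &status, 0);
    printf("HQ went home!\n");
    return 0;
}

Because each queued real-time signal is delivered separately (up to the system's queuing limit), the handler runs once per child even when the two signals arrive back to back.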

  • This is inaccurate — [normal behavior](https://pubs.opengroup.org/onlinepubs/9699919799/functions/sigaction.html), without specifying SA_NODEFER, is that the signal will be blocked and held pending during handler execution, _not_ that it will be discarded. – pilcrow Dec 03 '20 at 19:55
  • @pilcrow That's not a complete statement. Per [**2.4 Signal Concepts**](https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04): "If a subsequent occurrence of a pending signal is generated, it is implementation-defined as to whether the signal is delivered or accepted more than once in circumstances other than those in which queuing is required." NB the exact definition of pending "During the time between the generation of a signal and its delivery or acceptance, the signal is said to be "pending"" `SA_NODEFER` only impacts signal processing upon delivery. – Andrew Henle Dec 03 '20 at 21:29
  • Moral of the story: it's a really **bad idea** to write code that depends on exactly how many non-RT signals will be delivered. – Andrew Henle Dec 03 '20 at 21:32
  • @AndrewHenle, yes. It seemed to me this answer was talking about what happens "when a handler is running," which is the context of my comment. Queueing is not germane here: there are only two signals, and the first has been delivered. If the second is delivered "when [the] handler is running," then in the OP's code it is necessarily held pending. Now, this answer above may of course be wrong about the _timing_ as both might arrive pending before the handler is invoked, in which case the implementation defined queueing does come into play — but that's not what OP's answer above considered. – pilcrow Dec 03 '20 at 22:02
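
To illustrate the point made in these comments, here is a small self-contained demonstration (not from the original post): two occurrences of SIGUSR1 generated while the signal is blocked typically collapse into a single delivery, because standard signals are not queued, and POSIX leaves the exact count implementation-defined.

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t hits = 0;

static void count_handler(int sig) { (void)sig; hits++; }

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = count_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);

    /* Block SIGUSR1 so both occurrences are generated while it cannot be delivered. */
    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, &old);

    raise(SIGUSR1);
    raise(SIGUSR1);                        /* second occurrence while the first is pending */

    sigprocmask(SIG_SETMASK, &old, NULL);  /* unblock: the pending SIGUSR1 is delivered */

    /* On Linux this typically prints 1: the two pending occurrences were
       merged into a single delivery. */
    printf("handler ran %d time(s)\n", (int)hits);
    return 0;
}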