Summary
I am writing a Monte Carlo simulation that forks into as many processes as there are CPU cores. After a certain amount of time the parent sends SIGUSR1 to all children, which should then stop calculating and send their results back to the parent.
When I compile without optimization (clang thread_stop.c) the behavior is as expected. When I compile with optimization (clang -O1 thread_stop.c) the signal is caught, but the children do not stop.
Code
I cut the code down to the smallest example that reproduces the behavior:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h> /* pid_t */
#include <sys/mman.h> /* mmap */
#define MAX 1 /* max time to run, in seconds */

static int a = 0; /* int to be changed when the signal arrives */

void sig_handler(int signo) {
    if (signo == SIGUSR1) {
        a = 1;
        printf("signal caught\n");
    }
}
int main(void) {
    int *comm;
    pid_t pid;

    /* map shared memory so child processes can access the same array */
    comm = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    *comm = 0;

    pid = fork();
    if (pid == 0) { /* child process */
        signal(SIGUSR1, sig_handler); /* catch signal */
        do {
            /* do things */
        } while (a == 0);
        printf("Child exit(0)\n");
        *comm = 2;
        exit(0); /* exit for child process */
    } /* if (pid == 0) - code below is parent only */

    printf("Started child process, sleeping %d seconds\n", MAX);
    sleep(MAX);
    printf("Send signal to child\n");
    kill(pid, SIGUSR1); /* send SIGUSR1 */
    while (*comm != 2) usleep(10000);
    printf("Child process ended\n");

    /* clean up */
    munmap(comm, sizeof(int));
    return 0;
}
System
This behavior occurs with clang on Termux (clang 9.0.1) and on Lubuntu (clang 6.0.0-lubuntu2).