
I have two different applications that have to work together. Process 1 acts as a time source, and process 2 performs work according to the time source provided by process 1. I need to run multiple copies of process 2. The goal is to have one time source process signaling 5-10 other processes at the same time, so that they all perform their work simultaneously.

Currently, I have this implemented in the following way (a rough sketch follows the list):

  1. The time source program starts, creates a shared memory segment, creates an empty list of PIDs in it, then unlocks the segment.
  2. Each time one of the client programs starts, it locks the shared memory, adds its own PID to the list, and then unlocks it.
  3. The time source has a timer that goes off every 10ms. Every time the timer goes off, it cycles through the PID list and sends a signal to each process in it, back to back.
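For reference, here is a minimal sketch of that setup, with the time-source side and the client registration shown in one listing for brevity. The names (`/tick_registry`, `MAX_CLIENTS`, `register_self`) and the choice of `SIGUSR1` and a process-shared semaphore as the lock are my own assumptions, not necessarily the real implementation; error handling is omitted. Build with `-pthread` (and `-lrt` on older glibc).

    #include <fcntl.h>
    #include <semaphore.h>
    #include <signal.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define MAX_CLIENTS 16
    #define SHM_NAME "/tick_registry"        /* hypothetical name */

    struct registry {
        sem_t lock;                          /* protects the PID list */
        int   count;
        pid_t pids[MAX_CLIENTS];
    };

    /* time source: create the segment, then signal all registered PIDs */
    int main(void)
    {
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
        ftruncate(fd, sizeof(struct registry));
        struct registry *reg = mmap(NULL, sizeof *reg, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);
        sem_init(&reg->lock, 1, 1);          /* pshared=1: usable across processes */
        reg->count = 0;

        for (;;) {
            usleep(10000);                   /* stand-in for the real 10 ms timer */
            sem_wait(&reg->lock);
            for (int i = 0; i < reg->count; i++)
                kill(reg->pids[i], SIGUSR1); /* one signal per client, back to back */
            sem_post(&reg->lock);
        }
    }

    /* client side: map the same segment and add our own PID under the lock */
    void register_self(void)
    {
        int fd = shm_open(SHM_NAME, O_RDWR, 0666);
        struct registry *reg = mmap(NULL, sizeof(struct registry),
                                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        sem_wait(&reg->lock);
        reg->pids[reg->count++] = getpid();
        sem_post(&reg->lock);
    }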

This approach mostly works well, but I am hoping that it can be improved. I currently have two sticking points:

  1. Very rarely, the signal delivered to one of the client processes is skewed by roughly 2 milliseconds. The end result is | 12ms | 8ms | instead of | 10ms | 10ms |.
  2. The second issue is that all of the client programs are actually multithreaded and doing a lot of work (though only the original thread is responsible for handling the signal). If I have multiple client processes running at once, the delivery of the signals gets more sporadic and skewed, as if the signals are harder to deliver when the system is more heavily loaded (even when the client process is ready and waiting for the interrupt).

What other approaches should I consider for doing this type of thing? I have considered the following (all in the shared memory segment):

  • Using volatile uint8_t flags (set by the time source process, cleared by the client).
  • Using semaphores, but if the time source process is running and a client hasn't started yet, how do I keep from incrementing the semaphore over and over?
  • Condition variables, though there doesn't seem to be a way to use them in shared memory between unrelated processes (a sketch follows this list).
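On the last bullet: POSIX does in fact support condition variables in shared memory between unrelated processes, via the `PTHREAD_PROCESS_SHARED` attribute (see also the comments below). A minimal sketch, assuming the struct lives in the shared segment; the function names and the generation counter are illustrative:

    #include <pthread.h>

    struct tick_block {
        pthread_mutex_t mtx;
        pthread_cond_t  cv;
        unsigned        generation;      /* bumped once per tick */
    };

    /* time source, once, right after creating the shared segment */
    void tick_init(struct tick_block *tb)
    {
        pthread_mutexattr_t ma;
        pthread_condattr_t  ca;
        pthread_mutexattr_init(&ma);
        pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&tb->mtx, &ma);
        pthread_condattr_init(&ca);
        pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
        pthread_cond_init(&tb->cv, &ca);
        tb->generation = 0;
    }

    /* time source, every 10 ms */
    void tick_fire(struct tick_block *tb)
    {
        pthread_mutex_lock(&tb->mtx);
        tb->generation++;
        pthread_cond_broadcast(&tb->cv);   /* wakes every waiting client at once */
        pthread_mutex_unlock(&tb->mtx);
    }

    /* client: block until the next tick */
    void tick_wait(struct tick_block *tb)
    {
        pthread_mutex_lock(&tb->mtx);
        unsigned seen = tb->generation;
        while (tb->generation == seen)     /* guards against spurious wakeups */
            pthread_cond_wait(&tb->cv, &tb->mtx);
        pthread_mutex_unlock(&tb->mtx);
    }

This also sidesteps the semaphore over-increment concern: a client that starts late simply waits for the next generation bump instead of consuming a backlog of queued posts.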
user1764386
    Regarding your last option (condition variables in SHM): http://stackoverflow.com/q/2782883/694576 – alk Apr 22 '16 at 18:27
  • 2
    You could try putting all the processes into the same process group and sending the signal to the group. – Mark Plotnick Apr 22 '16 at 19:02
  • I think you're confusing the time it takes to make a process ready to run with the time to actually get a process executing once it is ready to run. You can easily use a futex to make any number of processes ready to run at the same instant. Condition variables in shared memory will work too. – David Schwartz Apr 22 '16 at 23:05
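Mark Plotnick's process-group suggestion may be the smallest change to try: if the clients run in the same process group as the time source (e.g. because it fork()s them, or because they join via setpgid()), a single kill() reaches all of them at once. A rough sketch of the time-source side, with the signal choice (SIGUSR1) assumed:

    #include <signal.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGUSR1, SIG_IGN);   /* the time source ignores its own tick */
        setpgid(0, 0);              /* become leader of a fresh process group */

        /* ... fork()/exec() the client processes here so they inherit it ... */

        for (;;) {
            usleep(10000);          /* stand-in for the existing 10 ms timer */
            kill(0, SIGUSR1);       /* pid 0: every process in our group */
        }
    }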

1 Answer


A process being in a waiting state, ready to receive a signal, does not mean that the kernel is going to schedule it immediately, especially when there are more tasks in the running state than there are available CPU cores.

Adjusting the priority (or nice level) of processes and threads will influence the kernel scheduler. You can also experiment with the different schedulers available in your kernel, and with their parameters.
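For example, on Linux both knobs are available programmatically. This sketch (the priority values are arbitrary, and both calls need elevated privileges or CAP_SYS_NICE) lowers the nice value and, alternatively, moves the process into a real-time scheduling class so it preempts normal tasks as soon as it becomes runnable:

    #include <sched.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* option 1: stay in the default scheduler but raise our priority */
        if (setpriority(PRIO_PROCESS, 0, -10) != 0)
            perror("setpriority");

        /* option 2: real-time class; SCHED_FIFO tasks preempt all normal ones */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... the client's signal-handling work runs from here ... */
        return 0;
    }

The same can be done from the shell with nice/renice and chrt, without touching the code.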

Stian Skjelstad
  • Earlier today I started adjusting the nice level of the processes and that completely eliminated the issue. I can similarly fix the problem by selectively assigning the processes to different cores (or even by reserving the cores ahead of time). I will look into different scheduling options, but your feedback matched my experience. – user1764386 Apr 23 '16 at 02:13