In my script I am starting multiple background processes and one of them is

(
  tail -qF -n 0 logs/nginx/*.log
) &
processes[logs]=$!

for process in "${!processes[@]}"; do
  wait "${processes[$process]}"
done

When I send a SIGTERM signal, all processes end except tail, which is left running. After some testing I came up with a solution: send tail to the background inside the subshell, and it works.

(
  tail -qF -n 0 logs/nginx/*.log &
) &
processes[logs]=$!

for process in "${!processes[@]}"; do
  wait "${processes[$process]}"
done

Can someone explain to me what happens when I send tail to the background inside the subshell, so that it ends when SIGTERM arrives?

Nenad
  • Does this answer your question? [How do I kill background processes / jobs when my shell script exits?](https://stackoverflow.com/questions/360201/how-do-i-kill-background-processes-jobs-when-my-shell-script-exits) – pjh May 31 '22 at 10:18
  • Normally killing a process won't kill its descendant processes. One option in your case may be to set up an EXIT trap that kills remaining subprocesses (`trap 'kill $(jobs -p)' EXIT`). – pjh May 31 '22 at 10:21
  • My processes exit on their own, I don't need to kill them. The question is why it works in the second example and not in the first one. – Nenad May 31 '22 at 10:24
  • The parentheses force `tail` to run in a subshell; you then kill only the subshell. The simplest fix by far is to remove the parentheses. – tripleee May 31 '22 at 11:20
  • @tripleee, I thought the same as you, but it appears that I was wrong. Running `( tail ... ) & echo $!` prints the PID of the `tail` process. The parentheses seem to be optimized away. – pjh May 31 '22 at 11:42
  • I can't reproduce the described behaviour. With both the first and second example code the `tail` process continues to run if the process running the code is killed with its PID (e.g. `kill 6789`). Also in both cases the `tail` process is killed if the process running the code is killed with its job number (e.g. `kill %1`). That is expected because killing with a job number kills the whole process group associated with the job. – pjh May 31 '22 at 14:17
  • If you still need help, I suggest you provide two complete (small) programs that demonstrate the behaviour. Provide details of how you are running them, how you are killing them, and how you are checking that the `tail` process is still running. – pjh May 31 '22 at 14:21
  • I was thinking that my tail was killed; by putting it in the background my script can exit, as tail is no longer blocking it. – Nenad Jun 01 '22 at 06:47

1 Answer

My tail was not killed in the example I provided; it was only sent to the background, which allowed the script to exit.

I have attached tail to my server process, so when the server dies, tail dies too. Now it behaves the way I want.

tail -qF -n 0 --pid="${process_list[server]}" logs/nginx/*.log
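A runnable sketch of this pattern, for anyone who wants to try it. Note that --pid is a GNU tail option; here a sleep command stands in for the real server, and the log path is a throwaway demo file, not the nginx logs from the question:

```shell
#!/usr/bin/env bash
# Tie tail's lifetime to the server's PID with GNU tail's --pid option.
# "sleep 2" is a stand-in for the real server; the log path is a demo file.

declare -A process_list

logdir=$(mktemp -d)
touch "$logdir/app.log"

sleep 2 &                          # stand-in for the real server process
process_list[server]=$!

# tail exits on its own once the server PID is gone; no explicit kill needed.
tail -qF -n 0 --pid="${process_list[server]}" "$logdir/app.log" &
process_list[logs]=$!

for name in "${!process_list[@]}"; do
  wait "${process_list[$name]}"
done

echo "all background processes have exited"
```

Once the stand-in server exits, tail notices the dead PID on its next poll and terminates, so the wait loop returns without any signal handling.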
Nenad