
I have a command CMD called from my main bourne shell script that takes forever.

I want to modify the script as follows:

  1. Run the command CMD in parallel as a background process (CMD &).
  2. In the main script, have a loop to monitor the spawned command every few seconds. The loop also echoes some messages to stdout indicating progress of the script.
  3. Exit the loop when the spawned command terminates.
  4. Capture and report the exit code of the spawned process.

Can someone give me pointers to accomplish this?

ib.
  • See also: [How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?](https://stackoverflow.com/q/356100/4561887) – Gabriel Staples Feb 16 '22 at 05:49

14 Answers


1: In bash, $! holds the PID of the last background process that was executed. That will tell you what process to monitor, anyway.

4: wait <n> waits until the process with PID <n> is complete (it will block until the process completes, so you might not want to call this until you are sure the process is done), and then returns the exit code of the completed process.

2, 3: ps or ps | grep " $! " can tell you whether the process is still running. It is up to you how to understand the output and decide how close it is to finishing. (ps | grep isn't idiot-proof. If you have time you can come up with a more robust way to tell whether the process is still running).

Here's a skeleton script:

# simulate a long process that will have an identifiable exit code
(sleep 15 ; /bin/false) &
my_pid=$!

while   ps | grep " $my_pid "     # might also need  | grep -v grep  here
do
    echo $my_pid is still in the ps output. Must still be running.
    sleep 3
done

echo Oh, it looks like the process is done.
wait $my_pid
# The variable $? always holds the exit code of the last command to finish.
# Here it holds the exit code of $my_pid, since wait exits with that code. 
my_status=$?
echo The exit status of the process was $my_status
mob
  • `ps -p $my_pid -o pid=` - neither `grep` is needed. – Dennis Williamson Oct 15 '09 at 06:43
  • @Dennis Williamson `ps` has many flavors. Your call doesn't work for me but `ps -p$my_pid` does. Your larger point that `grep` isn't necessary is correct. – mob Oct 15 '09 at 16:01
  • Hmmm .. actually I can't figure out a good way to avoid grep on Cygwin. `ps -p$pid` always has exit status of 0 whether $pid exists or not. I could say something like `while [ 'ps -p$pid | wc -l' \> 1 ]` but that's hardly an improvement ... – mob Oct 15 '09 at 17:06
  • `kill -0 $!` is a better way of telling whether a process is still running. It doesn't actually send any signal, only checks that the process is alive, using a shell built-in instead of external processes. As `man 2 kill` says, "If _sig_ is 0, then no signal is sent, but error checking is still performed; this can be used to check for the existence of a process ID or process group ID." – ephemient Oct 17 '09 at 00:18
  • @ephemient `kill -0` will return non-zero if you don't have permission to send signals to a process that is running. Unfortunately it returns `1` both in this case and when the process doesn't exist. This makes it useful *unless* you don't own the process - which can be the case even for processes you created if a tool like `sudo` is involved or if they're setuid (and possibly drop privs). – Craig Ringer May 27 '13 at 07:39
  • If you only need to monitor a process and not a specific PID, `pgrep` is also useful: `pgrep -x [process]` – nmax Jan 22 '15 at 21:11
  • `wait` doesn't return the exit code in the variable `$?`. It just returns the exit code, and `$?` is the exit code of the latest foreground program. – DifferentPseudonym Jun 07 '15 at 10:52
  • For the many people voting up the `kill -0` approach: here is a peer-reviewed reference from SO showing that CraigRinger's comment is legit: [`kill -0` will return non-zero for running processes... but `ps -p` will always return 0 for any running process](http://stackoverflow.com/a/15774758/52074). – Trevor Boyd Smith Jun 18 '15 at 17:47
  • @mob The current while condition does not work here (in Ubuntu linux). Can't you use `-o pid=` for ps (to disable the header)? Then `while [ -n "$(ps -o pid= -p$my_pid)" ]` would work; here it works also without `-p`. `while [ $(ps -p$pid | wc -l) \> 1 ]` works, too, but apparently you did not like it. – jarno Jan 28 '16 at 16:25
  • There are many different flavors of `ps`. You may have to experiment with options that work for your system and your configuration. – mob Jan 28 '16 at 16:48
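Putting the `kill -0` suggestion from these comments into a self-contained sketch (the `sleep` is a stand-in for the real long-running CMD; as noted above, this is only reliable for processes you own):

```shell
# Stand-in for the real long-running CMD
sleep 2 &
my_pid=$!

# kill -0 sends no signal; it only checks that the process still exists.
# Caveat from the comments: it also fails with "not permitted" for
# processes you don't own, so use it only on your own children.
while kill -0 "$my_pid" 2>/dev/null; do
    echo "$my_pid is still running"
    sleep 1
done

wait "$my_pid"      # bash remembers the reaped child's exit status
status=$?
echo "exit status: $status"
```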

This is how I solved it when I had a similar need:

# Some function that takes a long time to process
longprocess() {
        # Sleep up to 14 seconds
        sleep $((RANDOM % 15))
        # Randomly exit with 0 or 1
        exit $((RANDOM % 2))
}

pids=""
# Run five concurrent processes
for i in {1..5}; do
        ( longprocess ) &
        # store PID of process
        pids+=" $!"
done

# Wait for all processes to finish, will take max 14s
# as it waits in order of launch, not order of finishing
for p in $pids; do
        if wait $p; then
                echo "Process $p success"
        else
                echo "Process $p fail"
        fi
done
Bjorn
  • This solution doesn't satisfy requirement #2: a monitoring loop per background process. `wait`s cause the script to wait until the very end of (each) process. – Dima Korobskiy Dec 19 '17 at 19:48
  • Simple and nice approach.. have been searching for this solution quite sometime.. – Santosh Kumar Arjunan Feb 18 '18 at 10:43
  • This doesn't work .. or doesn't do what you want: it doesn't check the backgrounded processes' exit statuses? – conny Nov 23 '18 at 06:36
  • @conny, yes it does check the exit status of the backgrounded processes. The "wait" command returns the exit status of the processes. In the example shown here it is demonstrated by the "Process $p success/fail". – Bjorn Dec 05 '18 at 13:13
  • Regardless of whether or not it answers the original question, this helped me write a script that checks the parallelized background processes for completion and failures, so thank you! For anyone interested, I've saved the resulting test script as a gist here: https://gist.github.com/therightstuff/f4cc70db21d8e21d7277a414dbefa0a6 – therightstuff Dec 05 '22 at 13:03
  • @Bjorn, definitely the best answer. no other solution worked for me that could grab the exit codes of my backgrounded function calls. Recommend highlighting the use of a deliberate subshell call (line 12) around a function block call otherwise I got a bunch of `wait: pid #### is not a child of this shell` due to background process cleanup. THANK YOU! – codejedi365 Feb 21 '23 at 19:41

The pid of a backgrounded child process is stored in $!. You can store all child processes' pids into an array, e.g. PIDS[].

wait [-n] [jobspec or pid …]

Wait until the child process specified by each process ID pid or job specification jobspec exits and return the exit status of the last command waited for. If a job spec is given, all processes in the job are waited for. If no arguments are given, all currently active child processes are waited for, and the return status is zero. If the -n option is supplied, wait waits for any job to terminate and returns its exit status. If neither jobspec nor pid specifies an active child process of the shell, the return status is 127.

Using the `wait` command you can wait for all child processes to finish; meanwhile you can get the exit status of each child via `$?` and store the statuses in STATUS[]. Then you can do something depending on the status.

I have tried the following two solutions and both run well. solution01 is more concise, while solution02 is a little more involved.

solution01

#!/bin/bash

# start 3 child processes concurrently, and store each pid into array PIDS[].
process=(a.sh b.sh c.sh)
for app in "${process[@]}"; do
  ./"${app}" &
  PIDS+=($!)
done

# wait for all processes to finish, and store each process's exit code into array STATUS[].
for pid in "${PIDS[@]}"; do
  echo "pid=${pid}"
  wait "${pid}"
  STATUS+=($?)
done

# after all processes finish, check their exit codes in STATUS[].
i=0
for st in "${STATUS[@]}"; do
  if [[ ${st} -ne 0 ]]; then
    echo "$i failed"
  else
    echo "$i finished"
  fi
  ((i+=1))
done

solution02

#!/bin/bash

# start 3 child processes concurrently, and store each pid into array PIDS[].
i=0
process=(a.sh b.sh c.sh)
for app in "${process[@]}"; do
  ./${app} &
  pid=$!
  PIDS[$i]=${pid}
  ((i+=1))
done

# wait for all processes to finish, and store each process's exit code into array STATUS[].
i=0
for pid in "${PIDS[@]}"; do
  echo "pid=${pid}"
  wait ${pid}
  STATUS[$i]=$?
  ((i+=1))
done

# after all processes finish, check their exit codes in STATUS[].
i=0
for st in "${STATUS[@]}"; do
  if [[ ${st} -ne 0 ]]; then
    echo "$i failed"
  else
    echo "$i finished"
  fi
  ((i+=1))
done
Terry
  • I have tried it and proved that it runs well. You can read my explanation in the code. – Terry Sep 14 '17 at 07:44
  • Please read "[How do I write a good answer?](https://stackoverflow.com/help/how-to-answer)" where you'll find the following info: **... try to mention any limitations, assumptions or simplifications in your answer. Brevity is acceptable, but fuller explanations are better.** Your answer is therefore acceptable but you have much better chances of getting upvotes if you can elaborate on the problem and your solution. :-) – Noel Widmer Sep 14 '17 at 08:05
  • `pid=$!; PIDS[$i]=${pid}; ((i+=1))` can be written more simply as `PIDS+=($!)` which simply appends to the array without having to use a separate variable for indexing or the pid itself. The same thing applies to the `STATUS` array. – codeforester May 05 '18 at 20:37
  • @codeforester, thank you for your suggestion; I have modified my initial code into solution01, it looks more concise. – Terry May 13 '18 at 09:49
  • The same thing applies to other places where you are adding things into an array. – codeforester May 13 '18 at 15:39
  • @Wernfried Domscheit 1. If b.sh or c.sh finish before a.sh, `wait [pid of a.sh]` keeps waiting; only after a.sh finishes does the program move to the next step. 2. On my Linux box, `wait [pid of b.sh]` doesn't return "-bash: wait: pid xxx is not a child of this shell"; could you give me some detail on how to reproduce this case? Thank you. – Terry Oct 24 '18 at 03:40
  • It was my mistake, I tried `wait [random number]` instead of an existing PID – Wernfried Domscheit Oct 24 '18 at 05:31
#!/bin/bash

#pgm to monitor
tail -f /var/log/messages >> /tmp/log &
# background cmd pid
pid=$!
# loop to monitor running background cmd
while :
do
    ps ax | grep $pid | grep -v grep
    ret=$?
    if test "$ret" != "0"
    then
        echo "Monitored pid ended"
        break
    fi
    sleep 5

done

wait $pid
echo $?
Abu Aqil
  • Here's a trick to avoid the `grep -v`. You can limit the search to the beginning of the line: `grep '^'$pid` Plus, you can do `ps p $pid -o pid=`, anyway. Also, `tail -f` isn't going to end until you kill it so I don't think it's a very good way to demo this (at least without pointing that out). You might want to redirect the output of your `ps` command to `/dev/null` or it'll go to the screen at every iteration. Your `exit` causes the `wait` to be skipped - it should probably be a `break`. But aren't the `while`/`ps` and the `wait` redundant? – Dennis Williamson Oct 15 '09 at 06:40
  • Why does everybody forget about `kill -0 $pid`? It doesn't actually send any signal, only checks that the process is alive, using a shell built-in instead of external processes. – ephemient Oct 17 '09 at 00:17
  • Because you can only kill a process you own: `bash: kill: (1) - Operation not permitted` – curious_prism May 02 '13 at 03:16
  • The loop is redundant. Just wait. Less code => less edge cases. – Brais Gabin Jan 23 '16 at 00:17
  • @Brais Gabin The monitoring loop is requirement #2 of the question – Dima Korobskiy Dec 19 '17 at 19:50

As I see it, almost all answers use external utilities (mostly `ps`) to poll the state of the background process. There is a more Unix-ish solution: catching the SIGCHLD signal. In the signal handler you check which child process was stopped. This can be done with the `kill -0 <PID>` built-in (universal), by checking the existence of the /proc/<PID> directory (Linux-specific), or using the `jobs` built-in (bash-specific; `jobs -l` also reports the pid, and in that case the 3rd field of the output can be Stopped|Running|Done|Exit).

Here is my example.

The launched process is called loop.sh. It accepts -x or a number as an argument. With -x it exits with exit code 1. With a number it waits num*5 seconds, printing its PID every 5 seconds.

The launcher process is called launch.sh:

#!/bin/bash

handle_chld() {
    local tmp=()
    for((i=0;i<${#pids[@]};++i)); do
        if [ ! -d /proc/${pids[i]} ]; then
            wait ${pids[i]}
            echo "Stopped ${pids[i]}; exit code: $?"
        else tmp+=(${pids[i]})
        fi
    done
    pids=(${tmp[@]})
}

set -o monitor
trap "handle_chld" CHLD

# Start background processes
./loop.sh 3 &
pids+=($!)
./loop.sh 2 &
pids+=($!)
./loop.sh -x &
pids+=($!)

# Wait until all background processes are stopped
while [ ${#pids[@]} -gt 0 ]; do echo "WAITING FOR: ${pids[@]}"; sleep 2; done
echo STOPPED

For more explanation see: Starting a process from bash script failed

TrueY

I would change your approach slightly. Rather than checking every few seconds if the command is still alive and reporting a message, have another process that reports every few seconds that the command is still running and then kill that process when the command finishes. For example:

#!/bin/sh

cmd() { sleep 5; exit 24; }

cmd &   # Run the long running process
pid=$!  # Record the pid

# Spawn a process that continually reports that the command is still running
while echo "$(date): $pid is still running"; do sleep 1; done &
echoer=$!

# Set a trap to kill the reporter when the process finishes
trap 'kill $echoer' 0

# Wait for the process to finish
if wait $pid; then
    echo "cmd succeeded"
else
    echo "cmd FAILED!! (returned $?)"
fi
William Pursell
  • great template, thanks for sharing! I believe that instead of trap, we can also do `while kill -0 $pid 2> /dev/null; do X; done`, hope it's useful for someone else in the future who reads this message ;) – punkbit May 23 '19 at 13:16

Our team had the same need with a remote SSH-executed script which was timing out after 25 minutes of inactivity. Here is a solution with the monitoring loop checking the background process every second, but printing only every 10 minutes to suppress an inactivity timeout.

long_running.sh & 
pid=$!

# Wait on a background job completion. Query status every 10 minutes.
declare -i elapsed=0
# `ps -p ${pid}` works on macOS and CentOS. On both OSes `ps ${pid}` works as well.
while ps -p ${pid} >/dev/null; do
  sleep 1
  if ((++elapsed % 600 == 0)); then
    echo "Waiting for the completion of the main script. $((elapsed / 60))m and counting ..."
  fi
done

# Return the exit code of the terminated background process. This works in Bash 4.4 despite what Bash docs say:
# "If neither jobspec nor pid specifies an active child process of the shell, the return status is 127."
wait ${pid}
Dima Korobskiy

Another solution is to monitor processes via the proc filesystem (safer than the ps/grep combo). When you start a process, it gets a corresponding directory under /proc/$pid, so the solution could be:

#!/bin/bash
....
doSomething &
pid=$!
while [ -d /proc/$pid ]; do # while the directory exists, the process is running
    doSomethingElse
    ....
done
# once /proc/$pid is removed, the process has ended
wait $pid
exit_status=$?
....

Now you can use the $exit_status variable however you like.
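A runnable sketch of the same idea (Linux-only, since it relies on /proc; the subshell with `sleep`/`exit` is a stand-in for the real work):

```shell
# Stand-in for doSomething: exits with a recognizable code
(sleep 2; exit 7) &
pid=$!

# /proc/<pid> exists only while the process is alive (Linux-specific)
while [ -d "/proc/$pid" ]; do
    echo "still running..."
    sleep 1
done

wait "$pid"          # bash keeps the exit status even after the child is reaped
exit_status=$?
echo "exit status: $exit_status"
```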

Iskren

A simple example, similar to the solutions above. This doesn't require monitoring any process output. The next example uses tail to follow output.

$ echo '#!/bin/bash' > tmp.sh
$ echo 'sleep 30; exit 5' >> tmp.sh
$ chmod +x tmp.sh
$ ./tmp.sh &
[1] 7454
$ pid=$!
$ wait $pid
[1]+  Exit 5                  ./tmp.sh
$ echo $?
5

Use tail to follow process output and quit when the process is complete.

$ echo '#!/bin/bash' > tmp.sh
$ echo 'i=0; while let "$i < 10"; do sleep 5; echo "$i"; let i=$i+1; done; exit 5;' >> tmp.sh
$ chmod +x tmp.sh
$ ./tmp.sh
0
1
2
^C
$ ./tmp.sh > /tmp/tmp.log 2>&1 &
[1] 7673
$ pid=$!
$ tail -f --pid $pid /tmp/tmp.log
0
1
2
3
4
5
6
7
8
9
[1]+  Exit 5                  ./tmp.sh > /tmp/tmp.log 2>&1
$ wait $pid
$ echo $?
5
Darren Weber

With this method, your script doesn't have to wait for the background process; you only have to monitor a temporary file for the exit status.

FUNCmyCmd() { sleep 3; return 6; }

export retFile=$(mktemp)
FUNCexecAndWait() { FUNCmyCmd; echo $? >"$retFile"; }
FUNCexecAndWait &

Now your script can do anything else while you monitor the contents of retFile (it can also contain any other information you want, like the exit time).

P.S.: by the way, this was written with bash in mind.
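Reusing the answer's FUNCmyCmd/retFile names, the monitoring side might look like this sketch (the sleep/return values are stand-ins):

```shell
# Background function writes its exit status to a temp file
FUNCmyCmd() { sleep 2; return 6; }

retFile=$(mktemp)
FUNCexecAndWait() { FUNCmyCmd; echo $? >"$retFile"; }
FUNCexecAndWait &

# The main script keeps working; here we just poll until the file is non-empty
while [ ! -s "$retFile" ]; do
    echo "still waiting..."
    sleep 1
done

ret=$(cat "$retFile")
rm -f "$retFile"
echo "background function exited with $ret"
```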

Aquarius Power

My solution was to use an anonymous pipe to pass the status to a monitoring loop. There are no temporary files used to exchange status, so there is nothing to clean up. If you were uncertain about the number of background jobs, the break condition could be `[ -z "$(jobs -p)" ]`.

#!/bin/bash

exec 3<> <(:)

{ sleep 15 ; echo "sleep/exit $?" >&3 ; } &

while read -u 3 -t 1 -r STAT CODE || STAT="timeout" ; do
    echo "stat: ${STAT}; code: ${CODE}"
    if [ "${STAT}" = "sleep/exit" ] ; then
        break
    fi
done
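For the uncertain-number-of-jobs case mentioned above, here is a sketch using the running-jobs list as the loop condition (the two stand-in jobs and their names are hypothetical; a drain pass catches messages written just before the last job exited):

```shell
#!/bin/bash

exec 3<> <(:)      # anonymous pipe on fd 3, as in the answer above

# Two stand-in jobs of unknown relative duration
{ sleep 1; echo "job1/exit $?" >&3; } &
{ sleep 2; echo "job2/exit $?" >&3; } &

completed=0
# Poll the pipe while running background jobs remain...
while [ -n "$(jobs -pr)" ]; do
    if read -u 3 -t 1 -r STAT CODE; then
        echo "stat: ${STAT}; code: ${CODE}"
        completed=$((completed+1))
    fi
done
# ...then drain any status messages still buffered in the pipe
while read -u 3 -t 1 -r STAT CODE; do
    echo "stat: ${STAT}; code: ${CODE}"
    completed=$((completed+1))
done
echo "all ${completed} jobs reported"
```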

how about ...

# run your stuff
unset PID
for process in one two three four
do
    ( sleep $((RANDOM%20)); echo hello from process $process; exit $((RANDOM%3)); ) 2>&1 &
    PID+=($!)
done

# (optional) report on the status of that stuff as it exits
for pid in "${PID[@]}"
do
    ( wait "$pid"; echo "process $pid completed with exit status $?") &
done

# (optional) while we wait, monitor that stuff
while ps --pid "${PID[*]}" --ppid "${PID[*]}" --format pid,ppid,command,pcpu
do
    sleep 5
done | xargs -i date '+%x %X {}'

# return non-zero if any are non zero
SUCCESS=0
for pid in "${PID[@]}"
do
    # use pre-increment: ((SUCCESS++)) evaluates to 0 the first time and would
    # wrongly take the failure branch
    wait "$pid" && ((++SUCCESS)) && echo "$pid OK" || echo "$pid returned $?"
done

echo "success for $SUCCESS out of ${#PID[@]} jobs"
exit $(( ${#PID[@]} - SUCCESS ))
ksh93

This may extend beyond your question; however, if you're concerned about how long processes have been running, you may be interested in checking the status of running background processes after an interval of time. It's easy enough to check which child PIDs are still running using `pgrep -P $$`, but I came up with the following solution to check the exit status of those PIDs that have already expired:

cmd1() { sleep 5; exit 24; }
cmd2() { sleep 10; exit 0; }

pids=()
cmd1 & pids+=("$!")
cmd2 & pids+=("$!")

lasttimeout=0
for timeout in 2 7 11; do
  echo -n "interval-$timeout: "
  sleep $((timeout-lasttimeout))

  # you can only wait on a pid once
  remainingpids=()
  for pid in ${pids[*]}; do
     if ! ps -p $pid >/dev/null ; then
        wait $pid
        echo -n "pid-$pid:exited($?); "
     else
        echo -n "pid-$pid:running; "
        remainingpids+=("$pid")
     fi
  done
  pids=( ${remainingpids[*]} )

  lasttimeout=$timeout
  echo
done

which outputs:

interval-2: pid-28083:running; pid-28084:running; 
interval-7: pid-28083:exited(24); pid-28084:running; 
interval-11: pid-28084:exited(0); 

Note: You could change $pids to a string variable rather than an array to simplify things if you like.

curious_prism

If you just want to run a fixed number of commands in parallel and ensure that errors are not ignored, this very simple option works:

#!/bin/bash

set -e

python3 -c "import time; import sys; time.sleep(1); sys.exit(1)" &
python3 -c "import time; import sys; time.sleep(3); sys.exit(0)" &

wait -n
wait -n

`wait -n` waits for the next job to complete and returns its exit code. Because we used `set -e`, a non-zero exit code will abort the whole script.

Note that it will still leave the other job running in the background. If you don't want that you can do something like this:

{ wait -n && wait -n ; } || { wait; exit 1; }

I think if you need something much more complex you should not be using shell scripts. Do it in Python or Deno.

Timmmm