If you want to run commands in parallel in a controlled manner, i.e. (1) limit the number of parallel commands, (2) track their return statuses, and (3) start new commands as their predecessors finish until all commands have run, you can reuse a simple harness, copied from my other answer here.
Just plug in your preferences and replace do_something_and_maybe_fail with the programs you want to run (you can iterate through them by modifying the place where pname is generated, i.e. some_program_{a..f}{0..5}), and you're good to go; a short adaptation sketch follows the listing.
The harness is runnable as-is. Its processes randomly sleep and randomly fail, and there are 20 execution slots (MAX_PARALLELISM) for 36 "commands" (some_program_{a..f}{0..5}), so some commands will have to wait for others to finish, keeping at most 20 running in parallel.
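One caveat before the listing: the harness relies on wait -p to find out which child just exited, and that flag only appeared in bash 5.1, so the script won't run on older shells. If you want to make the requirement explicit, a guard along these lines (my illustrative addition, not part of the harness itself) can go right after set -euo pipefail:

# 'wait -p' (used below) was introduced in bash 5.1;
# refuse to run on older shells.
if ((BASH_VERSINFO[0] < 5 || (BASH_VERSINFO[0] == 5 && BASH_VERSINFO[1] < 1))); then
  echo "bash >= 5.1 required (for 'wait -p')" 1>&2
  exit 1
fi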
#!/bin/bash
set -euo pipefail

declare -ir MAX_PARALLELISM=20  # pick a limit
declare -i pid
declare -a pids=()

# Demo workload: sleeps 0-9 seconds, then exits with 0 or 5 at random.
do_something_and_maybe_fail() {
  sleep $((RANDOM % 10))
  return $((RANDOM % 2 * 5))
}

for pname in some_program_{a..f}{0..5}; do  # 36 items
  if ((${#pids[@]} >= MAX_PARALLELISM)); then
    # All slots taken: wait for any one child; 'wait -p pid' stores its PID.
    wait -p pid -n \
      && echo "${pids[pid]} succeeded" 1>&2 \
      || echo "${pids[pid]} failed with ${?}" 1>&2
    unset 'pids[pid]'  # free the slot
  fi
  do_something_and_maybe_fail &  # forking here; real work goes in its place
  pids[$!]="${pname}"  # remember which name belongs to this PID
  echo "${#pids[@]} running" 1>&2
done

# Drain: wait for the jobs that are still running.
for pid in "${!pids[@]}"; do
  wait -n "$((pid))" \
    && echo "${pids[pid]} succeeded" 1>&2 \
    || echo "${pids[pid]} failed with ${?}" 1>&2
done
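To put the harness to real use, fork the program named by pname instead of the demo stub; everything else (the slot accounting, the wait -p bookkeeping, the final drain loop) stays unchanged. A minimal sketch, assuming each generated name resolves to an executable that needs no arguments (an assumption for illustration only):

# Inside the for-pname loop, replace the stub fork with the real program:
"${pname}" &           # fork the actual program for this item
pids[$!]="${pname}"    # bookkeeping stays identical

If your programs take arguments, the same line becomes e.g. some_wrapper "${pname}" &, where some_wrapper stands for whatever invocation fits your case.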