470

I am trying to write a .sh file that runs many programs simultaneously

I tried this

prog1 
prog2

But that runs prog1, waits until prog1 ends, and only then starts prog2...

So how can I run them in parallel?

tripleee
Betamoo

19 Answers

475

How about:

prog1 & prog2 && fg

This will:

  1. Start prog1.
  2. Send it to the background, but keep printing its output.
  3. Start prog2, and keep it in the foreground, so you can close it with ctrl-c.
  4. When you close prog2, you'll return to prog1's foreground, so you can also close it with ctrl-c.
Ory Band
  • 14
    Is there an easy way to terminate `prog1` when `prog2` terminates? Think of `node srv.js & cucumberjs` – jpsecher Nov 05 '15 at 16:31
  • 34
    Just tried this, and it didn't work as expected for me. However, a slight modification worked: `prog1 & prog2 ; fg` This was for running multiple ssh tunnels at once. Hope this helps someone. – jnadro52 Jan 20 '16 at 20:18
  • 2
    @jnadro52 your solution has the effect that if `prog2` fails to run immediately, you'll get back to having `prog1` in the foreground. If this is desirable, then it's ok. – Ory Band Jan 21 '16 at 09:29
  • 1
@OryBand It starts both processes. When `ctrl+c` is pressed for the first time, it stops the second program and exits with the error `line 7: fg: no job control` – Chillar Anand Jun 06 '16 at 06:31
  • 3
In an SSH'ed shell, if you execute a command like this, it will be tricky to kill prog1. Ctrl-c didn't work for me. Even killing the whole terminal left prog1 running. – mercury0114 Feb 21 '17 at 20:44
  • 28
    @jnadro52 A way to terminate both processes at once is `prog1 & prog2 && kill $!`. – zaboco Apr 25 '17 at 16:35
  • 1
    You can stop both commands with one `ctrl+c` with `prog1 & pid1="$!" ; prog2 ; kill $pid1` – greuze Nov 06 '17 at 14:40
  • 1
This worked for me. It brings prog1 to the foreground after I press ctrl+c the first time. `prog1 & prog2 && fg; fg` – Srikant Nov 23 '18 at 08:33
  • 2
    Note that `fg` won't work in a shell script. You'll get " fg: no job control". This is because a script is not an interactive shell. – Jack Kinsella Jun 03 '21 at 08:01
441

To run multiple programs in parallel:

prog1 &
prog2 &

If you need your script to wait for the programs to finish, you can add:

wait

at the point where you want the script to wait for them.
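
Putting the pieces together, a minimal sketch of such a script (prog1 and prog2 stand in for your actual commands):

#!/bin/bash
prog1 &   # launch prog1 in the background
prog2 &   # launch prog2 in the background
wait      # block until both background jobs have finished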

psmears
  • 92
    Do not forget the `wait`! Yes, in bash you can wait for the script's child processes. – Dummy00001 Jun 10 '10 at 18:43
  • 7
    Another option is to use `nohup` to prevent the program from being killed when the shell hangs up. – Philipp Jul 24 '10 at 13:31
  • 1
    @liang: Yes, it will work with three or more programs too. – psmears Apr 19 '19 at 22:01
  • 1
Maybe a silly question, but what if I want to run `prog1 | something & prog2 | another &`? I am pretty sure that it would not work. – Micha93 Feb 26 '21 at 14:07
  • 1
    @Micha93: it works just fine; why do you think it won't? – psmears Feb 26 '21 at 15:59
  • Is there a way to list down only the processes that are created/running from the current bash terminal? – Sajib Mar 01 '22 at 05:37
  • @Sajib: That might be worth asking as a full question, but the short answer is you can run `jobs` to see the processes put into the background from the current shell. – psmears Mar 01 '22 at 09:54
@psmears What about the scenario where prog1 does not return control of the terminal, meaning one program occupies the terminal, so prog2 would require a new terminal? – Dev May 25 '22 at 13:20
  • @Dev: I'm not sure what you mean. Obviously you can't run more than one program that requires full control of the terminal at the same time in the same terminal - but that's not what's being asked in the question - it asks about running lots of programs simultaneously from the same script :) – psmears May 25 '22 at 13:23
  • @psmears Do both `prog1` and `prog2` run in background? – dustin2022 Jul 06 '22 at 08:28
  • @dustin2022: If they both have `&` after them then yes, they both run in the background. – psmears Jul 06 '22 at 08:31
Would be cool if somebody could show how it works with wait, with prog1 and prog2 running in parallel. Not interactively - that can be achieved in many simpler ways, like just running a second terminal... I suppose wait magically tracks all running background processes from this script? How can I make sure my script aborts if one of the programs exits with != 0? – schwaller Jul 07 '23 at 14:17
248

If you want to be able to easily run and kill multiple processes with ctrl-c, this is my favorite method: spawn multiple background processes in a (…) subshell, and trap SIGINT to execute kill 0, which will kill everything spawned in the subshell group:

(trap 'kill 0' SIGINT; prog1 & prog2 & prog3)

You can have complex process execution structures, and everything will close with a single ctrl-c (just make sure the last process is run in the foreground, i.e., don't include a & after prog1.3):

(trap 'kill 0' SIGINT; prog1.1 && prog1.2 & (prog2.1 | prog2.2 || prog2.3) & prog1.3)

If there is a chance the last command might exit early and you want to keep everything else running, add wait as the last command. In the following example, sleep 2 would have exited first, killing sleep 4 before it finished; adding wait allows both to run to completion:

(trap 'kill 0' SIGINT; sleep 4 & sleep 2 & wait)
Quinn Comendant
  • 25
    This is the best answer by far. – Nic May 28 '20 at 15:22
  • 3
    What's the `kill 0`? Is that PID 0 which is the subshell itself? – mpen Apr 07 '21 at 23:40
  • 6
    @mpen That's correct, the `kill` program interprets `0` as *“All processes in the current process group are signaled.”* The [man page](https://man7.org/linux/man-pages/man1/kill.1.html#ARGUMENTS) includes this description. – Quinn Comendant Apr 08 '21 at 03:49
Amazing, worked fine. This is a great example of how useful a subshell can be. – Ângelo Polotto Apr 13 '21 at 17:49
  • I had to use `trap 'kill 0' INT;` instead of `SIGINT` – forresthopkinsa Aug 09 '21 at 04:47
  • 1
I use `(trap 'kill 0' SIGINT EXIT; blabla)` instead, since it's more robust (when an error occurs it kills all daemon processes) – luochen1990 Nov 12 '21 at 07:00
  • Thanks! FYI: I need to `grep` output for each command, so I found I can simply use `(trap 'kill 0' SIGINT; prog1 & prog2) | grep XXX` instead of `(trap 'kill 0' SIGINT; (prog1 | grep XXX) & (prog2 | grep XXX))`. – Yun Wu Jan 14 '22 at 09:42
  • 2
    This only works if the last command (the foreground one) also exits last, otherwise the whole thing exits too early. – Ingo Bürk Mar 18 '22 at 11:40
  • 3
    `(trap 'kill 0' SIGINT; prog1 & prog2 & prog3 & wait)` helps ensure all programs finish – jakeonfire Jun 24 '22 at 20:07
  • This actually works on WSL2! – OldBuildingAndLoan Aug 09 '22 at 11:37
  • 1
    Ingo Bürk's comment seems correct. eg `(trap 'kill 0' SIGINT; (sleep 4; echo Hello) & (sleep 2 ; echo World))` results in: two seconds elapse, "World" prints and the remaining process is in the background (ie, you have an active shell prompt again), then two more seconds elapse and "Hello" prints. – orion elenzil Nov 15 '22 at 17:00
  • I've updated the answer to suggest using `wait` as the last command. Thanks to Ingo Bürk, orion elenzil, and jakeonfire for the suggestions. – Quinn Comendant Nov 20 '22 at 23:50
  • 1
    This is a terrific answer, but I suggest you move the updated `wait` version first, and make that the main solution. 99% of the time this is what users would expect. – Steven Spungin Nov 30 '22 at 13:04
138

You can use wait:

some_command &
P1=$!
other_command &
P2=$!
wait $P1 $P2

It assigns the background program PIDs to variables ($! is the last launched process' PID), then the wait command waits for them. It is nice because if you kill the script, it kills the processes too!
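
If you have more than a couple of commands, one variant of this idea (a sketch; some_command and other_command are placeholders) is to collect the PIDs in an array, wait on each, and capture failures:

#!/bin/bash
pids=()
some_command & pids+=($!)
other_command & pids+=($!)

status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=$?   # wait reports each job's exit status
done
exit "$status"                 # non-zero if any job failed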

trusktr
  • 13
    [In my experience](https://i.imgur.com/BpPhYaP.png), killing wait doesn't also kill the other processes. – Quinn Comendant Aug 27 '18 at 05:47
  • 1
    If i am starting background processes in a loop how can i wait for every background process to complete before moving forward with the execution of the next set of commands. `#!/usr/bin/env bash ARRAY='cat bat rat' for ARR in $ARRAY do ./run_script1 $ARR & done P1=$! wait $P1 echo "INFO: Execution of all background processes in the for loop has completed.." ` – Yash Oct 22 '18 at 17:18
  • @Yash I think you can save the process IDs into an array, then call wait on the array. I think you have to use `${}` to interpolate it into a string list or similar. – trusktr Nov 01 '18 at 21:23
  • the best answer, and for me killing the script kills the processes too! macOS Catalina, zsh console – Michael Klishevich May 06 '20 at 16:07
  • 2
    Using `wait` fails to kill my second process. – frodo2975 Dec 28 '20 at 18:55
  • 1
    Fantastic answer. Is it possible also to capture exit codes still to abort if there is a failure? – openCivilisation Mar 02 '21 at 07:00
I have the same problem mentioned above; wait doesn't kill the second process after Ctrl+C. – Ângelo Polotto Apr 13 '21 at 17:49
Killing wait won't kill the other processes, but killing the shell calling them will (unless they are nohup'ed). So as trusktr states, if you kill the shell script, the other processes will die too – tbrugere May 28 '21 at 13:44
95

With GNU Parallel http://www.gnu.org/software/parallel/ it is as easy as:

(echo prog1; echo prog2) | parallel

Or if you prefer:

parallel ::: prog1 prog2
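
For example, to run one command over several arguments with at most two jobs at a time ({} is replaced by each argument; the sleep durations here are arbitrary):

parallel -j 2 'sleep {}; echo {}' ::: 3 1 2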


Geremia
Ole Tange
  • 10
    It is worth noting that there are different versions of `parallel` with different syntax. For example, on Debian derivatives the `moreutils` package contains a different command called `parallel` which behaves quite differently. – Joel Cross Nov 18 '16 at 14:24
  • 9
    is `parallel` better than using `&`? – Optimus Prime Dec 24 '18 at 11:15
  • 6
    @OptimusPrime It really depends. GNU Parallel introduces some overhead, but in return gives you much more control over the running jobs and output. If two jobs print at the same time, GNU Parallel will make sure the output is not mixed. – Ole Tange Dec 25 '18 at 02:44
  • 2
    @OptimusPrime `parallel` is better when there are more jobs than cores, in which case `&` would run more than one job per core at once. (cf. [pigeonhole principle](https://artofproblemsolving.com/wiki/index.php?title=Pigeonhole_Principle)) – Geremia Aug 26 '19 at 18:09
  • 1
    This is life altering .. – kargirwar Jul 12 '23 at 06:36
33

xargs -P <n> allows you to run <n> commands in parallel.

While -P is a nonstandard option, both the GNU (Linux) and macOS/BSD implementations support it.

The following example:

  • runs at most 3 commands in parallel at a time,
  • with additional commands starting only when a previously launched process terminates.
time xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF

The output looks something like:

1   # output from 1st command 
4   # output from *last* command, which started as soon as the count dropped below 3
2   # output from 2nd command
3   # output from 3rd command

real    0m3.012s
user    0m0.011s
sys 0m0.008s

The timing shows that the commands were run in parallel (the last command was launched only after the first of the original 3 terminated, but executed very quickly).

The xargs command itself won't return until all commands have finished, but you can execute it in the background by terminating it with control operator & and then using the wait builtin to wait for the entire xargs command to finish.

{
  xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF
} &

# Script execution continues here while `xargs` is running 
# in the background.
echo "Waiting for commands to finish..."

# Wait for `xargs` to finish, via special variable $!, which contains
# the PID of the most recently started background process.
wait $!

Note:

  • BSD/macOS xargs requires you to specify the count of commands to run in parallel explicitly, whereas GNU xargs allows you to specify -P 0 to run as many as possible in parallel.

  • Output from the processes run in parallel arrives as it is being generated, so it will be unpredictably interleaved.

    • GNU parallel, as mentioned in Ole's answer (does not come standard with most platforms), conveniently serializes (groups) the output on a per-process basis and offers many more advanced features.
mklement0
16

Here is a function I use in order to run at most n processes in parallel (n=4 in the example):

max_children=4

function parallel {
  local time1=$(date +"%H:%M:%S")
  local time2=""

  # for the sake of the example, I'm using $2 as a description; you may want a different one
  echo "starting $2 ($time1)..."
  "$@" && time2=$(date +"%H:%M:%S") && echo "finishing $2 ($time1 -- $time2)..." &

  local my_pid=$$
  local children=$(ps -eo ppid | grep -w $my_pid | wc -w)
  children=$((children-1))
  if [[ $children -ge $max_children ]]; then
    wait -n
  fi
}

parallel sleep 5
parallel sleep 6
parallel sleep 7
parallel sleep 8
parallel sleep 9
wait

If max_children is set to the number of cores, this function will try to avoid idle cores.
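
A shorter sketch of the same throttling idea, assuming bash 4.3+ for wait -n (the sleep jobs stand in for real work):

max_children=4
for t in 5 6 7 8 9; do
    while (( $(jobs -rp | wc -l) >= max_children )); do
        wait -n              # block until any one background job exits
    done
    sleep "$t" &             # launch the next job
done
wait                         # wait for the remaining jobs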

arnaldocan
  • 1
Nice snippet, but I can't find the explanation of `wait -n`; under my bash it says that it is an invalid option. Typo, or did I miss something? – Emmanuel Devaux Mar 06 '17 at 14:10
  • 3
    @EmmanuelDevaux: `wait -n` requires **`bash` 4.3+** and it changes the logic to waiting for _any_ of the specified / implied processes to terminate. – mklement0 May 22 '17 at 23:06
What if one of the tasks fails and I then want to end the script? – 52coder Nov 17 '18 at 03:19
@52coder You can adjust the function to capture a failed child, something like: `"$@" && time2=$(date +"%H:%M:%S") && echo "finishing $2 ($time1 -- $time2)..." || error=1 &`. Then test for error in the "if" part and abort the function if needed – arnaldocan Nov 18 '18 at 17:31
Thanks for the `wait -n` command. I think it would help [this nice answer to a related question](https://stackoverflow.com/a/880864/3780389). – teichert Jun 17 '22 at 18:38
15
#!/bin/bash
prog1 2> .errorprog1.log & prog2 2> .errorprog2.log &

Redirect errors to separate logs.

fermin
  • 13
    You have to put the ampersands after the redirections and leave out the semicolon (the ampersand will also perform the function of a command separator): `prog1 2> .errorprog1.log & prog2 2> .errorprog2.log &` – Dennis Williamson Jun 09 '10 at 10:42
The semicolon executes both commands; you can test it in bash to see that it works ;) Example: `pwd & 2> .errorprog1.log; echo "wop" & 2> .errorprog2.log` - when you put `&` you put the program in the background and immediately execute the next command. – fermin Jun 09 '10 at 22:49
  • 2
    It doesn't work - the errors do not get redirected to the file. Try with: `ls notthere1 & 2> .errorprog1.log; ls notthere2 & 2>.errorprog2.log`. The errors go to the console, and both error files are empty. As @Dennis Williamson says, `&` is a separator, like `;`, so (a) it needs to go at the end of the command (after any redirecton), and (b) you don't need the `;` at all :-) – psmears Dec 12 '10 at 20:38
9

This works beautifully for me:

sh -c 'command1 & command2 & command3 & wait'

It outputs all the logs of each command intermingled (which is what I wanted), and all are killed with ctrl+c.
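
A quick way to see the interleaving for yourself (the sleep/echo pairs are stand-ins for real commands):

sh -c '(sleep 1; echo one) & (sleep 2; echo two) & wait'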

joe
8

There is a very useful program called nohup.

nohup - run a command immune to hangups, with output to a non-tty
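
For example, to start both programs in the background so they keep running even after the shell hangs up (the log file names here are arbitrary):

nohup prog1 > prog1.out 2>&1 &
nohup prog2 > prog2.out 2>&1 &
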
Yun
3h4x
  • 5
    `nohup` by itself doesn't run anything in the background, and using `nohup` is not a requirement or prerequisite for running tasks in the background. They are often useful together but as such, this doesn't answer the question. – tripleee Jan 24 '18 at 15:32
7

You can try ppss (abandoned). ppss is rather powerful - you can even create a mini-cluster. xargs -P can also be useful if you've got a batch of embarrassingly parallel processing to do.

ljt
7

I had a similar situation recently where I needed to run multiple programs at the same time, redirect their outputs to separate log files, and wait for them to finish. I ended up with something like this:

#!/bin/bash

# Add the full path processes to run to the array
PROCESSES_TO_RUN=("/home/joao/Code/test/prog_1/prog1" \
                  "/home/joao/Code/test/prog_2/prog2")
# You can keep adding processes to the array...

for i in "${PROCESSES_TO_RUN[@]}"; do
    "${i%/*}/./${i##*/}" > "${i}.log" 2>&1 &
    # ${i%/*}  -> Get the folder name up to the last /
    # ${i##*/} -> Get the filename after the last /
done

# Wait for the processes to finish
wait

Source: http://joaoperibeiro.com/execute-multiple-programs-and-redirect-their-outputs-linux/

Joaopcribeiro
4

Process Spawning Manager

Sure, technically these are processes, and this program should really be called a process spawning manager, but this is only due to the way that bash works when it forks using the ampersand: it uses the fork() (or perhaps clone()) system call, which clones into a separate memory space, rather than something like pthread_create(), which would share memory. If bash supported the latter, each "sequence of execution" would operate just the same and could be termed a traditional thread, while gaining a more efficient memory footprint. Functionally, however, it works the same, though it is a bit more difficult, since GLOBAL variables are not available in each worker clone; hence the use of the inter-process communication file and the rudimentary flock semaphore to manage critical sections.

Forking from bash is of course the basic answer here, but I feel as if people know that and are really looking to manage what is spawned rather than just fork it and forget it. This demonstrates a way to manage up to 200 instances of forked processes, all accessing a single resource. Clearly this is overkill, but I enjoyed writing it, so I kept on. Increase the size of your terminal accordingly. I hope you find this useful.

ME=$(basename $0)
IPC="/tmp/$ME.ipc"      #interprocess communication file (global thread accounting stats)
DBG=/tmp/$ME.log
echo 0 > $IPC           #initialize counter
F1=thread
SPAWNED=0
COMPLETE=0
SPAWN=1000              #number of jobs to process
SPEEDFACTOR=1           #dynamically compensates for execution time
THREADLIMIT=50          #maximum concurrent threads
TPS=1                   #threads per second delay
THREADCOUNT=0           #number of running threads
SCALE="scale=5"         #controls bc's precision
START=$(date +%s)       #whence we began
MAXTHREADDUR=6         #maximum thread life span - demo mode

LOWER=$[$THREADLIMIT*100*90/10000]   #90% worker utilization threshold
UPPER=$[$THREADLIMIT*100*95/10000]   #95% worker utilization threshold
DELTA=10                             #initial percent speed change

threadspeed()        #dynamically adjust spawn rate based on worker utilization
{
   #vaguely assumes thread execution average will be consistent
   THREADCOUNT=$(threadcount)
   if [ $THREADCOUNT -ge $LOWER ] && [ $THREADCOUNT -le $UPPER ] ;then
      echo SPEED HOLD >> $DBG
      return
   elif [ $THREADCOUNT -lt $LOWER ] ;then
      #if maxthread is free speed up
      SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1-($DELTA/100))"|bc)
      echo SPEED UP $DELTA%>> $DBG
   elif [ $THREADCOUNT -gt $UPPER ];then
      #if maxthread is active then slow down
      SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1+($DELTA/100))"|bc)
      DELTA=1                            #begin fine grain control
      echo SLOW DOWN $DELTA%>> $DBG
   fi

   echo SPEEDFACTOR $SPEEDFACTOR >> $DBG

   #average thread duration   (total elapsed time / number of threads completed)
   #if threads completed is zero (less than 100), default to maxdelay/2  maxthreads

   COMPLETE=$(cat $IPC)

   if [ -z $COMPLETE ];then
      echo BAD IPC READ ============================================== >> $DBG
      return
   fi

   #echo Threads COMPLETE $COMPLETE >> $DBG
   if [ $COMPLETE -lt 100 ];then
      AVGTHREAD=$(echo "$SCALE;$MAXTHREADDUR/2"|bc)
   else
      ELAPSED=$[$(date +%s)-$START]
      #echo Elapsed Time $ELAPSED >> $DBG
      AVGTHREAD=$(echo "$SCALE;$ELAPSED/$COMPLETE*$THREADLIMIT"|bc)
   fi
   echo AVGTHREAD Duration is $AVGTHREAD >> $DBG

   #calculate timing to spawn each worker fast enough
   # to utilize threadlimit - average time it takes to complete one thread / max number of threads
   TPS=$(echo "$SCALE;($AVGTHREAD/$THREADLIMIT)*$SPEEDFACTOR"|bc)
   #TPS=$(echo "$SCALE;$AVGTHREAD/$THREADLIMIT"|bc)  # maintains pretty good
   #echo TPS $TPS >> $DBG

}
function plot()
{
   echo -en \\033[${2}\;${1}H

   if [ -n "$3" ];then
         if [[ $4 = "good" ]];then
            echo -en "\\033[1;32m"
         elif [[ $4 = "warn" ]];then
            echo -en "\\033[1;33m"
         elif [[ $4 = "fail" ]];then
            echo -en "\\033[1;31m"
         elif [[ $4 = "crit" ]];then
            echo -en "\\033[1;31;4m"
         fi
   fi
      echo -n "$3"
      echo -en "\\033[0;39m"
}

trackthread()   #displays thread status
{
   WORKERID=$1
   THREADID=$2
   ACTION=$3    #setactive | setfree | update
   AGE=$4

   TS=$(date +%s)

   COL=$[(($WORKERID-1)/50)*40]
   ROW=$[(($WORKERID-1)%50)+1]

   case $ACTION in
      "setactive" )
         touch /tmp/$ME.$F1$WORKERID  #redundant - see main loop
         #echo created file $ME.$F1$WORKERID >> $DBG
         plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID INIT    " good
         ;;
      "update" )
         plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID AGE:$AGE" warn
         ;;
      "setfree" )
         plot $COL $ROW "Worker$WORKERID: FREE                         " fail
         rm /tmp/$ME.$F1$WORKERID
         ;;
      * )

      ;;
   esac
}

getfreeworkerid()
{
   for i in $(seq 1 $[$THREADLIMIT+1])
   do
      if [ ! -e /tmp/$ME.$F1$i ];then
         #echo "getfreeworkerid returned $i" >> $DBG
         break
      fi
   done
   if [ $i -eq $[$THREADLIMIT+1] ];then
      #echo "no free threads" >> $DBG
      echo 0
      #exit
   else
      echo $i
   fi
}

updateIPC()
{
   COMPLETE=$(cat $IPC)        #read IPC
   COMPLETE=$[$COMPLETE+1]     #increment IPC
   echo $COMPLETE > $IPC       #write back to IPC
}


worker()
{
   WORKERID=$1
   THREADID=$2
   #echo "new worker WORKERID:$WORKERID THREADID:$THREADID" >> $DBG

   #accessing common terminal requires critical blocking section
   (flock -x -w 10 201
      trackthread $WORKERID $THREADID setactive
   )201>/tmp/$ME.lock

   let "RND = $RANDOM % $MAXTHREADDUR +1"

   for s in $(seq 1 $RND)               #simulate random lifespan
   do
      sleep 1;
      (flock -x -w 10 201
         trackthread $WORKERID $THREADID update $s
      )201>/tmp/$ME.lock
   done

   (flock -x -w 10 201
      trackthread $WORKERID $THREADID setfree
   )201>/tmp/$ME.lock

   (flock -x -w 10 201
      updateIPC
   )201>/tmp/$ME.lock
}

threadcount()
{
   TC=$(ls /tmp/$ME.$F1* 2> /dev/null | wc -l)
   #echo threadcount is $TC >> $DBG
   THREADCOUNT=$TC
   echo $TC
}

status()
{
   #summary status line
   COMPLETE=$(cat $IPC)
   plot 1 $[$THREADLIMIT+2] "WORKERS $(threadcount)/$THREADLIMIT  SPAWNED $SPAWNED/$SPAWN  COMPLETE $COMPLETE/$SPAWN SF=$SPEEDFACTOR TIMING=$TPS"
   echo -en '\033[K'                   #clear to end of line
}

function main()
{
   while [ $SPAWNED -lt $SPAWN ]
   do
      while [ $(threadcount) -lt $THREADLIMIT ] && [ $SPAWNED -lt $SPAWN ]
      do
         WID=$(getfreeworkerid)
         worker $WID $SPAWNED &
         touch /tmp/$ME.$F1$WID    #if this loops faster than file creation in the worker thread it steps on itself, thread tracking is best in main loop
         SPAWNED=$[$SPAWNED+1]
         (flock -x -w 10 201
            status
         )201>/tmp/$ME.lock
         sleep $TPS
        if ((! $[$SPAWNED%100]));then
           #rethink thread timing every 100 threads
           threadspeed
        fi
      done
      sleep $TPS
   done

   while [ "$(threadcount)" -gt 0 ]
   do
      (flock -x -w 10 201
         status
      )201>/tmp/$ME.lock
      sleep 1;
   done

   status
}

clear
threadspeed
main
wait
status
echo
Josiah DeWitt
1

Since for some reason I can't use wait, I came up with this solution:

# create a hashmap of the tasks name -> its command
declare -A tasks=(
  ["Sleep 3 seconds"]="sleep 3"
  ["Check network"]="ping imdb.com"
  ["List dir"]="ls -la"
)

# execute each task in the background, redirecting their output to a custom file descriptor
fd=10
for task in "${!tasks[@]}"; do
    script="${tasks[${task}]}"
    eval "exec $fd< <(${script} 2>&1 || (echo $task failed with exit code \${?}! && touch tasks_failed))"
    ((fd+=1))
done

# print the outputs of the tasks and wait for them to finish
fd=10
for task in "${!tasks[@]}"; do
    cat <&$fd
    ((fd+=1))
done

# determine the exit status
#   by checking whether the file "tasks_failed" has been created
if [ -e tasks_failed ]; then
    echo "Task(s) failed!"
    exit 1
else
    echo "All tasks finished without an error!"
    exit 0
fi
Tovask
0

Your script should look like:

prog1 &
prog2 &
.
.
progn &
wait
progn+1 &
progn+2 &
.
.

Assuming your system can handle n jobs at a time, use wait to run only n jobs at once.
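
As a concrete sketch with n=4, using sleep as a stand-in workload:

#!/bin/bash
i=0
for t in 3 1 2 4 2 3 1 5; do
    sleep "$t" &
    (( ++i % 4 == 0 )) && wait   # after every 4th job, wait for the whole batch
done
wait                             # catch any leftover jobs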

amalik2205
0

If you're:

  • On Mac and have iTerm
  • Want to start various processes that stay open long-term / until Ctrl+C
  • Want to be able to easily see the output from each process
  • Want to be able to easily stop a specific process with Ctrl+C

One option is scripting the terminal itself if your use case is more app monitoring / management.

For example I recently did the following. Granted it's Mac specific, iTerm specific, and relies on a deprecated Apple Script API (iTerm has a newer Python option). It doesn't win any elegance awards but gets the job done.

#!/bin/sh
root_path="~/root-path"
auth_api_script="$root_path/auth-path/auth-script.sh"
admin_api_proj="$root_path/admin-path/admin.csproj"
agent_proj="$root_path/agent-path/agent.csproj"
dashboard_path="$root_path/dashboard-web"

osascript <<THEEND
tell application "iTerm"
  set newWindow to (create window with default profile)

  tell current session of newWindow
    set name to "Auth API"
    write text "pushd $root_path && $auth_api_script"
  end tell

  tell newWindow
    set newTab to (create tab with default profile)
    tell current session of newTab
        set name to "Admin API"
        write text "dotnet run --debug -p $admin_api_proj"
    end tell
  end tell

  tell newWindow
    set newTab to (create tab with default profile)
    tell current session of newTab
        set name to "Agent"
        write text "dotnet run --debug -p $agent_proj"
    end tell
  end tell

  tell newWindow
    set newTab to (create tab with default profile)
    tell current session of newTab
        set name to "Dashboard"
        write text "pushd $dashboard_path; ng serve -o"
    end tell
  end tell

end tell
THEEND

(Screenshot: iTerm 2 with the resulting tabs.)

Geoffrey Hudik
0

If you have a GUI terminal, you could spawn a new tabbed terminal instance for each process you want to run in parallel.

This has the benefit that each program runs in its own tab where it can be interacted with and managed independently of the other running programs.

For example, on Ubuntu 20.04:

gnome-terminal --tab -- bash -c 'prog1'
gnome-terminal --tab -- bash -c 'prog2'

To run certain programs or other commands sequentially, you can add ;

gnome-terminal --tab -- bash -c 'prog1_1; prog1_2'
gnome-terminal --tab -- bash -c 'prog2'

I've found that for some programs, the terminal closes before they start up. For these programs I append the terminal command with ; wait or ; sleep 1

gnome-terminal --tab -- bash -c 'prog1; wait'

For Mac OS, you would have to find an equivalent command for the terminal you are using - I haven't tested on Mac OS since I don't own a Mac.

0

There are a lot of interesting answers here, but I took inspiration from this answer and put together a simple script that runs multiple processes in parallel and handles the results once they're done. You can find it in this gist, or below:

#!/usr/bin/env bash

# inspired by https://stackoverflow.com/a/29535256/2860309

pids=""
failures=0

function my_process() {
    seconds_to_sleep=$1
    exit_code=$2
    sleep "$seconds_to_sleep"
    return "$exit_code"
}

(my_process 1 0) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 1 second to success"

(my_process 1 1) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 1 second to failure"

(my_process 2 0) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 2 seconds to success"

(my_process 2 1) &
pid=$!
pids+=" ${pid}"
echo "${pid}: 2 seconds to failure"

echo "..."

for pid in $pids; do
        if wait "$pid"; then
                echo "Process $pid succeeded"
        else
                echo "Process $pid failed"
                failures=$((failures+1))
        fi
done

echo
echo "${failures} failures detected"

This results in:

86400: 1 second to success
86401: 1 second to failure
86402: 2 seconds to success
86404: 2 seconds to failure
...
Process 86400 succeeded
Process 86401 failed
Process 86402 succeeded
Process 86404 failed

2 failures detected
therightstuff
-2

With bashj ( https://sourceforge.net/projects/bashj/ ), you should be able to run not only multiple processes (the way others suggested) but also multiple threads in one JVM, controlled from your script. This of course requires a Java JDK. Threads consume fewer resources than processes.

Here is a working code:

#!/usr/bin/bashj

#!java

public static int cnt=0;

private static void loop() {u.p("java says cnt= "+(cnt++));u.sleep(1.0);}

public static void startThread()
{(new Thread(() ->  {while (true) {loop();}})).start();}

#!bashj

j.startThread()

while [ j.cnt -lt 4 ]
do
  echo "bash views cnt=" j.cnt
  sleep 0.5
done
Fil