0

I have multiple scripts that I launch in the background. What I want to do is launch another script when any of the previous ones has finished execution (these scripts have unpredictable execution times, so I don't know which of them will finish first).

Something like this

exec ./MyScript1.sh &
exec ./MyScript2.sh &
exec ./MyScript3.sh &

#Now wait until any of them finishes and launch ./MyScript4.sh
#What do I have to do?

I've read about the wait shell builtin, but it waits for all jobs to finish, and I need to continue as soon as just one of them does. Any ideas?

Alexey Vukolov
  • You can refer to this [link](http://stackoverflow.com/questions/7485776/start-script-after-another-one-already-running-finishes) – Dharmesh May 28 '14 at 08:31
  • Use [gnu parallel](http://www.gnu.org/software/parallel/) – choroba May 28 '14 at 08:38
  • The first link is not useful for me, as it suggests polling. Polling takes time, and my scripts are speed tests, so losing a second is not an option. – Alexey Vukolov May 28 '14 at 11:10
  • Can you not just have them run another script when they finish? –  May 28 '14 at 14:31
  • What do `MyScript{1,2,3}.sh` provide that `MyScript4.sh` requires, that it doesn't matter which one provides it? – chepner May 28 '14 at 14:39

3 Answers

0

Start a long-running process which each of the three background processes will try to kill once they've completed, then wait for that process to die before running MyScript4.sh. This does require that you not use exec to execute MyScript{1,2,3}.sh, so hopefully that is not a hard requirement.

# Assuming that 100,000 seconds is long enough 
# for at least one bg job to complete
sleep 100000 & sleep_pid=$!

{ MyScript1.sh; kill $sleep_pid 2>/dev/null; } &
{ MyScript2.sh; kill $sleep_pid 2>/dev/null; } &
{ MyScript3.sh; kill $sleep_pid 2>/dev/null; } &
wait $sleep_pid
MyScript4.sh
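
The two jobs that did not win keep running in the background after MyScript4.sh starts. If the surrounding script must not exit until they are done as well, a plain wait with no arguments covers that; a minimal sketch of what could follow the block above:

# optional: after MyScript4.sh has run, block until the two remaining
# background jobs have also exited
wait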

A similar option is to use a blocking read on a named pipe. (As presented, this has the additional drawback that only one of the background jobs can finish, as the other two will block trying to write to starter until somebody reads two more lines from it.)

mkfifo starter
{ MyScript1.sh; echo foo > starter; } &
{ MyScript2.sh; echo foo > starter; } &
{ MyScript3.sh; echo foo > starter; } &
read < starter && MyScript4.sh
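
One way around that drawback (a minimal, untested sketch in bash) is to keep the pipe open read-write on a spare file descriptor, so none of the echo writers can ever block:

mkfifo starter
# Hold the FIFO open for both reading and writing on fd 3; the writers
# inherit this fd, so their echoes never block waiting for a reader.
exec 3<> starter
{ MyScript1.sh; echo foo >&3; } &
{ MyScript2.sh; echo foo >&3; } &
{ MyScript3.sh; echo foo >&3; } &
# The first line on fd 3 means one job has finished.
read -u 3 && MyScript4.sh
exec 3<&-   # later writes from the slower jobs just land in the pipe buffer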
chepner
0

I would recommend keeping a counter that is incremented by 1 before each script does its work and decremented by 1 after it finishes. This way you know how many are running simultaneously. Then you just need to watch whether this value is above or below a certain threshold (let's say 3) before launching an additional process.

Let me give an example:

run_scripts.sh:

#!/bin/bash

#allowed simultaneous scripts 
num_scripts=3

#initialize counter
script_counter=0
echo $script_counter
echo $script_counter > ./script_counter

# for loop to run the scripts
for i in `seq 1 10`
do
    ./script1.sh &
    sleep 0.1
    script_counter=`head -n 1 ./script_counter`
    echo $script_counter

    # wait until the number of running scripts is lower than 4
    while [ $script_counter -gt $num_scripts  ]
    do
        sleep 0.5
        script_counter=`head -n 1 ./script_counter`
    done
done

script1.sh:

#!/bin/bash

# increment counter
script_counter=`head -n 1 ./script_counter`
script_counter=$(($script_counter + 1))
echo $script_counter > ./script_counter
#echo $script_counter

# your code goes here
rand_numb=$(($RANDOM%10+1))
sleep $rand_numb
######

# decrement counter
script_counter=`head -n 1 ./script_counter`
script_counter=$(($script_counter - 1))
echo $script_counter > ./script_counter
#echo $script_counter
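
Note that this read-modify-write of ./script_counter can race when two scripts finish at the same moment, so one update may be lost. If that matters, one option (just a sketch, assuming the flock utility is available) is to serialize each update through a lock file:

# increment the counter atomically: the whole read-modify-write
# runs under an exclusive lock on ./script_counter.lock
(
    flock -x 200
    script_counter=`head -n 1 ./script_counter`
    echo $(($script_counter + 1)) > ./script_counter
) 200> ./script_counter.lock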
ASantosRibeiro
0

Add these lines at the end of each of your MyScript1-3.sh scripts:

if mkdir /tmp/mylock ; then
  ./MyScript4.sh
fi

This uses /tmp/mylock as a lock to synchronize the processes: mkdir is atomic, so only the first process to run the mkdir command succeeds and gets into the if block. The others will all fail.
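
On the launching side this could look roughly like the sketch below (same /tmp/mylock path assumed); the one extra step is clearing a stale lock directory left over from a previous run:

# remove a lock directory left by an earlier run, otherwise no script
# would win the mkdir and MyScript4.sh would never be started
rm -rf /tmp/mylock

./MyScript1.sh &
./MyScript2.sh &
./MyScript3.sh &
wait   # optional: MyScript4.sh runs inside whichever script finishes first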

3329