
I have a simple loop that finds all the test files in a directory. Inside the loop, each command is started in the background with &. A second loop then waits for all the processes to complete and checks their return codes to make sure none of them failed; if any did, the entire script must fail. This approach works, but the output from all the background processes gets interleaved.

#!/bin/bash

for f in $(find tests -name '*.test.php')
do
    phpunit "$f" &
done

FAIL=0

for job in $(jobs -p)
do
    wait "$job" || FAIL=$?
done

exit $FAIL

I can make each process print its output only when it has finished by running the command in a subshell, like this:

echo "$(phpunit "$f")" &

Now the output looks great, but there's no obvious way to get the return code: $? gives me the return code of echo, which is always 0, so the check for failed tests breaks.
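
A minimal demonstration of the problem, with false standing in for a failing test:

# The exit status inside the command substitution is discarded;
# $? reports echo, which always succeeds.
echo "$(false)" &
wait $!
echo "exit: $?"    # prints "exit: 0" even though false failed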

Is there a way to get a nice output (all at once when finished) and check the return value at the same time?

I thought about redirecting the outputs to files, but then how would I echo them after wait? Ideally I'd like to avoid writing to files at all.
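
For completeness, here is a rough sketch of what that file-based approach could look like (the mktemp bookkeeping is just one possible way to do it, not my actual script):

#!/bin/bash

FAIL=0
declare -A logs    # maps each background PID to its log file

for f in $(find tests -name '*.test.php')
do
    log=$(mktemp)
    phpunit "$f" > "$log" 2>&1 &
    logs[$!]=$log
done

for job in "${!logs[@]}"
do
    wait "$job" || FAIL=$?
    cat "${logs[$job]}"    # print the buffered output once the job is done
    rm -f "${logs[$job]}"
done

exit $FAIL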

Edit: This should never have been marked as a duplicate, because now I can't properly answer my own question... Here is the solution I found:

At the beginning of the script I added set -o pipefail
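
Without pipefail, the exit status of a pipeline is that of its last command, so a phpunit failure would be masked by whatever buffers its output. A quick demonstration:

false | cat; echo $?    # prints 0: cat's status wins
set -o pipefail
false | cat; echo $?    # prints 1: the failing command's status is propagated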

In the original solution I replaced phpunit "$f" & with phpunit "$f" | php -r "echo file_get_contents('php://stdin');" & (the one-liner reads all of stdin before echoing it, so each test's output is printed in one piece).

With the moreutils package installed, this can also be phpunit "$f" | sponge & (sponge soaks up its entire input before writing anything, which gives the same buffering).

That's it. When a test finishes running, its output is printed all at once. When a test fails, the exit code of the main script is 1; when everything passes, it's 0. Complete script:

#!/bin/bash

set -o pipefail

for f in $(find tests -name '*.test.php')
do
    phpunit "$f" | sponge &
done

FAIL=0

for job in $(jobs -p)
do
    wait "$job" || FAIL=$?
done

exit $FAIL