
I have some expensive (slow) operations that could run in parallel

SERVERS=$(cmd1)
ROUTERS=$(cmd2)
NETWORKS=$(cmd3)
KEYPAIRS=$(cmd4)

I would like to speed this up by running them in parallel, but without ugly hacks like redirecting their output to files and reading the files back after they finish.

Is there a nice way to parallelize this in bash?

sorin

2 Answers


One thought that comes to mind straight away is temp files: run your jobs in the background redirecting their output to files, wait for the jobs to complete, and read the files back.

# run each job in the background, capturing its stdout and exit status
( job1 > /tmp/job1.out 2> /dev/null ; echo $? > /tmp/job1.ret ) &
( job2 > /tmp/job2.out 2> /dev/null ; echo $? > /tmp/job2.ret ) &
( job3 > /tmp/job3.out 2> /dev/null ; echo $? > /tmp/job3.ret ) &

# block until all background jobs have finished
wait

# only assign each variable if its job succeeded
if [[ $(cat /tmp/job1.ret) -eq 0 ]] ; then job1_out=$(cat /tmp/job1.out) ; fi
if [[ $(cat /tmp/job2.ret) -eq 0 ]] ; then job2_out=$(cat /tmp/job2.out) ; fi
if [[ $(cat /tmp/job3.ret) -eq 0 ]] ; then job3_out=$(cat /tmp/job3.out) ; fi

# the rest
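
One refinement worth considering: the fixed /tmp paths above will collide if two copies of the script run at once. A sketch of the same pattern using a private mktemp directory instead:

tmpdir=$(mktemp -d)           # private scratch directory for this run
trap 'rm -rf "$tmpdir"' EXIT  # remove it again when the script exits

( job1 > "$tmpdir/job1.out" 2> /dev/null ; echo $? > "$tmpdir/job1.ret" ) &
( job2 > "$tmpdir/job2.out" 2> /dev/null ; echo $? > "$tmpdir/job2.ret" ) &
wait

[[ $(cat "$tmpdir/job1.ret") -eq 0 ]] && job1_out=$(cat "$tmpdir/job1.out")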
C Emery
  • Yep, but this creates temp files, which is not the best experience. I am still searching for alternatives that avoid temp files. – sorin Aug 11 '17 at 10:46

I think parset from GNU Parallel is what you are looking for. In your case it would look like this:

parset "SERVERS ROUTERS NETWORKS KEYPAIRS" :::: cmd1 cmd2 cmd3 cmd4
piarston
  • This may work but I am not willing to rely on tools that are not available by default on Linux or macOS, as I want to keep the solution free of dependencies. – sorin Aug 11 '17 at 17:06
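
For what it's worth, plain bash can do this with no temp files and no extra tools: process substitution starts every command immediately, and each read blocks until its command finishes. A minimal sketch (note that the commands' exit statuses are not captured here):

# all four commands start running in parallel as soon as the fds are opened
exec 3< <(cmd1) 4< <(cmd2) 5< <(cmd3) 6< <(cmd4)

# each assignment blocks until the corresponding command has finished
SERVERS=$(cat <&3)
ROUTERS=$(cat <&4)
NETWORKS=$(cat <&5)
KEYPAIRS=$(cat <&6)

# close the file descriptors again
exec 3<&- 4<&- 5<&- 6<&-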