I have an R analysis composed of three parts (`partA`, `partB`, and `partC`). I submit each part to SLURM (e.g. `sbatch partA`), and each part is parallelized via `#SBATCH --array=1-1500`. The parts run in serial, so I need to wait for one to finish before starting the next. Right now I'm manually starting each job, but that's not a great solution.
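For reference, each part script has roughly this shape (aside from the array directive, the `#SBATCH` options and the R entry point below are just placeholders):

```bash
#!/bin/bash
#SBATCH --array=1-1500          # each part fans out into 1500 array tasks

# run one chunk of the R analysis, indexed by the array task ID
Rscript partA.R "$SLURM_ARRAY_TASK_ID"
```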
I would like to automate the three sbatch calls. For example:

1. `sbatch partA`
2. when `partA` is done, `sbatch partB`
3. when `partB` is done, `sbatch partC`
I used this solution to get the job ID of `partA`, and pass that to `strigger` to accomplish step 2 above. However, I'm stuck at that point, because I don't know how to get the job ID of `partB` from `strigger`. Here's what my code looks like:
#!/bin/bash
# step 1: sbatch partA
partA_ID=$(sbatch --parsable partA.sh)
# step 2: sbatch partB
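# --fini: fire the trigger when job $partA_ID finishes; strigger then runs partB.batch, which submits partB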
strigger --set --jobid=$partA_ID --fini --program=/path/to/partB.batch
# step 3: sbatch partC
... ?
How do I complete step 3?
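For completeness, `partB.batch` is just a thin wrapper that submits `partB`, roughly:

```bash
#!/bin/bash
# partB.batch: run by strigger once partA finishes; submits partB
partB_ID=$(sbatch --parsable partB.sh)
# the job ID of partB only exists inside this script, which is where I get stuck for step 3
```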