I am running snakemake on the login node of a cluster that uses the Slurm job queuing system.
Unfortunately, if a Slurm job times out, or if it is killed with scancel, the snakemake process inside the job does not delete its output files, since it gets killed as well.
The master snakemake process, however, stays alive in these circumstances, so I would like it to delete the output files of the failed child jobs. Is there an option for that, or some hack that achieves this?
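For context, the hack I have been sketching is a status script for the `--cluster-status` option (assuming I understand that hook correctly: snakemake calls the script with the external job id and expects one of `success`, `failed`, or `running` on stdout), so that the master process notices timed-out or cancelled jobs. The exact set of sacct state strings below is my assumption from the Slurm docs, not something snakemake prescribes:

```python
#!/usr/bin/env python3
"""Sketch of a Slurm status script for snakemake's --cluster-status hook.

snakemake invokes it as `status.py <jobid>` and reads "success",
"failed", or "running" from stdout.
"""
import subprocess
import sys

# States in which the job may still produce valid output.
RUNNING_STATES = {"PENDING", "CONFIGURING", "RUNNING", "SUSPENDED", "COMPLETING"}


def classify(state: str) -> str:
    """Map a raw sacct State string to a snakemake status word."""
    parts = state.split()
    state = parts[0] if parts else ""  # "CANCELLED by 12345" -> "CANCELLED"
    if state == "COMPLETED":
        return "success"
    # An empty state can mean the job is not in the accounting DB yet,
    # so treat it as still running rather than failing it prematurely.
    if not state or state in RUNNING_STATES:
        return "running"
    # TIMEOUT, CANCELLED, FAILED, NODE_FAIL, OUT_OF_MEMORY, ...
    return "failed"


def slurm_state(jobid: str) -> str:
    """Query sacct for the job's State; the first line is the allocation itself."""
    out = subprocess.run(
        ["sacct", "-j", jobid, "--format=State", "--noheader", "--parsable2"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    return lines[0].strip() if lines else ""


if __name__ == "__main__" and len(sys.argv) > 1:
    print(classify(slurm_state(sys.argv[1])))
```

My hope is that invoking snakemake with something like `--cluster "sbatch ..." --cluster-status ./status.py` would make the master process treat a timed-out or scancel-led job as a failed rule and remove its output files, the way it does for locally failed rules, but I have not confirmed this is the intended mechanism.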