
I am running Snakemake on the login node of a cluster that uses the SLURM job queuing system.
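For reference, the invocation looks roughly like this (a simplified sketch; the job count and sbatch options are placeholders, not my exact command):

    # submit each rule as its own SLURM job; the master process stays on the login node
    snakemake --jobs 100 --cluster sbatch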

Unfortunately, if a SLURM job times out or is killed with scancel, the Snakemake process inside the job is killed as well and therefore never deletes its output files.

The master Snakemake process, however, stays alive in these circumstances, so I would like it to delete the output files of the failed child jobs. Is there an option for that, or some hack that achieves this?
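To illustrate the kind of hack I would also accept: a custom jobscript that traps SIGTERM and removes the outputs itself, along these lines. This is an untested sketch; it assumes SLURM delivers SIGTERM before SIGKILL (which depends on the cluster's KillWait setting and how scancel is invoked), and the output path and command are hypothetical:

    #!/bin/bash
    # Hypothetical custom jobscript (e.g. passed via --jobscript).
    # On timeout or scancel, SLURM normally sends SIGTERM first and
    # SIGKILL only after the KillWait grace period has elapsed.
    OUTPUTS="results/sample1.txt"  # hypothetical output file of the rule
    trap 'rm -f $OUTPUTS; exit 1' TERM

    # Run the actual job in the background and wait on it, so that
    # bash can execute the trap while the command is still running.
    my_rule_command > "$OUTPUTS" &
    wait $!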

  • Take a look here: https://stackoverflow.com/questions/52500725/snakemake-hangs-when-cluster-slurm-cancelled-a-job – Maarten-vd-Sande Jan 28 '21 at 20:01
  • Thank you @Maarten-vd-Sande. That helped me solve the issue the person in that question had, but my issue is different. It is apparently also a bug in Snakemake: https://github.com/snakemake/snakemake/issues/627. Anyway, if someone has an idea or experience of how to work around this until it gets resolved, I would be very thankful. – Sebastian Schmidt Jan 29 '21 at 05:55
  • I made a pull request for this issue now; let's hope it gets merged soon: https://github.com/snakemake/snakemake/pull/857 – Sebastian Schmidt Jan 29 '21 at 11:22

0 Answers