
I'm running a bunch of shell scripts with parallel -a my_scripts bash. At some point I decided I've run enough of them and would like to stop spawning new jobs and simply let all the existing jobs finish. Put another way, I want to kill the parent process without killing the children.

There seem to be ways of controlling termination when first launching GNU parallel (for example, if I know in advance I only want to run x jobs, I can use the --halt now,success=x argument), but I couldn't find how to control GNU parallel once it is already running.
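
For example, if I knew the count up front, I could launch with something along these lines (the count 100 is just illustrative):

parallel --halt now,success=100 -a my_scripts bash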

Sure, I can just Ctrl+C to kill parallel and rerun the jobs that were aborted, but I thought there might be a smarter way.

Yibo Yang
  • There may be a way to do it with **GNU Parallel**, but another option might be to use Redis like this... https://stackoverflow.com/a/22220082/2836621 – Mark Setchell Jul 17 '17 at 17:34
  • **Task Spooler** may be another possibility... http://www.ubuntugeek.com/task-spooler-personal-job-scheduler.html – Mark Setchell Jul 17 '17 at 23:01

2 Answers


Update:

If you have a newer version of parallel (>= 20190322), then as per @Bowi's comment below, "since 2019, it is no longer SIGTERM, it's SIGHUP instead:

kill -HUP $PARALLEL_PID

SIGTERM now terminates the children without letting them finish."

See: https://superuser.com/q/1660301/662242
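
If you do not have the PID at hand, something like this should do (it assumes a single running instance whose process name shows up as parallel, the same assumption the killall example quoted below makes):

PARALLEL_PID=$(pgrep -x parallel)
kill -HUP "$PARALLEL_PID"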

Original answer:

I figured it out. The answer is to simply send SIGTERM to the parent parallel process (a plain kill of its PID does this, since SIGTERM is kill's default signal). parallel then responds with the following (in this case I have 4 jobs running):

parallel: SIGTERM received. No new jobs will be started.
parallel: Waiting for these 4 jobs to finish. Send SIGTERM again to stop now.

I dug it out of the man page:

COMPLETE RUNNING JOBS BUT DO NOT START NEW JOBS
If you regret starting a lot of jobs you can simply break GNU parallel, but if you want to make sure you do not have half-completed jobs you should send the signal SIGTERM to GNU parallel:

killall -TERM parallel

This will tell GNU parallel to not start any new jobs, but wait until the currently running jobs are finished before exiting.
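
For completeness, a by-PID variant might look something like this (the pgrep lookup is an assumption and presumes a single running instance named parallel):

kill -TERM "$(pgrep -x parallel)"   # finish running jobs, start no new ones
# A second SIGTERM stops the remaining jobs immediately, as the message above says.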

Yan Foto
  • SIGTERM is very different from SIGKILL. Please fix your answer. SIGKILL will not do what you want. – Zan Lynx Jul 18 '17 at 02:16
  • Please let us know how you feel it could have been documented better, so it would have been easier for you to find. – Ole Tange Jul 18 '17 at 06:12
  • @OleTange thanks for making this amazing tool available. Perhaps this could be mentioned in the `Termination` section of the [tutorial](https://www.gnu.org/software/parallel/parallel_tutorial.html#Termination), given its common use case? That's the first search result when I googled "GNU parallel terminate". – Yibo Yang Jul 18 '17 at 14:16
  • **Note**: Since 2019, it is no longer `SIGTERM`, it's `SIGHUP` instead. `SIGTERM` now terminates the children without letting them finish. https://superuser.com/q/1660301/662242 – Bowi Jul 08 '21 at 08:16

Small addition: if you use parallel (<= 20190322) inside a simple script.sh that looks something like this:

#!/bin/bash
# Run ./something.exe on 0..N-1 (zero-padded), one job slot per CPU thread.
cd "$(dirname "$0")"

seq -w 0 $(( $1 - 1 )) | parallel -j+0 --progress ./something.exe {}

you can look up the PID of the child process of script.sh (using pgrep -P $scriptsh_pid, where $scriptsh_pid is the PID of script.sh) and then use

kill -TERM $child_process_pid

If script.sh is the only thing I'm running in my shell, killall -TERM perl also works (GNU parallel is implemented in Perl).
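
Putting the lookup together, a rough sketch (the pgrep -f pattern is my assumption and presumes that nothing else matches script.sh and that parallel is the only child still running under it):

scriptsh_pid=$(pgrep -f script.sh)              # PID of the wrapper script
child_process_pid=$(pgrep -P "$scriptsh_pid")   # its parallel child
kill -TERM "$child_process_pid"                 # finish running jobs, start no new ones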