
I am using http://python-rq.org/ to queue and execute tasks on Heroku worker dynos. These are long-running tasks and occasionally I need to cancel them in mid-execution. How do I do that from Python?

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.enqueue(count_words_at_url, 'http://nvie.com')

and later in a separate process I want to do:

from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.revoke_all() # or something

Thanks!

Charles Offenbacher

3 Answers

If you have the job instance at hand, simply:

job.cancel()

Or if you know the job's ID (the hash):

from redis import Redis
from rq import cancel_job

cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c', connection=Redis())

http://python-rq.org/contrib/

But that just removes the job from the queue; I don't know whether it will kill it if it's already executing.

You could have the job record its start time, then check itself periodically and raise an exception (self-destruct) once it has run too long; see the sketch below.
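
A minimal sketch of that self-destruct idea, assuming the task polices its own runtime; MAX_RUNTIME, fetch_chunks, process, and WorkAborted are illustrative names, not part of rq:

import time

MAX_RUNTIME = 600  # seconds; pick whatever limit suits the job

class WorkAborted(Exception):
    pass

def count_words_at_url(url):
    started = time.monotonic()
    for chunk in fetch_chunks(url):  # hypothetical helper yielding work units
        if time.monotonic() - started > MAX_RUNTIME:
            # Raising inside an rq task marks the job as failed.
            raise WorkAborted('job exceeded %d seconds' % MAX_RUNTIME)
        process(chunk)  # hypothetical per-chunk work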

For manual, ad-hoc-style death: if you have redis-cli installed, you can do something drastic like flushing all queues and jobs:

$ redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> exit

I'm still digging around the documentation to try to find how to make a precision kill.

Not sure if that helps anyone since the question is already 18 months old.

John Mee

I think the most common solution is to have the worker spawn another thread/process to do the actual work, and then periodically check the job metadata. To kill the task, set a flag in the metadata and then have the worker kill the running thread/process.
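
A minimal sketch of that pattern (the meta key cancel_requested and the helper functions are arbitrary names, not rq API): the worker function runs the real work in a child process and polls the job's metadata; job.refresh() re-reads the job, including its meta, from Redis, so a flag set elsewhere shows up on the next poll:

import multiprocessing
import time

from redis import Redis
from rq import get_current_job
from rq.job import Job

def do_actual_work():
    time.sleep(600)  # placeholder for the real long-running work

def cancellable_task():
    job = get_current_job()
    # Run the real work in a child process so this loop stays free to poll.
    proc = multiprocessing.Process(target=do_actual_work)
    proc.start()
    while proc.is_alive():
        job.refresh()  # re-read job data (including meta) from Redis
        if job.meta.get('cancel_requested'):
            proc.terminate()
            break
        time.sleep(1)

def request_cancel(job_id):
    # From any other process: fetch the job by ID and set the flag.
    job = Job.fetch(job_id, connection=Redis())
    job.meta['cancel_requested'] = True
    job.save_meta()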

sheridp
  • Can you please elaborate on how the worker may access the job metadata? Are you sure the job metadata is accessible to the worker dynamically, i.e. that changes in the job metadata are reflected in real time to the worker? – Shahar Gino Oct 27 '21 at 06:08

From the docs:

You can use send_stop_job_command() to tell a worker to immediately stop a currently executing job. A job that’s stopped will be sent to FailedJobRegistry.

from redis import Redis
from rq.command import send_stop_job_command

redis = Redis()
send_stop_job_command(redis, job_id)  # job_id: the ID string of the running job
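
Tying this back to the question's snippet, a small sketch (assuming an rq version recent enough to ship rq.command): enqueue returns the job, and its .id is the string you pass from the other process:

from redis import Redis
from rq import Queue
from rq.command import send_stop_job_command
from my_module import count_words_at_url

redis = Redis()
q = Queue(connection=redis)
job = q.enqueue(count_words_at_url, 'http://nvie.com')

# Later, from a separate process that knows the job's ID:
send_stop_job_command(redis, job.id)
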
Eric