8

I feel a bit stupid for asking, but it doesn't appear to be in the documentation for RQ. I have a 'failed' queue with thousands of items in it and I want to clear it using the Django admin interface. The admin interface lists them and allows me to delete and re-queue them individually but I can't believe that I have to dive into the django shell to do it in bulk.

What have I missed?

Joe
  • If anyone looking for a solution specific to **django-rq>=2.0.0**. [This link](https://github.com/rq/rq/issues/964) can help. – Saurav Kumar Nov 19 '19 at 10:14

6 Answers

17

The Queue class has an empty() method that can be accessed like:

import django_rq
q = django_rq.get_failed_queue()
q.empty()

However, in my tests, that only cleared the failed list key in Redis, not the job keys themselves. So your thousands of jobs would still occupy Redis memory. To prevent that from happening, you must remove the jobs individually:

import django_rq
q = django_rq.get_failed_queue()
while True:
    job = q.dequeue()
    if not job:
        break
    job.delete()  # Will delete key from Redis

As for having a button in the admin interface, you'd have to change the django-rq/templates/django-rq/jobs.html template, which extends admin/base_site.html and doesn't seem to leave any room for customization.
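If you do want a one-click "clear all" without patching django-rq's template, one sketch (not django-rq API; the function and view names below are illustrative) is to factor the dequeue-and-delete loop above into a helper and wire it to an admin-only view:

```python
def drain_queue(q):
    """Dequeue and delete every job on an rq Queue; returns how many were removed."""
    n = 0
    while True:
        job = q.dequeue()
        if job is None:
            break
        job.delete()  # removes the rq:job:<id> key from Redis
        n += 1
    return n

# Example wiring (assumes Django and django-rq are installed and configured):
#
#   import django_rq
#   from django.contrib.admin.views.decorators import staff_member_required
#   from django.shortcuts import redirect
#
#   @staff_member_required
#   def empty_failed(request):
#       drain_queue(django_rq.get_failed_queue())
#       return redirect('/django-rq/')  # or wherever django-rq is mounted
```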

augustomen
  • That second snippet is huge; we couldn't figure out why we were having a memory leak with leftover keys until seeing that. For anyone who has already dequeued the jobs like I had (and therefore lost access to the keys via python-rq), use conn = redis.from_url(redis_url); conn.keys() to get them back for deletion. – Charles Offenbacher Dec 24 '14 at 18:42
  • @CharlesOffenbacher I'm not quite sure if I understand the snippets above. Are you saying that we need to run both of the code snippets above but run the second snippet first and the first snippet second? Thanks. – Jim Dec 23 '15 at 23:14
  • @Robert I believe you should use the second snippet only. The first code snippet only clears the list that keeps track of failed jobs, not the actual failed jobs themselves. – Charles Offenbacher Dec 24 '15 at 22:28
  • This is no longer working in **django-rq==2.1.0**. There was a change, as mentioned in [this post](https://github.com/rq/rq/issues/964). I tried to follow the new code changes but am getting errors. I have also put a comment on that post about proper usage. Hope to get a solution soon. – Saurav Kumar Nov 13 '19 at 13:09
  • I've created a Django management command to do bulk deletes of failed Django RQ jobs. The GitHub gist is [here](https://gist.github.com/jbarham/de89f85e900bf5f3adce3f0e4d65c5d9). – jbarham Feb 03 '21 at 08:12
2

The redis-cli allows FLUSHDB, which is great for my local environment since I generate a bazillion jobs.

Once I have a working Django integration I'll update this answer. Just adding my $0.02.
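For completeness, a Python equivalent of redis-cli's FLUSHDB, wrapped in a helper so the warning travels with it. The `flush_rq_db` name is illustrative, and note the caveat raised in the comments below: this wipes every key in the Redis DB, not just failed jobs.

```python
def flush_rq_db(conn=None):
    """Wipe EVERY key in the current Redis database: RQ queues, job
    hashes, and anything else stored there. Only sensible on a
    throwaway local environment."""
    if conn is None:
        import django_rq  # assumes django-rq is installed and configured
        conn = django_rq.get_connection('default')
    conn.flushdb()
```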

Marc
  • Won't this affect non-failed jobs? – Joe Oct 23 '13 at 09:23
  • Any idea why existing jobs would jump back into the queue once I restart my django and redis server after calling flushdb? – andyzinsser Jan 28 '14 at 21:43
  • @Joe in general yes, current jobs will go away. – Marc Jan 31 '14 at 03:12
  • @andyzinsser - I have had Python scripts running that would queue more jobs (one option). Also, you could have a stack of Python to call (FIFO backlog) - with a clear queue more marches in... RQ is somewhat a blackbox, when I wanted to clear **100% for sure** I would stop all Python processes (restart) and when the service was down flush and clear. Remember, you can "query" for jobs and processes to learn more about what is happening. I learn more everyday. – Marc Jan 31 '14 at 03:12
1

You can empty any queue by name using the following code sample:

import django_rq

queue = "default"
q = django_rq.get_queue(queue)
q.empty()

or even create a Django management command for it:

import django_rq

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument("-q", "--queue", type=str)

    def handle(self, *args, **options):
        q = django_rq.get_queue(options.get("queue"))
        q.empty()

wowkin2
0

As @augusto-men's method seems not to work anymore, here is another solution:

You can use the raw connection to delete the failed jobs. Just iterate over the rq:job keys and check each job's status.

from django_rq import get_connection
from rq.job import Job

# delete failed jobs
con = get_connection('default')
for key in con.keys('rq:job:*'):
    job_id = key.decode().replace('rq:job:', '')
    job = Job.fetch(job_id, connection=con)
    if job.get_status() == 'failed':
        con.delete(key)
con.delete('rq:failed:default')  # reset failed jobs registry

Jac0lius
0

The other answers are outdated since the RQ updates that introduced Registries.

Now you need to loop through the failed job registry and delete the jobs from there. The same approach works for any particular Registry as well.

import django_rq
from rq.exceptions import NoSuchJobError
from rq.registry import FailedJobRegistry

failed_registry = FailedJobRegistry('default', connection=django_rq.get_connection())

for job_id in failed_registry.get_job_ids():
    try:
        failed_registry.remove(job_id, delete_job=True)
    except NoSuchJobError:
        # Failed jobs expire from the registry, so the job
        # may already be gone by the time we try to remove it.
        pass

Source

Dougyfresh
0

You can empty a queue from the command line with:

rq empty [queue-name]

Running rq info will list all the queues.
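The same can be scripted from Python. A hedged sketch: the helper below mirrors running `rq empty` over several queues at once; `empty_queues` is an illustrative name, and the commented usage assumes recent rq versions where `Queue.all()` and `Queue.empty()` exist and a Redis server is running.

```python
def empty_queues(queues):
    """Call .empty() on each rq Queue and return the names that were emptied."""
    emptied = []
    for q in queues:
        q.empty()
        emptied.append(q.name)
    return emptied

# Usage sketch (assumes a running Redis and rq installed):
#
#   from redis import Redis
#   from rq import Queue
#
#   conn = Redis.from_url('redis://localhost:6379/0')
#   empty_queues(Queue.all(connection=conn))
```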

Ben Sturmfels