
Whenever we try to see queue info using the `rq info -u <<redis_url>>` command, we get lots of extra entries like this -

a331d42408099f7e5ec9c5864 (None None): ?
c352af4c2385cdf320d7b74897 (None None): ?
134174815b44c44d706417eb0 (None None): ?
7b3314c8696c483b3a0a08a27 (None None): ?
15f1bb4bc78f1465076d638b5e (None None): ?

They do not belong to any queue; they are just hanging around. My questions are -
What are they?
How do I clear them from Redis?

More details: Python jobs are queued in Redis. rq version 1.5.0, Python 3.x.

Koushik Roy

1 Answer


The entries you see appear to be zombie workers. There can be different reasons for them to show up when `rq info` is queried. One possible cause is a job taking longer than allowed, so its horse (the forked child process that actually runs the job) is killed. The worker process in turn becomes a zombie (it sounds like a scary movie).

The discussion regarding the "zombie workers" predates the release of v1.6.0, and to the best of my knowledge the issue was resolved from that point on. The latest version as of this answer is v1.10.1, and I would imagine a happy ending to this story if you could update to a more recent version.

l'L'l
  • 44,951
  • 10
  • 95
  • 146
  • Very nice answer, this is what I was looking for. It makes sense. Unfortunately we cannot update rq. Is there an alternate way (like restarting Redis) to clean them? They are increasing in production. – Koushik Roy Jul 29 '22 at 09:44
  • @KoushikRoy: There are a couple of scripts ([1](https://github.com/rq/rq/issues/787#issuecomment-289239830), [2](https://github.com/rq/rq/issues/787#issuecomment-337801230)) shown within the GitHub discussion that you could probably adapt fairly easily to suit your needs. You could perhaps create a cron job and run the script at a regular interval to clean out the stale entries. I haven't tested any of the scripts mentioned, although it's likely going to be your only option since you can't update `rq`. – l'L'l Jul 30 '22 at 04:47
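  • For what it's worth, the cleanup scripts in that thread boil down to the same idea: a live worker keeps its `rq:worker:<name>` hash alive via heartbeats, so a member of the `rq:workers` registry set whose hash no longer exists is a zombie entry that can be removed. A minimal sketch along those lines (untested in production; the key names assume rq's defaults, and `prune_stale_workers` is a hypothetical helper name, not part of rq's API):

    ```python
    # Hypothetical cleanup sketch: prune entries from rq's "rq:workers"
    # registry set whose per-worker hash ("rq:worker:<name>") has already
    # expired. Key names assume rq's default Redis layout.

    def prune_stale_workers(conn):
        """conn is any redis.Redis-like client (smembers/exists/srem)."""
        removed = []
        for member in conn.smembers('rq:workers'):
            key = member.decode() if isinstance(member, bytes) else member
            # A live worker refreshes its hash via heartbeats; if the
            # hash is gone, the remaining registry member is stale.
            if not conn.exists(key):
                conn.srem('rq:workers', member)
                removed.append(key)
        return removed

    # Usage (hypothetical):
    #   import redis
    #   print(prune_stale_workers(redis.Redis()))
    ```

    Run from cron, this should keep the registry from growing without needing to restart Redis, which would also drop your queued jobs. – l'L'l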