I have a series of servers running multiple workers. These are long-running tasks, requiring anywhere from 10 minutes to 36 hours, so I'd like to avoid prefetching if at all possible and have each worker pick up a single new task only after it finishes its current one.
I'm using the celeryd init.d service and have the following in /etc/default/celeryd on the worker server:

CELERYD_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1
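
For comparison, here is roughly how I understand the same options would be set directly on the app object instead of the init-script environment file; this is only a sketch, assuming the old-style setting names CELERYD_PREFETCH_MULTIPLIER and CELERY_ACKS_LATE, and I haven't switched to it yet:

# Sketch only: the same options applied directly to the Celery app object,
# using what I believe are the old-style setting names.
from work_project.celery import app

app.conf.update(
    CELERYD_PREFETCH_MULTIPLIER=1,  # reserve only one task per worker at a time
    CELERY_ACKS_LATE=True,          # acknowledge after the task finishes, not on receipt
)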
However, if I open a Python shell and run:
from work_project.celery import app
inspector = app.control.inspect()
inspector.stats()
I get this in the dict output:
...
u'prefetch_count': 4,
...
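
To see the value for every node at once, I also loop over the stats() reply; a small sketch, assuming prefetch_count sits at the top level of each worker's stats dict as it appears in my output:

# Sketch: print prefetch_count for each responding worker.
from work_project.celery import app

stats = app.control.inspect().stats() or {}
for worker, info in stats.items():
    print(worker, info.get('prefetch_count'))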
I'm using RabbitMQ as the broker and Redis as the result backend. I suspect this prefetching is causing several workers to sit idle after finishing their initial task, because other workers have already reserved the remaining tasks in their local queues. For example, I currently have two servers running nine workers in total. At the start of a 20-task batch, all nine were running concurrently; now, roughly 90 minutes later, only six workers are active.
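
To check that suspicion, I compare what each worker is currently executing against what it has merely reserved (received/prefetched but not started), using the inspect API; a rough sketch:

# Sketch: active vs. reserved (prefetched but not started) tasks per worker.
from work_project.celery import app

inspector = app.control.inspect()
active = inspector.active() or {}      # tasks currently executing, keyed by worker
reserved = inspector.reserved() or {}  # tasks prefetched but not yet started

for worker in sorted(set(active) | set(reserved)):
    print(worker,
          'active:', len(active.get(worker, [])),
          'reserved:', len(reserved.get(worker, [])))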