When running Django/Celery/RabbitMQ on a production server, some tasks are sent and consumed correctly. However, RabbitMQ starts using up all the CPU after processing is done. I believe this is related to the following report:
RabbitMQ on EC2 Consuming Tons of CPU
In that thread, it is suggested to set these config values (sketched in settings form after the list):
- CELERY_IGNORE_RESULT
- CELERY_AMQP_TASK_RESULT_EXPIRES
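For reference, this is how those values would normally be set, as a minimal sketch assuming old-style django-celery configuration in settings.py; the one-hour expiry is just an example value:

```python
# settings.py -- minimal sketch, assuming old-style (pre-4.0) django-celery
# setting names, which is what the linked thread refers to.

CELERY_IGNORE_RESULT = True             # don't store task results at all
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600  # auto-delete AMQP result queues after 1 hour (example value)
```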
I forked and customized the celery-haystack package to set both of those values when calling apply_async(), but it seems to have had no effect.
I think Celery is automatically creating a large number of UUID-named queues (one per task) to store results, but I can't seem to stop it.
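My understanding is that a task marked with ignore_result shouldn't create a result queue at all. Here's a quick sketch of what I mean (update_index is a hypothetical placeholder, not the celery-haystack task), using the documented task-level option with the old-style decorator:

```python
from celery.task import task

@task(ignore_result=True)  # no result should be written to the AMQP result backend
def update_index(obj_id):
    # ... do the work; since results are ignored, no per-task
    # result queue should be created for this task
    pass
```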
Any ideas?