
This example works in my dev environment. On Heroku, tasks get queued but are never consumed. Any ideas what I might be doing wrong?

RabbitMQ Dashboard Shows:

Name                              Parameters  Policy  State  Ready  Unacked  Total
1f49ea51a56049f7a68082c6297ea080  Exp D AD    HA      idle   1      0        1
253eb525c95944d2b742f1f112cdc0e5  Exp D AD    HA      idle   1      0        1

Procfile

web: gunicorn hellodjango.wsgi --workers 1
celery: python manage.py celery worker -E --time-limit=1200 --loglevel=ERROR

settings.py

from os import environ
CELERY_RESULT_BACKEND = "amqp"
BROKER_POOL_LIMIT = 0
BROKER_URL = environ.get('CLOUDAMQP_URL', '')
CELERY_TASK_RESULT_EXPIRES = 14400

View

from django.shortcuts import render
from django.http import HttpResponse
from proj.tasks import add_to_count
from proj.models import SampleCount

def test_async(request):
    sc = add_to_count.delay()
    count = SampleCount.objects.all()[0].num
    return HttpResponse("test count: %s sc: %s name: %s" % (count, sc, add_to_count.name))

Model

from django.db import models
# Create your models here.
class SampleCount(models.Model):
    num = models.IntegerField(default=0)

tasks.py

from __future__ import absolute_import
from celery import task
from proj.models import SampleCount

@task(name='proj.tasks')
def add_to_count():
    try:
        sc = SampleCount.objects.get(pk=1)
    except SampleCount.DoesNotExist:
        sc = SampleCount()
    sc.num = sc.num + 2
    sc.save()
    return sc
Carlos Ferreira

2 Answers


I had the same problem, and in my case I think it was something about naming. In your Procfile you are using manage.py, whereas Heroku recommends something like:

worker: celery worker --app=tasks.app
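For the `--app` flag to resolve, the module it names must expose a Celery application instance called `app`. Applied to the question's Procfile, the entry might look like the sketch below; `proj` is an assumption and must match whatever module actually defines your Celery app instance:

```
worker: celery worker --app=proj -E --time-limit=1200 --loglevel=ERROR
```

The key difference from the manage.py invocation is that the worker is started by the celery binary itself and told explicitly where the app lives, rather than relying on Django's management-command wiring.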
hum3

Limit the concurrency of your Celery worker with -c 1.
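Applied to the Procfile from the question, that is a one-flag change (sketch; only `-c 1` is added):

```
celery: python manage.py celery worker -E --time-limit=1200 --loglevel=ERROR -c 1
```

This caps the worker at a single process, which matters on CloudAMQP's free plans because each worker process holds its own broker connections and the connection limit is low.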

Carl Hörberg