
I have a web application using Django, and I am using Celery for some asynchronous task processing.

For Celery, I am using RabbitMQ as the broker and Redis as the result backend.

RabbitMQ and Redis are running on the same Ubuntu 14.04 server, hosted on a local virtual machine.

Celery workers are running on remote Windows 10 machines (no workers are running on the Django server).

I have three issues (I think they are somehow related!).

  1. The tasks stay in the 'PENDING' state whether they succeed or fail.
  2. The tasks don't retry when they fail, and I get this error when trying to retry:

reject requeue=False: [WinError 10061] No connection could be made because the target machine actively refused it

  3. The result backend doesn't seem to work.

I am also confused about my settings, and I don't know exactly where these issues might come from.

So here are my settings so far:

my_app/settings.py

# region Celery Settings
CELERY_CONCURRENCY = 1
CELERY_ACCEPT_CONTENT = ['json']
# CELERY_RESULT_BACKEND = 'redis://:C@pV@lue2016@cvc.ma:6379/0'
BROKER_URL = 'amqp://soufiaane:C@pV@lue2016@cvc.ma:5672/cvcHost'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1

CELERY_REDIS_HOST = 'cvc.ma'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_RESULT_BACKEND = 'redis'
CELERY_RESULT_PASSWORD = "C@pV@lue2016"
REDIS_CONNECT_RETRY = True

AMQP_SERVER = "cvc.ma"
AMQP_PORT = 5672
AMQP_USER = "soufiaane"
AMQP_PASSWORD = "C@pV@lue2016"
AMQP_VHOST = "/cvcHost"
CELERYD_HIJACK_ROOT_LOGGER = True
CELERY_HIJACK_ROOT_LOGGER = True
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# endregion

my_app/celery_settings.py

from __future__ import absolute_import
from django.conf import settings
from celery import Celery
import django
import os

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost', backend='redis://:C@pV@lue2016@cvc.ma:6379/0')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

my_app/__init__.py

from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.

from .celery_settings import app as celery_app

my_app\email\tasks.py

from __future__ import absolute_import
from my_app.celery_settings import app

# here I only define the task skeleton because I'm executing this task on remote workers
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        print("x")
    except Exception as exc:
        self.retry(exc=exc)

On the worker side I have one file, 'tasks.py', which has the actual implementation of the task:

Worker\tasks.py

from __future__ import absolute_import
from celery.utils.log import get_task_logger
from celery import Celery


logger = get_task_logger(__name__)
app = Celery('CapValue', broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost', backend='redis://:C@pV@lue2016@cvc.ma:6379/0')

@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        """
        The actual implementation of the task
        """
    except Exception as exc:
        self.retry(exc=exc)
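
For reference, this is roughly how I dispatch the task from Django (a minimal sketch; the job and email values below are placeholders):

from my_app.email.tasks import email_task

job = {'id': 1}                    # placeholder payload
email = 'someone@example.com'      # placeholder address

# dispatch through the locally defined skeleton; sending by registered name
# with app.send_task('email_task', args=(job, email)) would be equivalent
result = email_task.apply_async(args=(job, email))
print(result.id, result.state)     # the state stays 'PENDING' -- this is issue 1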

What I did notice, though, is:

  • When I change the broker settings on my workers to a bad password, I get a 'could not connect to broker' error.
  • When I change the result backend settings on my workers to a bad password, it runs normally as if everything is OK.

What could possibly be causing these problems?

EDIT

On my Redis server, I have already enabled remote connections:

/etc/redis/redis.conf

...
bind 0.0.0.0
...
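
A quick connectivity check like the following (a minimal sketch, using the same host and password as in the settings above) succeeds from both the Django server and the worker machines:

import redis

# same host, port and password as in settings.py above
r = redis.StrictRedis(host='cvc.ma', port=6379, db=0, password='C@pV@lue2016')
print(r.ping())  # prints True, so the Redis server itself is reachable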

Soufiaane
  • Looks like your result backend is not configured correctly. – scytale Feb 22 '16 at 10:27
  • @scytale How is that? – Soufiaane Feb 22 '16 at 12:40
  • Because that's what it looks like. Try querying Redis from the Django and Celery servers. I'm not familiar with how to configure Django/Celery - you have duplicated the Celery configuration in `my_app/settings.py` and `my_app/celery_settings.py`, and you don't have a `celeryconfig.py` (which is usual in stand-alone Celery) - is this the recommended way to do things? What documentation are you using? – scytale Feb 22 '16 at 14:47
  • @scytale 1 - From both the Django and Celery worker machines I ran: `>>> import redis >>> pool = redis.ConnectionPool(host='cvc.ma', port=6379, db=0, password='C@pV@lue2016') >>> r = redis.Redis(connection_pool=pool) >>> r.set('foo', 'bar') True` so the Redis configuration seems to be fine. 2 - Only after duplicating some settings from `my_app/settings.py` to `my_app/celery_settings.py` did I manage to get the Celery workers and the Django server working together. – Soufiaane Feb 22 '16 at 15:07
  • @scytale I am not using any specific documentation; I am just trying to figure things out through the official Celery documentation and here, to make the app work the way I want it to. So maybe it is not the recommended way to do things, and that's why I'm posting this here. – Soufiaane Feb 22 '16 at 15:09
  • Are you seeing the tasks in rabbitmq? Run `sudo rabbitmqctl list_queues -p cvcHost`. I think the result backend doesn't seem to work because your tasks aren't completing. – Adi Krishnan Feb 23 '16 at 11:10
  • @AdiKrishnan The output of the command shows me the queues I had set up, and when I run it again, the tasks in the queues are decremented, and the status reported by the workers is success. – Soufiaane Feb 24 '16 at 22:58
  • Can you try changing the password as @gal-ben-david suggests? It appears that the execution is completing but the results are not getting saved which in turn causes the task to be shown as PENDING. Do your worker logs show any exception? – Adi Krishnan Feb 25 '16 at 10:48
  • Are you sure all ports are open for communication on your servers? Can you check with ufw? – Romeo Mihalcea Feb 29 '16 at 00:50

3 Answers


My guess is that your problem is the password. Your password has @ in it, which could be interpreted as the divider between the user:pass section and the host section.

The tasks stay in PENDING because the workers could not connect to the broker correctly. From Celery's documentation: http://docs.celeryproject.org/en/latest/userguide/tasks.html#pending

PENDING: Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state.
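
If the password is indeed the culprit, one way around it (a sketch, not tested against your setup) is to percent-encode the password before building the URLs, so the extra @ characters are not parsed as the userinfo/host separator:

from urllib.parse import quote  # on Python 2: from urllib import quote

password = quote('C@pV@lue2016', safe='')  # -> 'C%40pV%40lue2016'
BROKER_URL = 'amqp://soufiaane:{0}@cvc.ma:5672/cvcHost'.format(password)
CELERY_RESULT_BACKEND = 'redis://:{0}@cvc.ma:6379/0'.format(password)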

Gal Ben David
  • I did change my passwords to simple ones for testing purposes, and I still have the same problem. If the workers can't connect to the broker, then how are my tasks processed? – Soufiaane Feb 25 '16 at 21:44
  • Let's do some trials. Please try to remove the result backend from the Celery instance and try adding ignore_result = True. http://docs.celeryproject.org/en/latest/userguide/tasks.html#Task.ignore_result – Gal Ben David Feb 26 '16 at 17:55
  • Sorry Gal Ben David, I was away from work. I did try with ignore_result = True and removed the result backend. The worker indicated that results are disabled, and when I check task.state after submitting a task I get an error: AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for' – Soufiaane Mar 05 '16 at 22:01
  • In that case, the "apply" method won't work, because it waits for the execution of the task to finish and returns the returned value, which is disabled at the moment. Anyway, try doing that with "apply_async" and check whether the task runs successfully. If it does, it is obvious that the problem is somehow with the backend server, and we can go forward from that point by isolating it piece by piece. – Gal Ben David Mar 06 '16 at 19:19
  • I wasn't using the apply method; I was using apply_async from the beginning. – Soufiaane Mar 06 '16 at 19:47
  • It's a bit odd; this error shows up when you wait for a result and you have a result backend. Can you post the whole code so I can try to solve it within a running system? – Gal Ben David Mar 08 '16 at 09:03

I had a setup where the 'single instance' servers (the dev and localhost servers) were working, but not the one where the Redis server was a separate machine. The Celery tasks were running correctly, but getting the result was not. I was getting the following error only when trying to fetch the result of a task:

Error 111 connecting to localhost:6379. Connection refused.

What made it work was simply adding this setting to Django:

CELERY_RESULT_BACKEND = 'redis://10.10.10.10:6379/0'

It seems that if this parameter isn't present, Celery defaults to localhost when fetching the results of tasks.
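
A quick way to check which backend Celery actually resolved (a sketch, run from a Django shell on the web server; the app module name is taken from the question):

from my_app.celery_settings import app

# the printed object should be a Redis result backend; a DisabledBackend or an
# AMQP/RPC backend here means CELERY_RESULT_BACKEND is not being picked up
print(app.backend)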

mrmuggles
  • I see the same error even though my settings are like this (I want to use Redis); I get `[2018-02-27 13:18:18,494: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 28.00 seconds... ` – ilhnctn Feb 27 '18 at 13:19
  • I'm using Docker and docker-compose; there is no custom firewall or anything else. Also, the same structure works in another project but not in the current one. I'm missing something but I'm not sure what. @mrmuggles – ilhnctn Feb 27 '18 at 19:58
  • Ah sorry, it seems like it's still trying to connect to localhost instead of the remote server? Also, if you use Redis, other settings may be missing in your configuration files; 'CELERY_RESULT_BACKEND' alone isn't sufficient. You also need to set this one: BROKER_URL = 'redis://100.100.100.1000:6379/0'. Also, when you initialize the app: app = Celery('blcorp', backend='redis') – mrmuggles Feb 27 '18 at 20:14
  • Thank you for your answer; it's solved now, but I'm not sure what was wrong before :) – ilhnctn Feb 28 '18 at 13:13

Adding CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True to my settings resolved this problem for me.

Shaun Kruger