I have Celery running in a Kubernetes pod. This is my manifest for the Celery deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery
  labels:
    deployment: celery
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: celery
  template:
    metadata:
      labels:
        pod: celery
    spec:
      containers:
      - name: celery
        image: local_celery:latest
        imagePullPolicy: Never
        command: ['celery', '-A', 'proj', 'worker', '-E', '-l', 'info']
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
      terminationGracePeriodSeconds: 25
My Celery configuration in Django settings.py is:
CELERY_TASK_ACKS_LATE = True
CELERY_WORKER_PREFETCH_MULTIPLIER = 1
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'django-db'
CELERY_WORKER_CONCURRENCY = 1
CELERY_TASK_REJECT_ON_WORKER_LOST = True
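For reference, these Django-namespaced settings only take effect because Celery strips the CELERY_ prefix and lowercases the rest when the app is configured with config_from_object('django.conf:settings', namespace='CELERY'). A simplified, Celery-free sketch of that mapping (an illustration of the convention, not Celery's actual code):

```python
# Simplified sketch of how the CELERY_ namespace in Django settings maps to
# Celery's lowercase option names (illustration only, not Celery internals).
settings = {
    'CELERY_TASK_ACKS_LATE': True,
    'CELERY_WORKER_PREFETCH_MULTIPLIER': 1,
    'CELERY_BROKER_URL': 'redis://redis:6379',
}

celery_conf = {
    key[len('CELERY_'):].lower(): value
    for key, value in settings.items()
    if key.startswith('CELERY_')
}

print(celery_conf['task_acks_late'])  # → True
```

A misspelled prefix would be silently ignored by this mapping, so the setting names are worth double-checking.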
When I run this as a simple Django app with Celery and Redis as the message broker, my task gets re-queued into the broker when I press Ctrl-C to initiate a warm shutdown of the worker. But when the same application is deployed to Kubernetes, with Celery, Django, and Redis running in three different pods, my tasks aren't re-queued back to Redis when the Celery pod is gracefully terminated. I am unable to understand why; my Celery settings are unchanged in both cases.
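One difference between the two scenarios (my assumption about what matters here): Ctrl-C in a terminal delivers SIGINT to the worker, while Kubernetes pod termination delivers SIGTERM, and once terminationGracePeriodSeconds (25s above) elapses it follows up with SIGKILL, which cannot be handled at all. A Celery-free stdlib sketch for checking which signal a process actually receives:

```python
import os
import signal

received = []

def record(signum, frame):
    # Record the delivered signal's name, similar to what the Celery
    # worker logs when it begins a warm shutdown.
    received.append(signal.Signals(signum).name)

# Celery treats both SIGTERM and SIGINT as warm-shutdown requests.
signal.signal(signal.SIGTERM, record)
signal.signal(signal.SIGINT, record)

# Kubernetes sends SIGTERM to the container's main process on pod deletion;
# Ctrl-C in a terminal sends SIGINT instead. Simulate the Kubernetes case:
os.kill(os.getpid(), signal.SIGTERM)
print(received)  # → ['SIGTERM']
```

If a shell wrapper or entrypoint runs as PID 1 and does not forward SIGTERM to the worker, the worker never sees the signal and gets SIGKILLed after the grace period, which would explain tasks not being re-queued.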