
I am working on a Django app locally that needs to take a CSV file as input and run some analysis on it. I am running Celery, RabbitMQ, and the web server locally. When I import the file, I see the following error on the Celery server:

[2015-12-11 16:58:53,906: WARNING/MainProcess] celery@Joes-MBP ready.
[2015-12-11 16:59:11,068: ERROR/MainProcess] Task program_manager.tasks.analyze_list_import_program[db22de16-b92f-4220-b2bd-5accf484c99a] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/Users/joefusaro/rl_proto2/venv/lib/python2.7/site-packages/billiard/pool.py", line 1175, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).

I am not sure how to troubleshoot this further; if it will help I have copied the relevant code from program_manager/tasks.py:

from __future__ import absolute_import

import csv
import rollbar
from celery import shared_task
from celery.utils.log import get_task_logger

from qscore.models import QualityScore
from integrations.salesforce.prepare import read_csv
from qscore.quality_score import QualityScoreCalculator


logger = get_task_logger(__name__)

@shared_task
def analyze_list_import_program(program):
    program.status = 'RUN'
    program.save()

    df = read_csv(program.csv_file.file)
    try:
        qs = program.get_current_quality_score()
        qs_calc = QualityScoreCalculator(df, qs)
        qscore_data = qs_calc.calculate()
        QualityScore.objects.filter(id=qs.id).update(**qscore_data)
    except Exception as e:
        rollbar.report_exc_info()
        program.status = 'ERROR'
    else:
        program.status = 'COMPL'
    finally:
        program.save()
Joe Fusaro
  • I really doubt this has anything to do with Celery. Signal 11 means segmentation fault. Try executing the code separately to isolate the issue (see the sketch below). You can read more about segmentation faults here: http://www.cyberciti.biz/tips/segmentation-fault-on-linux-unix.html – station Dec 11 '15 at 17:29
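For instance, to take Celery out of the picture entirely, the task body can be run synchronously in a Django shell. A minimal sketch; the Program model is assumed here to live in program_manager.models:

# Run inside `python manage.py shell` to execute the task body
# in the current process, bypassing the Celery worker entirely.
from program_manager.models import Program  # assumed model location
from program_manager.tasks import analyze_list_import_program

program = Program.objects.first()  # any program with a CSV attached

# Calling the task function directly (or via .apply()) runs it
# synchronously; if it segfaults here too, Celery is not the cause.
analyze_list_import_program(program)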

3 Answers


This can be solved by importing Python packages inside the individual task functions instead of at the top of tasks.py.

I removed every import at the top of my tasks.py except the Celery app itself (from <project>.celery import app), and moved the remaining imports into the individual task functions. And it worked!
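A minimal sketch of that restructuring, based on the task in the question (the <project> placeholder and the exact set of imports depend on your project):

from __future__ import absolute_import

# Only the Celery app is imported at module level.
from <project>.celery import app


@app.task
def analyze_list_import_program(program):
    # Deferring these imports means they happen inside the forked
    # worker process rather than in the parent at module load time.
    import rollbar
    from qscore.models import QualityScore
    from integrations.salesforce.prepare import read_csv
    from qscore.quality_score import QualityScoreCalculator

    program.status = 'RUN'
    program.save()
    # ... rest of the task body unchanged ...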

neic

Just adding my part to this:

I was receiving a very similar issue, but my problem occurred when I switched from an Intel-based chip to an Apple Silicon (M1) chip.

Here's what I did to make my system work. Change in the Dockerfile (add --platform=linux/amd64):

FROM --platform=linux/amd64 <base_image>

Change in docker-compose.yml (add platform: linux/amd64):

db:
    image: postgres:12-alpine
    platform: linux/amd64
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - 5435:5432

Reference: https://www.reddit.com/r/django/comments/o43k2b/developing_django_on_apple_m1/

Sajal Sharma

The issue is that your Celery task is attempting to unpickle/deserialize the actual Django object program. Pass program_id as a parameter to your task and refetch the object inside the task itself.
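A minimal sketch of that change, again assuming the model lives in program_manager.models:

from celery import shared_task

from program_manager.models import Program  # assumed model location


@shared_task
def analyze_list_import_program(program_id):
    # Look the object up fresh inside the task instead of pickling
    # the full Django model instance through the broker.
    program = Program.objects.get(id=program_id)
    program.status = 'RUN'
    program.save()
    # ... run the analysis as before ...

The caller then enqueues it with analyze_list_import_program.delay(program.id) instead of passing the instance.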

hedleyroos