I have a long-running task that updates some SQLAlchemy objects. A session is opened at the start of the task, updates are made along the way, and the transaction is committed at the end. The problem is that the task runs so long that the connection has closed (timed out, "gone away", whatever you want to call it) before the commit ever happens, so the commit fails and the whole task fails with it.
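Roughly, the task is structured like this; `MyModel`, `do_expensive_work`, and the connection/broker URLs are illustrative stand-ins for my real code:

```python
from celery import Celery
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class MyModel(Base):  # stand-in for my real mapped class
    __tablename__ = "my_model"
    id = Column(Integer, primary_key=True)
    status = Column(String(50))

engine = create_engine("mysql://user:pass@host/db")  # illustrative URL
Session = sessionmaker(bind=engine)
app = Celery("tasks", broker="redis://localhost:6379/0")  # illustrative broker

def do_expensive_work(record_id):
    ...  # placeholder for the hours-long processing
    return "done"

@app.task
def long_task(record_ids):
    session = Session()
    try:
        # The first query checks a connection out of the pool, and the
        # open transaction then holds it for the entire life of the task.
        records = session.query(MyModel).filter(MyModel.id.in_(record_ids)).all()
        for record in records:
            record.status = do_expensive_work(record.id)  # hours pass in here
        session.commit()  # by now the server has usually dropped the connection
    finally:
        session.close()
```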
This seems like exactly the right way to write to a DB for short tasks or non-Celery work, but it is clearly a problem when a task runs this long.
Is there some other recommended pattern? Should the Celery task avoid touching the SQLAlchemy objects during the long-running work altogether, and instead accumulate the results in some plain object whose data is used to update the actual SQLAlchemy objects only at the end of the task? (A sketch of what I mean follows below.) That is the only solution I have come up with, so I would like to know whether there are others, or whether my idea has problematic implications of its own.
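Concretely, my idea would look something like this, reusing the illustrative names from the sketch above (`Session.get` is the SQLAlchemy 1.4+ API; older versions would use `session.query(MyModel).get(...)`):

```python
@app.task
def long_task_deferred_writes(record_ids):
    # All of the slow work happens against plain Python data, with no
    # session (and therefore no pooled connection) held open.
    pending = {}  # record id -> computed status
    for record_id in record_ids:
        pending[record_id] = do_expensive_work(record_id)

    # A fresh session is opened only for the write at the very end, so
    # the transaction lives for seconds instead of hours.
    session = Session()
    try:
        for record_id, status in pending.items():
            record = session.get(MyModel, record_id)
            record.status = status
        session.commit()
    finally:
        session.close()
```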