
I am getting the exception below while trying to use multiprocessing with Flask-SQLAlchemy.

sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
[12/Aug/2019 18:09:52] "GET /api/resources HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1244, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 552, in do_execute
    cursor.execute(statement, parameters)
psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq

Without multiprocessing the code works perfectly, but when I add multiprocessing as below, I run into this issue.

worker = multiprocessing.Process(target=<target_method_which_has_business_logic_with_DB>, args=(data,), name='PROCESS_ID', daemon=False)
worker.start()
return Response("Request Accepted", status=202)

I see an answer to a similar question on SO (https://stackoverflow.com/a/33331954/8085047), which suggests using engine.dispose(), but in my case I am using db.session directly, not creating the engine and scope manually.
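For context, my setup is just the standard Flask-SQLAlchemy pattern, so there is no explicit Engine object in my own code to dispose of. A minimal sketch of the setup (the names and connection URI are illustrative, not my real code):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:pass@localhost/mydb"  # illustrative
db = SQLAlchemy(app)

def target_method(data):
    # all DB access goes through the session the extension manages;
    # the engine is created by the extension itself (reachable as db.engine)
    db.session.execute(db.text("SELECT 1"))  # stand-in for the real business logic
    db.session.commit()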

Please help me resolve this issue. Thanks!

Lakshman Battini
    Sounds like the worker (or something else) is closing the connection; https://codewithoutrules.com/2018/09/04/python-multiprocessing/ might be relevant to help understand what's going on behind the scenes – Sam Mason Aug 13 '19 at 14:54

2 Answers


I had the same issue. Following Sam's link helped me solve it.

Before, I had this (not working):

from multiprocessing import Pool
with Pool() as pool:
    pool.map(f, [arg1, arg2, ...])

This works for me:

from multiprocessing import get_context
with get_context("spawn").Pool() as pool:
    pool.map(f, [arg1, arg2, ...])
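
For reference, the same fix should apply to a bare Process like the one in the question. With the default fork start method on Linux, the child inherits the parent's already-open database connection (the same socket), and the two processes then corrupt each other's wire-protocol state, which is exactly the libpq error above. A spawn child starts a fresh interpreter and opens its own connection. A minimal sketch, assuming the question's target_method and data:

import multiprocessing

# "spawn" starts a fresh interpreter, so the child opens its own DB
# connection instead of inheriting the parent's socket
ctx = multiprocessing.get_context("spawn")
worker = ctx.Process(target=target_method, args=(data,), name='PROCESS_ID', daemon=False)
worker.start()

One caveat with spawn: the target function must be importable from the top level of a module, because the child process re-imports your code rather than inheriting it.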
FriedrichSal

The answer from dibrovsd@github was really useful for me. If you are using a preforking server such as uWSGI or Gunicorn, this should help you too.

Posting his comment here for reference:

Found it. This happens when uwsgi (or gunicorn) starts up and multiple workers are forked from the first process.
If the first process handles a request during startup, it opens a database connection, and that connection is then forked into the next process. On the database side, of course, no new connection is opened, so the forked worker ends up with a broken connection.
You have to specify lazy: true / lazy-apps: true (uwsgi) or preload_app = False (gunicorn).
In that case the additional workers are not forked; each one starts up on its own and opens its own connections.

Refer to link: https://github.com/psycopg/psycopg2/issues/281#issuecomment-985387977
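
In practice the change looks like this (a sketch; the file names and the rest of your configuration will differ):

# uwsgi.ini -- workers load the app themselves instead of being
# forked from an already-initialized master process
[uwsgi]
lazy-apps = true

# gunicorn.conf.py -- keep application preloading off (False is the default)
preload_app = False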

lecranek