I have a fixture that launches a sub-process during testing.
It causes some issues because, apparently, the process is killed prematurely.
import pytest
from multiprocessing import Process
from uvicorn import run
from main import app
# -- snip --
@pytest.fixture(scope="session")
def external_client():
    """Launch application as an independent process"""
    config = dict(app=app, host="127.0.0.1", port=7001, workers=1)
    p = Process(target=run, kwargs=config)
    p.start()
    yield
    p.kill()
# -- snip --
This process creates PostgreSQL connections, which are used in other fixtures for teardown / cleaning the database (yeah... that doesn't sound right on paper).
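To give an idea, one of those cleanup fixtures looks roughly like this (a simplified sketch; the engine URL, fixture name and table name are placeholders, not my real setup):

import pytest
from sqlalchemy import create_engine, text

# Simplified stand-in for the real engine setup used by the cleanup fixtures
engine = create_engine("postgresql+psycopg2://test:test@127.0.0.1:5432/testdb")

@pytest.fixture()
def clean_users(external_client):
    yield
    # teardown: remove rows created through the running application
    with engine.begin() as conn:
        conn.execute(text("DELETE FROM users"))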
When I extended the tests, I started getting errors like the one below, which occurred inconsistently.
sqlalchemy.exc.DatabaseError: (psycopg2.DatabaseError) error with status PGRES_TUPLES_OK and no message from the libpq
I tried to follow the answer here and dispose of the engines explicitly, but it did not help.
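Concretely, the disposal was added at the end of the teardown (again simplified, same illustrative fixture as above):

@pytest.fixture()
def clean_users(external_client):
    yield
    with engine.begin() as conn:
        conn.execute(text("DELETE FROM users"))
    engine.dispose()  # explicitly close all pooled connections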
I am currently trying to solve this, and one of my ideas is to kill the process only in the last fixture, but I do not know how to order them (other than manually tracking the fixture chain and adding the kill to the last triggered fixture).
So I would like to do something like this:
# -- snip --
@pytest.fixture(scope="session")
def external_client():
    """Launch application as an independent process"""
    config = dict(app=app, host="127.0.0.1", port=7001, workers=1)
    p = Process(target=run, kwargs=config)
    p.start()
    yield p

@pytest.fixture()
def kill_process(external_client):
    yield
    # I want this to run as the very last teardown, after all other test fixtures.
    external_client.kill()
# -- snip --
Is there a way to do this? Or a better way to avoid closing the process prematurely?