While using py.test, I have some tests that run fine with SQLite but hang silently when I switch to PostgreSQL. How would I go about debugging something like that? Is there a "verbose" mode I can run my tests in, or a way to set a breakpoint? More generally, what is the standard plan of attack when pytest stalls silently? I've tried the pytest-timeout plugin and ran the tests with $ py.test --timeout=300, but they still hang with no activity on the screen whatsoever.
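For reference, the comments below ask whether both of pytest-timeout's methods were tried; a minimal sketch of selecting them explicitly (300 s is the value from the question, and the flag spelling assumes a recent pytest-timeout release):

$ py.test --timeout=300 --timeout-method=signal
$ py.test --timeout=300 --timeout-method=thread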

- I'd look for some kind of timeout functionality -- I don't know if such a thing is built in to py.test... – Jesse W at Z - Given up on SE Oct 14 '14 at 00:27
- I'm glad you brought this up, because I forgot to mention that I did install the pytest-timeout module and set it to time out after 6 seconds, but the tests still hang indefinitely. – Hexatonic Oct 14 '14 at 00:32
- Have you tried both the `thread` and `signal` timeout methods? Do they both hang the same? Have you been able to isolate a particular test that hangs under PostgreSQL but not SQLite? – Jesse W at Z - Given up on SE Oct 14 '14 at 17:32
- As Jesse suggests, have you tried the `thread` timeout method of pytest-timeout? If that doesn't help, then investigating with `strace` would be my next step. It might also be worth attaching gdb; on modern Linuxes you'll get to see the Python stack from inside gdb as well as the C stack. – flub Oct 20 '14 at 12:37
- No, I did not actually try these techniques; it sounds like the right approach to finding the problem. I'll let you know how it goes. Thank you. – Hexatonic Apr 28 '15 at 13:47
- @flub @Hexatonic try running with `py.test -m trace --trace ...` to trace Python calls. See answer below. – gaoithe Aug 23 '16 at 10:14
- You can also use the Unix timeout command to enforce a timeout: `timeout DURATION COMMAND`. – gaoithe Aug 23 '16 at 10:15
8 Answers
I ran into the same SQLite/PostgreSQL problem with Flask and SQLAlchemy, similar to Gordon Fierce's answer. However, my solution was different: PostgreSQL is strict about table locks and connections, so explicitly closing the session connection on teardown solved the problem for me.
My working code:
@pytest.yield_fixture(scope='function')
def db(app):
    # app is an instance of a Flask app, _db a SQLAlchemy DB
    _db.app = app
    with app.app_context():
        _db.create_all()

        yield _db

        # Explicitly close DB connection
        _db.session.close()
        _db.drop_all()
Reference: the SQLAlchemy FAQ entry on drop_all()/metadata hangs (see the updated link in the comments below).
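To show the fixture in context, here is a minimal, self-contained sketch of how it might be wired up and consumed. The Product model, the app fixture, and the in-memory SQLite URI are illustrative assumptions (point the URI at your PostgreSQL test database in practice), and the modern @pytest.fixture is used in place of the older @pytest.yield_fixture:

import pytest
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# In the answer above, _db lives in extensions.py; it is defined inline here
# only to keep the sketch self-contained.
_db = SQLAlchemy()

class Product(_db.Model):
    id = _db.Column(_db.Integer, primary_key=True)
    name = _db.Column(_db.String(80))

@pytest.fixture
def app():
    app = Flask(__name__)
    # An in-memory SQLite URI keeps the sketch runnable without external services;
    # substitute your PostgreSQL test database URI here.
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite://'
    _db.init_app(app)
    return app

@pytest.fixture
def db(app):
    _db.app = app
    with app.app_context():
        _db.create_all()

        yield _db

        # The explicit close that prevents the hang under PostgreSQL
        _db.session.close()
        _db.drop_all()

def test_create_product(db):
    db.session.add(Product(name='example'))
    db.session.commit()
    assert Product.query.count() == 1

The key line for the hang is the explicit _db.session.close() before drop_all(), exactly as in the answer above.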

- The newest location of the SQLAlchemy reference is now: http://docs.sqlalchemy.org/en/latest/faq/metadata_schema.html#my-program-is-hanging-when-i-say-table-drop-metadata-drop-all – Mani Jan 09 '17 at 14:07
- Does this solution still work? I am still not getting my tests to run. I have added `db.close()` basically everywhere now. I also found `from sqlalchemy.orm.session import close_all_sessions`, which also did not solve the problem. – mRcSchwering Apr 14 '21 at 18:13
To answer the question "How would I go about debugging something like that?":

Run with py.test -m trace --trace to get a trace of Python calls.

One option (useful for any stuck Unix binary) is to attach to the process using

strace -p <PID>

and see which system call it might be stuck on, or whether it is looping over system calls (e.g. stuck calling gettimeofday).

For more verbose py.test output, install pytest-sugar:

pip install pytest-sugar

and run the tests with py.test --verbose . . .

https://pypi.python.org/pypi/pytest-sugar
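As a concrete illustration of the strace/gdb approach mentioned in the comments (the PID 12345 is a placeholder, and py-bt is only available when your distribution ships the Python gdb helpers):

strace -p 12345        # watch which system call the hung test process is blocked in
gdb -p 12345           # attach gdb; "py-bt" then prints the Python-level stack, "bt" the C stack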

I had a similar problem with pytest and PostgreSQL while testing a Flask app that used SQLAlchemy. It seems pytest has a hard time running a teardown using its request.addfinalizer method with PostgreSQL.
Previously I had:
@pytest.fixture
def db(app, request):
    def teardown():
        _db.drop_all()

    _db.app = app
    _db.create_all()
    request.addfinalizer(teardown)
    return _db
(_db is an instance of SQLAlchemy that I import from extensions.py.)

But if I drop the database every time the database fixture is called:
@pytest.fixture
def db(app, request):
    _db.app = app
    _db.drop_all()
    _db.create_all()
    return _db
Then pytest won't hang after your first test.

- Had the same problem with Flask, pytest & factory-boy. The above solution fixed the problem. – Burnash Jun 26 '15 at 20:21
Without knowing what is breaking in the code, the best approach is to isolate the failing test and set a breakpoint in it to have a look. Note: I use pudb instead of pdb, because it's really the best way to debug Python if you are not using an IDE.
For example, you can add the following to your test file:
import pudb
...
def test_create_product(session):
    pudb.set_trace()
    # Create the Product instance
    # Create a Price instance
    # Add the Product instance to the session.
    ...
Then run it with
py.test -s --capture=no test_my_stuff.py
Now you'll be able to see exactly where the script locks up, and examine the stack and the database at this particular moment of execution. Otherwise it's like looking for a needle in a haystack.
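If pudb is not installed, the same workflow works with the standard library debugger, and on Python 3.7+ the built-in breakpoint() does the same thing; a minimal variation of the test above:

import pdb
...
def test_create_product(session):
    pdb.set_trace()  # or simply: breakpoint()
    ...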

I struggled with this problem for quite some time (though I wasn't using SQLite). The test suite ran fine locally, but failed in CircleCI (Docker).

My problem was ultimately that:

- An object's underlying implementation used threading
- The object's __del__ normally would end the threads
- My test suite wasn't calling __del__ as it should have

I figured I'd add how I tracked this down. The approaches suggested in other answers didn't help here:

- pytest-timeout didn't help; the test hung after completion
  - Invoked via pytest --timeout 5
  - Versions: pytest==6.2.2, pytest-timeout==1.4.2
- Running pytest -m trace --trace or pytest --verbose yielded no useful information either

I ended up having to comment literally everything out, including all conftest.py code and test code, then slowly uncomment/re-comment regions until I identified the root cause. The ultimate solution was a factory fixture that adds a finalizer to call __del__ (a sketch follows below).

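The answer above doesn't include code, so here is a minimal, self-contained sketch of the factory-fixture-plus-finalizer idea; ThreadedWorker is a made-up stand-in for an object whose implementation starts a thread that __del__ is supposed to stop.

import threading
import pytest

class ThreadedWorker:
    """Made-up example of an object whose implementation spawns a thread."""

    def __init__(self):
        self._stop = threading.Event()
        # A non-daemon thread keeps the interpreter (and pytest) alive until it exits.
        self._thread = threading.Thread(target=self._stop.wait)
        self._thread.start()

    def __del__(self):
        # Normally ends the thread; the hang happened because this was never called.
        self._stop.set()
        self._thread.join()

@pytest.fixture
def worker_factory():
    # Factory fixture: track every object a test creates and finalize each one,
    # so no background thread outlives the test session.
    created = []

    def make():
        worker = ThreadedWorker()
        created.append(worker)
        return worker

    yield make

    for worker in created:
        worker.__del__()

def test_worker_starts_thread(worker_factory):
    worker = worker_factory()
    assert worker._thread.is_alive()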
For me, the solution to get rid of a hanging test was to use the pytest plugin pytest-xdist and run the tests in parallel. I am unsure why that solved it; the reason might be that pytest-xdist runs the tests in separate worker processes instead of the main process.
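For reference, a typical pytest-xdist invocation looks like this (the worker count is arbitrary; -n auto starts one worker per CPU):

pip install pytest-xdist
py.test -n 4 ...   # or: py.test -n auto ...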

In my case, the assertion diff was very slow when comparing 4 MB of data after a failed assert, because pytest's assertion rewriting introspects both operands to build the diff.
with open(path, 'rb') as f:
    assert f.read() == data
Fixed by:
with open(path, 'rb') as f:
    eq = f.read() == data
    assert eq
