Is it possible to get the results of a test (i.e. whether all assertions have passed) in a tearDown() method? I'm running Selenium scripts, and I'd like to do some reporting from inside tearDown(), but I don't know if this is possible.
-
What kind of reporting? What exactly are you trying to do? – Falmarri Dec 11 '10 at 00:10
-
For instance, your test produces intermediate files (that are normally cleaned in tearDown) and you want to collect them if the test fails. – anatoly techtonik Dec 03 '12 at 09:39
-
The name of the current test can be retrieved with unittest.TestCase.id(). So in tearDown you can check self.id(). – gaoithe Jun 13 '16 at 11:23
15 Answers
As of March 2022 this answer is updated to support Python versions between 3.4 and 3.11 (including the newest development Python version). The classification of errors / failures is the same as that used in the output of unittest:
- It works without any modification of code before tearDown().
- It correctly recognizes the skipIf() and expectedFailure decorators.
- It is also compatible with pytest.
Code:

import unittest

class MyTest(unittest.TestCase):

    def tearDown(self):
        if hasattr(self._outcome, 'errors'):
            # Python 3.4 - 3.10 (These two methods have no side effects)
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
        else:
            # Python 3.11+
            result = self._outcome.result
        ok = all(test != self for test, text in result.errors + result.failures)

        # Demo output: (print short info immediately - not important)
        if ok:
            print('\nOK: %s' % (self.id(),))
        for typ, errors in (('ERROR', result.errors), ('FAIL', result.failures)):
            for test, text in errors:
                if test is self:
                    # the full traceback is in the variable `text`
                    msg = [x for x in text.split('\n')[1:]
                           if not x.startswith(' ')][0]
                    print("\n\n%s: %s\n %s" % (typ, self.id(), msg))
If you don't need the exception info, the second half can be removed. If you also want the tracebacks, use the whole variable text instead of msg. The only thing it cannot recognize is an unexpected success in an expectedFailure block.
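For reference, a minimal sketch of the trimmed variant that only checks pass/fail (this shortened version is my illustration, not part of the original answer):

import unittest

class MyTest(unittest.TestCase):

    def tearDown(self):
        if hasattr(self._outcome, 'errors'):
            # Python 3.4 - 3.10
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
        else:
            # Python 3.11+
            result = self._outcome.result
        # True if the current test recorded neither an error nor a failure
        ok = all(test != self for test, text in result.errors + result.failures)
        print('%s: %s' % ('OK' if ok else 'NOT OK', self.id()))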
Example test methods:
def test_error(self):
    self.assertEqual(1 / 0, 1)

def test_fail(self):
    self.assertEqual(2, 1)

def test_success(self):
    self.assertEqual(1, 1)
Example output:
$ python3 -m unittest test
ERROR: q.MyTest.test_error
ZeroDivisionError: division by zero
E
FAIL: q.MyTest.test_fail
AssertionError: 2 != 1
F
OK: q.MyTest.test_success
.
======================================================================
... skipped the usual output from unittest with tracebacks ...
...
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)
Complete code including expectedFailure decorator example
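For reference, a hypothetical test method using the expectedFailure decorator, of the kind the linked complete example covers (this snippet is illustrative, not taken from that code):

import unittest

class MyTest(unittest.TestCase):

    @unittest.expectedFailure
    def test_expected_failure(self):
        # unittest reports this as an expected failure, not as FAIL
        self.assertEqual(2, 1)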
EDIT: When I updated this solution to Python 3.11, I dropped everything related to old Python < 3.4 and also many minor notes.

-
It is the best solution I found so far after 2 days of continuous surfing. – owgitt Jan 02 '17 at 17:25
-
@ShanthaDodmane: Thanks. I found this solution also after 2 days :-) of reading the Python git repository, to verify that it is correct, but too late to get any attention here. – hynekcer Jan 02 '17 at 21:57
-
This really ought to be the accepted answer, it's far more complete and accurate. – Topperfalkon Sep 03 '18 at 11:25
-
Note that with `pytest --pdb`, `self._outcome` can be `None`. If you just want to know whether the last test failed, use something like `last_test_failed = self._outcome and any(exc_info for test_case, exc_info in self._outcome.errors)` – Lekensteyn Jan 03 '19 at 18:20
-
With Python 3.6, I use `if any(error for test, error in self._outcome.errors): ...`. Here's an even terser version (that I didn't test): `if any([*zip(*self._outcome.errors)][1]): ...`. – Mathieu CAROFF Nov 26 '20 at 09:03
-
I wonder why you need `exc_list[-1][0] is self` in `list2reason`? This statement is False when there is a subtest and I'm not sure if I could remove it without any consequence. – Bankde Sep 06 '21 at 14:57
-
@Bankde You can remove that filter for Python >= 3.4. It was important for Python 2.7 because the test results were not cleaned after a test. The same error would remain in `exc_list` also after success and it must be filtered. If you run a `subTest` then you can get its parameters by `exc_list[-1][0].params.maps` with new Python. – hynekcer Sep 22 '21 at 12:08
-
Note that the value of `ok` will be incorrect if the test that failed was using `subTest`. In this case it may be better to check the length of `result.errors + result.failures`. – hlongmore Sep 02 '22 at 20:00
-
In 3.11, this gives me ```E AttributeError: 'TestCaseFunction' object has no attribute 'errors'``` – ebeezer Jan 06 '23 at 20:52
If you take a look at the implementation of unittest.TestCase.run, you can see that all test results are collected in the result object (typically a unittest.TestResult instance) passed as an argument. No result status is left in the unittest.TestCase object.
So there isn't much you can do in the unittest.TestCase.tearDown method unless you mercilessly break the elegant decoupling of test cases and test results with something like this:
import unittest

class MyTest(unittest.TestCase):
    currentResult = None  # Holds last result object passed to run method

    def setUp(self):
        pass

    def tearDown(self):
        ok = self.currentResult.wasSuccessful()
        errors = self.currentResult.errors
        failures = self.currentResult.failures
        print ' All tests passed so far!' if ok else \
            ' %d errors and %d failures so far' % \
            (len(errors), len(failures))

    def run(self, result=None):
        self.currentResult = result  # Remember result for use in tearDown
        unittest.TestCase.run(self, result)  # call superclass run method

    def test_onePlusOneEqualsTwo(self):
        self.assertTrue(1 + 1 == 2)  # Succeeds

    def test_onePlusOneEqualsThree(self):
        self.assertTrue(1 + 1 == 3)  # Fails

    def test_onePlusNoneIsNone(self):
        self.assertTrue(1 + None is None)  # Raises TypeError

if __name__ == '__main__':
    unittest.main()
This works for Python 2.6 - 3.3 (modified for new Python below).

-
This works when running directly, but can cause this with `nosetests`: https://stackoverflow.com/questions/11980375/getting-pythons-nosetests-results-in-a-teardown-method – Hugo Jun 05 '14 at 06:09
-
Try `print(self.currentResult)` at the end of `tearDown` and at the end of `run` for this code snippet. For tests with `F`, the `failures` count increments for `print` in `run` but not for `tearDown` it seems. Was this intended? I would want to know in `tearDown` if the unit test that is being "tear down" failed or succeeded. – user3290525 Apr 13 '18 at 16:26
-
Can we not use `super().run(result)` instead of `unittest.TestCase.run(self, result)`? The first one is the more generic and Pythonic way. – Premkumar chalmeti May 10 '21 at 05:19
CAVEAT: I have no way of double checking the following theory at the moment, being away from a dev box. So this may be a shot in the dark.
Perhaps you could check the return value of sys.exc_info() inside your tearDown() method; if it returns (None, None, None), you know the test case succeeded. Otherwise, you could use the returned tuple to interrogate the exception object.
See the sys.exc_info documentation.
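A sketch of that check in tearDown (purely illustrative; note the comments below report that sys.exc_info() no longer carries this information on Python 3):

import sys
import unittest

class MyTest(unittest.TestCase):

    def tearDown(self):
        # Per the comments below, exc_info() is cleared outside the handling
        # except block on Python 3, so treat this only as a sketch of the idea.
        exc_type, exc_value, _ = sys.exc_info()
        if exc_type is None:
            print('tearDown: no exception recorded for %s' % self.id())
        else:
            print('tearDown: %s raised %s: %s' % (self.id(), exc_type.__name__, exc_value))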
Another, more explicit approach is to write a method decorator that you could slap onto all your test case methods that require this special handling. This decorator can intercept assertion exceptions and, based on that, modify some state in self, allowing your tearDown method to learn what's up (a hypothetical implementation follows the usage example below).
@assertion_tracker
def test_foo(self):
    # some test logic
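The answer leaves the decorator itself to the reader; a hypothetical assertion_tracker might look like this (the name test_passed and the overall shape are my assumptions, not part of the original answer):

import functools

def assertion_tracker(func):
    """Hypothetical decorator: records on `self` whether the test body raised."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self.test_passed = False
        result = func(self, *args, **kwargs)
        self.test_passed = True  # only reached when no exception was raised
        return result
    return wrapper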

-
Unfortunately this doesn't make the distinction between "errors" and "failures" -- http://docs.python.org/library/unittest.html#organizing-test-code – Purrell Jun 25 '12 at 18:03
-
Didn't work for me. `sys.exc_info` was always 3 Nones, even in tests with failures. Maybe a difference with Python 3 unittest? – hwjp Oct 30 '13 at 13:01
-
Using this method, if you are using nose.plugins.skip.SkipTest to mark tests as skipped, skipped tests will be reported as errors, since you `raise SkipTest`. This is probably not what you want in this case. – Clandestine Mar 10 '16 at 23:36
-
For the curious, this does NOT work for recent python versions. I believe that it breaks at python3.4 (ish). – mgilson Aug 30 '17 at 00:57
-
@mgilson This solution by `exc_info()` has been broken since Python 3.0 (the latest pre-release of 3.0 or rather 3.1.0 stable in Jun 2009) because the original `sys.exc_info()` is accessible only in the innermost `try: ... except: ...` block in Python 3. It is automatically cleared outside. Module unittest in Python 3.2 to 3.7-dev saves `exc_info()` before leaving the "except" block or converts the important part of exc_info to string in Python 3.0, 3.1. (I verified it now on all aforesaid Python versions.) – hynekcer Aug 31 '17 at 08:07
If you are using Python 2 you can use the method _resultForDoCleanups. This method returns a TextTestResult object:

<unittest.runner.TextTestResult run=1 errors=0 failures=0>
You can use this object to check the result of your tests:
def tearDown(self):
    if self._resultForDoCleanups.failures:
        ...
    elif self._resultForDoCleanups.errors:
        ...
    else:
        ...  # Success
If you are using Python 3 you can use _outcomeForDoCleanups:
def tearDown(self):
    if not self._outcomeForDoCleanups.success:
        ...

-
`._outcomeForDoCleanups` has gone in 3.4. There is a thing called `._outcome`, but it doesn't seem to expose the test pass/fail state... – hwjp Apr 19 '14 at 22:22
-
Accessing "private" members is generally frowned upon, and this API can change at any moment. Also: sometimes the `failures` attribute doesn't appear to be set, causing the tearDown to throw an `AttributeError`. – Pieter Dec 06 '16 at 12:00
-
Bingo... this code is specific to `unittest`. It is NOT compatible with Py.test. – Pieter Dec 06 '16 at 12:31
It depends what kind of reporting you'd like to produce.
In case you'd like to perform some action on failure (such as generating a screenshot), instead of using tearDown(), you may achieve that by overriding failureException.
For example:
@property
def failureException(self):
    class MyFailureException(AssertionError):
        def __init__(self_, *args, **kwargs):
            screenshot_dir = 'reports/screenshots'
            if not os.path.exists(screenshot_dir):
                os.makedirs(screenshot_dir)
            self.driver.save_screenshot('{0}/{1}.png'.format(screenshot_dir, self.id()))
            return super(MyFailureException, self_).__init__(*args, **kwargs)
    MyFailureException.__name__ = AssertionError.__name__
    return MyFailureException
-
This is clever, but I feel like it is a little sketchy. First, `failureException` should accept an argument (see https://docs.python.org/2.7/library/unittest.html#unittest.TestCase.addTypeEqualityFunc). Second, it is documented to be an `Exception` whereas you've replaced it with a function. In principle, that should be OK as long as no other code actually _relies_ on the fact that `failureException` is an exception class (i.e. `raise self.failureException` will now start failing where it would have succeeded before). – mgilson Aug 30 '17 at 11:26
Following on from amatellanes' answer, if you're on Python 3.4, you can't use _outcomeForDoCleanups. Here's what I managed to hack together:
def _test_has_failed(self):
    for method, error in self._outcome.errors:
        if error:
            return True
    return False
It is yucky, but it seems to work.
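A possible way to wire this helper into tearDown (the class, test method, and print call below are illustrative assumptions, not part of the original answer):

import unittest

class MyTest(unittest.TestCase):

    def _test_has_failed(self):
        # helper from the answer above
        for method, error in self._outcome.errors:
            if error:
                return True
        return False

    def tearDown(self):
        if self._test_has_failed():
            # failure-only reporting goes here, e.g. saving a Selenium screenshot
            print('FAILED: %s' % self.id())

    def test_example(self):
        self.assertEqual(1, 2)  # deliberately fails to trigger the reporting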

Here's a solution for those of us who are uncomfortable using solutions that rely on unittest internals:
First, we create a decorator that will set a flag on the TestCase instance to determine whether or not the test case failed or passed:
import unittest
import functools

def _tag_error(func):
    """Decorates a unittest test function to add failure information to the TestCase."""
    @functools.wraps(func)
    def decorator(self, *args, **kwargs):
        """Add failure information to `self` when `func` raises an exception."""
        self.test_failed = False
        try:
            func(self, *args, **kwargs)
        except unittest.SkipTest:
            raise
        except Exception:  # pylint: disable=broad-except
            self.test_failed = True
            raise  # re-raise the error with the original traceback.
    return decorator
This decorator is actually pretty simple. It relies on the fact that unittest detects failed tests via exceptions. As far as I'm aware, the only special exception that needs to be handled is unittest.SkipTest (which does not indicate a test failure). All other exceptions indicate test failures, so we mark them as such when they bubble up to us.
We can now use this decorator directly:
class MyTest(unittest.TestCase):
    test_failed = False

    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    @_tag_error
    def test_something(self):
        self.fail('Bummer')
It's going to get really annoying writing this decorator all the time. Is there a way we can simplify? Yes there is!* We can write a metaclass to handle applying the decorator for us:
class _TestFailedMeta(type):
    """Metaclass to decorate test methods to append error information to the TestCase instance."""

    def __new__(cls, name, bases, dct):
        for name, prop in dct.items():
            # assume that TestLoader.testMethodPrefix hasn't been messed with -- otherwise, we're hosed.
            if name.startswith('test') and callable(prop):
                dct[name] = _tag_error(prop)
        return super(_TestFailedMeta, cls).__new__(cls, name, bases, dct)
Now we apply this to our base TestCase subclass and we're all set:
import six  # For python2.x/3.x compatibility

class BaseTestCase(six.with_metaclass(_TestFailedMeta, unittest.TestCase)):
    """Base class for all our other tests.

    We don't really need this, but it demonstrates that the
    metaclass gets applied to all subclasses too.
    """

class MyTest(BaseTestCase):

    def tearDown(self):
        super(MyTest, self).tearDown()
        print(self.test_failed)

    def test_something(self):
        self.fail('Bummer')
There are likely a number of cases that this doesn't handle properly. For example, it does not correctly detect failed subtests or expected failures. I'd be interested in other failure modes of this, so if you find a case that I'm not handling properly, let me know in the comments and I'll look into it.
*If there wasn't an easier way, I wouldn't have made _tag_error a private function ;-)

-
Did you try the KeyboardInterrupt exception, the expectedFailure decorator and [Distinguishing test iterations using subtests](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests)? If you don't want to break those, you probably must use more internal names and monkey patch some unittest code. I vote up because basic features of unittest will probably work in every future Python version. Debugging of exceptions is more complicated if they are re-raised by a decorator. My solution should support every current unittest feature without explicitly enumerating anything. – hynekcer Aug 30 '17 at 13:21
-
@hynekcer -- You're right about `KeyboardInterrupt` -- it should be `except Exception` rather than a bare except. And I agree, this _might_ not work properly with subtests. It also might not work properly with expected failures and a few other cases. However, it does work robustly for a wide range of normal cases. – mgilson Aug 31 '17 at 11:45
-
The only verified problem is the `expectedFailure` decorator, which seems very hard to fix in your case. Subtests are problematic only with the expectedFailure decorator, but relatively easy to handle by intercepting `expectedFailure`. Keyboard interrupt works, and the result of test_failed will never be seen after it anyway. I tried your way last year before writing my solution, but I sometimes work on other test decorators and it was terrible to debug them in combination. On the other hand, it is trivial to verify that a data structure in an undocumented attribute is the same in a new Python. ... – hynekcer Aug 31 '17 at 12:17
-
It works with the Python master branch two weeks before Python 3.7 alpha 1. So it has at least two and a half years until 3.8 stable. Changes in unittest since 3.5 are minimal. – hynekcer Aug 31 '17 at 12:31
-
@hynekcer -- Yeah, fixing `expectedFailure` is a bugger without relying on the implementation. If you rely on the implementation, it's as simple as checking `func` and test-case for a truthy `__unittest_expecting_failure__` attribute and _not_ setting the failed flag in that case. But of course, the entire point of the answer was to _avoid_ relying on these implementation details :-) – mgilson Aug 31 '17 at 13:06
-
In Python 2.7 to 3.3: expectedFailure works as a decorator that raises either `_ExpectedFailure(sys.exc_info())` or `_UnexpectedSuccess`. Both should be caught in `_tag_error`, and the order of decorators then matters: `@_tag_error` must be before `@expectedFailure`. Fortunately, if the metaclass `_TestFailedMeta` is used, the right order is guaranteed. The internal exception `unittest._ShouldStop` should be caught and ignored in Python 3.4+ due to subTest. – hynekcer Aug 31 '17 at 16:17
-
I like expectedFailure because it lets me write a test before a very hard fix, or use a temporary `expectedFailureIf(some_package.__version__ >= unsupported_dev_version and QUIET)` (my simple function that returns a decorator), because a forgotten unittest.skip could be dangerous later. It is also nice to see that a number of the temporarily silenced tests end up getting fixed along the way. – hynekcer Aug 31 '17 at 16:17
I think the proper answer to your question is that there isn't a clean way to get test results in tearDown(). Most of the answers here involve accessing some private parts of the Python unittest module and in general feel like workarounds. I'd strongly suggest avoiding these, since test results and test cases are decoupled and you should not work against that.
If you are in love with clean code (like I am), I think what you should do instead is instantiate your TestRunner with your own TestResult class. Then you could add whatever reporting you want by overriding these methods (a sketch follows the list below):
addError(test, err)
Called when the test case test raises an unexpected exception. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s errors attribute, where formatted_err is a formatted traceback derived from err.
addFailure(test, err)
Called when the test case test signals a failure. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).
The default implementation appends a tuple (test, formatted_err) to the instance’s failures attribute, where formatted_err is a formatted traceback derived from err.
addSuccess(test)
Called when the test case test succeeds.
The default implementation does nothing.
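A minimal sketch of that approach, using the stock TextTestRunner and its documented resultclass argument (the class name and the print-based reporting below are illustrative, not from the original answer):

import unittest

class ReportingResult(unittest.TextTestResult):
    """Illustrative TestResult subclass that hooks per-test reporting."""

    def addSuccess(self, test):
        super(ReportingResult, self).addSuccess(test)
        print('PASS: %s' % test.id())

    def addFailure(self, test, err):
        super(ReportingResult, self).addFailure(test, err)
        print('FAIL: %s' % test.id())

    def addError(self, test, err):
        super(ReportingResult, self).addError(test, err)
        print('ERROR: %s' % test.id())

if __name__ == '__main__':
    # resultclass is a documented argument of TextTestRunner
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=ReportingResult))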

Python 2.7.
You can also get the result after unittest.main():
t = unittest.main(exit=False)
print t.result
Or use suite:
suite.addTests(tests)
result = unittest.result.TestResult()
suite.run(result)
print result

Inspired by scoffey’s answer, I decided to take mercilessness to the next level, and have come up with the following.
It works in both vanilla unittest and when run via nosetests, and also works in Python versions 2.7, 3.2, 3.3, and 3.4 (I did not specifically test 3.0, 3.1, or 3.5, as I don’t have these installed at the moment, but if I read the source code correctly, it should work in 3.5 as well):
#! /usr/bin/env python

from __future__ import unicode_literals

import logging
import os
import sys
import unittest


# Log file to see squawks during testing
formatter = logging.Formatter(fmt='%(levelname)-8s %(name)s: %(message)s')
log_file = os.path.splitext(os.path.abspath(__file__))[0] + '.log'
handler = logging.FileHandler(log_file)
handler.setFormatter(formatter)
logging.root.addHandler(handler)
logging.root.setLevel(logging.DEBUG)
log = logging.getLogger(__name__)


PY = tuple(sys.version_info)[:3]


class SmartTestCase(unittest.TestCase):

    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly = self._feedErrorsToResult
            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.errors)
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        if (3, 0, 0) <= PY < (3, 4, 0):
            return bool(self._outcomeForDoCleanups.failures)
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)

    def tearDown(self):
        if PY >= (3, 4, 0):
            self._feedErrorsToResultEarly(self.result, self._outcome.errors)


class TestClass(SmartTestCase):

    def test_1(self):
        self.assertTrue(True)

    def test_2(self):
        self.assertFalse(True)

    def test_3(self):
        self.assertFalse(False)

    def test_4(self):
        self.assertTrue(False)

    def test_5(self):
        self.assertHerp('Derp')

    def tearDown(self):
        super(TestClass, self).tearDown()
        log.critical('---- RUNNING {} ... -----'.format(self.id()))
        if self.errored:
            log.critical('----- ERRORED -----')
        elif self.failed:
            log.critical('----- FAILED -----')
        else:
            log.critical('----- PASSED -----')


if __name__ == '__main__':
    unittest.main()
When run with unittest:
$ ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
$ cat ./test.log
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
When run with nosetests:
$ nosetests ./test.py -v
test_1 (test.TestClass) ... ok
test_2 (test.TestClass) ... FAIL
test_3 (test.TestClass) ... ok
test_4 (test.TestClass) ... FAIL
test_5 (test.TestClass) ... ERROR
$ cat ./test.log
CRITICAL test: ---- RUNNING test.TestClass.test_1 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_2 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_3 ... -----
CRITICAL test: ----- PASSED -----
CRITICAL test: ---- RUNNING test.TestClass.test_4 ... -----
CRITICAL test: ----- FAILED -----
CRITICAL test: ---- RUNNING test.TestClass.test_5 ... -----
CRITICAL test: ----- ERRORED -----
Background
I started with this:
class SmartTestCase(unittest.TestCase):

    """Knows its state (pass/fail/error) by the time its tearDown is called."""

    def run(self, result):
        # Store the result on the class so tearDown can behave appropriately
        self.result = result.result if hasattr(result, 'result') else result
        super(SmartTestCase, self).run(result)

    @property
    def errored(self):
        return self.id() in [case.id() for case, _ in self.result.errors]

    @property
    def failed(self):
        return self.id() in [case.id() for case, _ in self.result.failures]

    @property
    def passed(self):
        return not (self.errored or self.failed)
However, this only works in Python 2. In Python 3, up to and including 3.3, the control flow appears to have changed a bit: Python 3’s unittest package processes results after calling each test’s tearDown() method… This behavior can be confirmed if we simply add an extra line (or six) to our test class:
@@ -63,6 +63,12 @@
             log.critical('----- FAILED -----')
         else:
             log.critical('----- PASSED -----')
+        log.warning(
+            'ERRORS THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.errors))
+        log.warning(
+            'FAILURES THUS FAR:\n'
+            + '\n'.join(tc.id() for tc, _ in self.result.failures))
 
 
 if __name__ == '__main__':
Then just rerun the tests:
$ python3.3 ./test.py -v
test_1 (__main__.TestClass) ... ok
test_2 (__main__.TestClass) ... FAIL
test_3 (__main__.TestClass) ... ok
test_4 (__main__.TestClass) ... FAIL
test_5 (__main__.TestClass) ... ERROR
[…]
…and you will see that you get this as a result:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Now, compare the above to Python 2’s output:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_1 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_2 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_3 ... -----
CRITICAL __main__: ----- PASSED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_4 ... -----
CRITICAL __main__: ----- FAILED -----
WARNING __main__: ERRORS THUS FAR:
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
CRITICAL __main__: ---- RUNNING __main__.TestClass.test_5 ... -----
CRITICAL __main__: ----- ERRORED -----
WARNING __main__: ERRORS THUS FAR:
__main__.TestClass.test_5
WARNING __main__: FAILURES THUS FAR:
__main__.TestClass.test_2
__main__.TestClass.test_4
Since Python 3 processes errors/failures after the test is torn down, we can’t readily infer the result of a test using result.errors or result.failures in every case. (I think it probably makes more sense architecturally to process a test’s results after tearing it down; however, it does make the perfectly valid use-case of following a different end-of-test procedure depending on a test’s pass/fail status a bit harder to meet…)
Therefore, instead of relying on the overall result object, we can reference _outcomeForDoCleanups as others have already mentioned, which contains the result object for the currently running test and has the necessary errors and failures attributes, which we can use to infer the test’s status by the time tearDown() has been called:
@@ -3,6 +3,7 @@
 from __future__ import unicode_literals
 
 import logging
 import os
+import sys
 import unittest
@@ -16,6 +17,9 @@
 log = logging.getLogger(__name__)
 
 
+PY = tuple(sys.version_info)[:3]
+
+
 class SmartTestCase(unittest.TestCase):
 
     """Knows its state (pass/fail/error) by the time its tearDown is called."""
@@ -27,10 +31,14 @@
 
     @property
     def errored(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
 
     @property
     def failed(self):
+        if PY >= (3, 0, 0):
+            return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
 
     @property
This adds support for the early versions of Python 3.
As of Python 3.4, however, this private member variable no longer exists; instead, a new (albeit also private) method was added: _feedErrorsToResult.
This means that for versions 3.4 (and later), if the need is great enough, one can — very hackishly — force one’s way in to make it all work again like it did in version 2…
@@ -27,17 +27,20 @@
     def run(self, result):
         # Store the result on the class so tearDown can behave appropriately
         self.result = result.result if hasattr(result, 'result') else result
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly = self._feedErrorsToResult
+            self._feedErrorsToResult = lambda *args, **kwargs: None  # no-op
         super(SmartTestCase, self).run(result)
 
     @property
     def errored(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.errors)
         return self.id() in [case.id() for case, _ in self.result.errors]
 
     @property
     def failed(self):
-        if PY >= (3, 0, 0):
+        if (3, 0, 0) <= PY < (3, 4, 0):
             return bool(self._outcomeForDoCleanups.failures)
         return self.id() in [case.id() for case, _ in self.result.failures]
 
@@ -45,6 +48,10 @@
     def passed(self):
         return not (self.errored or self.failed)
 
+    def tearDown(self):
+        if PY >= (3, 4, 0):
+            self._feedErrorsToResultEarly(self.result, self._outcome.errors)
+
 
 class TestClass(SmartTestCase):
 
@@ -64,6 +71,7 @@
         self.assertHerp('Derp')
 
     def tearDown(self):
+        super(TestClass, self).tearDown()
         log.critical('---- RUNNING {} ... -----'.format(self.id()))
         if self.errored:
             log.critical('----- ERRORED -----')
…provided, of course, all consumers of this class remember to call super(…, self).tearDown() in their respective tearDown methods…
Disclaimer: This is purely educational, don’t try this at home, etc. etc. etc. I’m not particularly proud of this solution, but it seems to work well enough for the time being, and is the best I could hack up after fiddling for an hour or two on a Saturday afternoon…

-
+1, but: you should not write to the object self.result directly, otherwise you get the failures from test methods reported twice, or you may eventually lose errors from tearDown if something goes wrong. A new temporary result object used only for the errors from the test method is a solution. (I did not see this end of the screen until I began to write an answer.) – hynekcer Sep 21 '16 at 01:21
The name of the current test can be retrieved with the unittest.TestCase.id() method. So in tearDown you can check self.id().
The example shows how to:
- find out whether the current test has an error or failure in the errors or failures list
- print the test id with PASS, FAIL or EXCEPTION
The tested example here works with scoffey's nice example.
def tearDown(self):
    result = "PASS"
    #### Find and show result for current test
    # I did not find any nicer/neater way of comparing self.id() with the
    # test id stored in the errors or failures lists :-7
    id = str(self.id()).split('.')[-1]
    # id() e.g. tup[0]: <__main__.MyTest testMethod=test_onePlusNoneIsNone>
    # str(tup[0]): "test_onePlusOneEqualsThree (__main__.MyTest)"
    # str(self.id()) = __main__.MyTest.test_onePlusNoneIsNone
    for tup in self.currentResult.failures:
        if str(tup[0]).startswith(id):
            print ' test %s failure:%s' % (self.id(), tup[1])
            ## DO TEST FAIL ACTION HERE
            result = "FAIL"
    for tup in self.currentResult.errors:
        if str(tup[0]).startswith(id):
            print ' test %s error:%s' % (self.id(), tup[1])
            ## DO TEST EXCEPTION ACTION HERE
            result = "EXCEPTION"
    print "Test:%s Result:%s" % (self.id(), result)
Example of result:
python run_scripts/tut2.py 2>&1
E test __main__.MyTest.test_onePlusNoneIsNone error:Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
Test:__main__.MyTest.test_onePlusNoneIsNone Result:EXCEPTION
F test __main__.MyTest.test_onePlusOneEqualsThree failure:Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
Test:__main__.MyTest.test_onePlusOneEqualsThree Result:FAIL
Test:__main__.MyTest.test_onePlusOneEqualsTwo Result:PASS
.
======================================================================
ERROR: test_onePlusNoneIsNone (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 80, in test_onePlusNoneIsNone
self.assertTrue(1 + None is None) # raises TypeError
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
======================================================================
FAIL: test_onePlusOneEqualsThree (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "run_scripts/tut2.py", line 77, in test_onePlusOneEqualsThree
self.assertTrue(1 + 1 == 3) # fails
AssertionError: False is not true
----------------------------------------------------------------------
Ran 3 tests in 0.001s
FAILED (failures=1, errors=1)

-
Put a try catch around the currentResult access. Under what circumstances are you getting this? Test ending normally or abort/exception? What unittest library and version? – gaoithe Aug 31 '20 at 10:33
Tested on Python 3.7 - sample code for getting information about failing assertions, but it can give an idea of how to deal with errors:
def tearDown(self):
    if self._outcome.errors[1][1] and hasattr(self._outcome.errors[1][1][1], 'actual'):
        print(self._testMethodName)
        print(self._outcome.errors[1][1][1].actual)
        print(self._outcome.errors[1][1][1].expected)

In a few words, this gives True if all tests run so far exited with no errors or failures:
from unittest import TestCase

class WhateverTestCase(TestCase):

    def tearDown(self):
        return not self._outcome.result.errors and not self._outcome.result.failures
Explore _outcome's properties to access more detailed possibilities.

This is simple, makes use of the public API only, and should work on any Python version:
import unittest

class MyTest(unittest.TestCase):

    def defaultTestResult():
        self.lastResult = unittest.result.TestResult()
        return self.lastResult

    ...

-
This doesn't work. You probably omitted `self` in `def defaultTestResult():`. The result of `unittest.result.TestResult()` is an empty test result. The attribute `lastResult` that has been assigned by `defaultTestResult` is not accessible in `tearDown`. – hynekcer Jun 24 '22 at 18:03
Python-version-independent code using global variables:
import unittest

global test_case_id
global test_title
global test_result

test_case_id = ''
test_title = ''
test_result = ''

class Dummy(unittest.TestCase):

    def setUp(self):
        pass

    def tearDown(self):
        global test_case_id
        global test_title
        global test_result
        self.test_case_id = test_case_id
        self.test_title = test_title
        self.test_result = test_result
        print('Test case id is : ', self.test_case_id)
        print('test title is : ', self.test_title)
        print('Test test result is : ', self.test_result)

    def test_a(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test1'
        test_title = 'To verify test1'
        test_result = self.assertTrue(True)

    def test_b(self):
        global test_case_id
        global test_title
        global test_result
        test_case_id = 'test2'
        test_title = 'To verify test2'
        test_result = self.assertFalse(False)

if __name__ == "__main__":
    unittest.main()
