98

EDIT: switched to a better example, and clarified why this is a real problem.

I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:

import unittest

class Car(object):
  def __init__(self, make, model):
    self.make = make
    self.model = make  # Copy and paste error: should be model.
    self.has_seats = True
    self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
  def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    self.assertEqual(car.make, make)
    self.assertEqual(car.model, model)  # Failure!
    self.assertTrue(car.has_seats)
    self.assertEqual(car.wheel_count, 4)  # Failure!

Here, the purpose of the test is to ensure that Car's __init__ sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").

If we assume that it's best here to not break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the model error and re-run the test, then the wheel_count error appears. It would save me time to see both errors when I first run the test.

For comparison, Google's C++ unit testing framework distinguishes between non-fatal EXPECT_* assertions and fatal ASSERT_* assertions:

The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.

Is there a way to get EXPECT_*-like behavior in Python's unittest? If not in unittest, then is there another Python unit test framework that does support this behavior?


Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some code examples (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.

Bruce Christensen
  • What did you end up doing? I'm interested in this topic (for completely different reasons which I'd be happy to discuss in a more spacious place than a comment) and would like to know your experience. By the way, the "code examples" link ends up with "Sadly, this service has been shut down", so if you have a cached version of that I'd be interested to see it too. – Davide Aug 20 '12 at 01:39
  • For future reference, I believe [this](https://code.google.com/hosting/search?q=%22import+unittest%22+unittest.testcase+self.assertEqual&projectsearch=Search+projects) is the equivalent search on the current system, but the results are no longer as described above. – ZAD-Man Jun 10 '14 at 16:57
  • @Davide, I didn't end up doing anything. The "only make one assertion per method" approach seems too rigidly dogmatic to me, but the only workable (and maintainable) solution seems to be Anthony's "catch and append" suggestion. That's too ugly for me, though, so I just stuck with multiple asserts per method, and I'll have to live with running tests more times than needed to find all failures. – Bruce Christensen Aug 19 '14 at 21:02
  • The python testing framework called **PyTest** is quite intuitive, and by default shows all the assert failures. That could be a work-around to the problem you're facing. – Surya Shekhar Chakraborty Aug 08 '18 at 05:14

13 Answers

51

Another way to get non-fatal assertions is to catch each assertion's exception and store it in a list, then assert that the list is empty as part of tearDown.

import unittest

class Car(object):
  def __init__(self, make, model):
    self.make = make
    self.model = make  # Copy and paste error: should be model.
    self.has_seats = True
    self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
  def setUp(self):
    self.verificationErrors = []

  def tearDown(self):
    self.assertEqual([], self.verificationErrors)

  def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    try: self.assertEqual(car.make, make)
    except AssertionError as e: self.verificationErrors.append(str(e))
    try: self.assertEqual(car.model, model)  # Failure!
    except AssertionError as e: self.verificationErrors.append(str(e))
    try: self.assertTrue(car.has_seats)
    except AssertionError as e: self.verificationErrors.append(str(e))
    try: self.assertEqual(car.wheel_count, 4)  # Failure!
    except AssertionError as e: self.verificationErrors.append(str(e))

if __name__ == "__main__":
    unittest.main()
  • Pretty sure I agree with you. That's how Selenium deals with verification errors in the python backend. – Anthony Batchelor Feb 23 '11 at 15:00
  • Yeah, the problem with this solution is that all asserts are counted as errors (not failures), and the way the errors are rendered is not really usable. Still, it's a workable approach, and the rendering can easily be improved. – eMarine Apr 24 '15 at 09:41
  • I'm using this solution in combination with [dietbudda's answer](http://stackoverflow.com/questions/4732827/continuing-in-pythons-unittest-when-an-assertion-fails/4744463#4744463) by overriding all assertions in `unittest.TestCase` with try / except blocks. – thodic May 26 '15 at 09:53
  • For complex test patterns this is the best way to work around the unittest limitation, but it makes the test look rather ugly with all the try/excepts. It is a trade-off between lots of tests and a complex single test. I've started returning an error dict instead, so I can check an entire test pattern in one test and keep readability for my fellow casual Python developers. – MortenB Jan 05 '18 at 09:22
  • This is extremely clever, so hats off to you. – courtsimas Nov 20 '18 at 04:13
50

Since Python 3.4 you can also use subtests:

def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    with self.subTest(msg='Car.make check'):
        self.assertEqual(car.make, make)
    with self.subTest(msg='Car.model check'):
        self.assertEqual(car.model, model)
    with self.subTest(msg='Car.has_seats check'):
        self.assertTrue(car.has_seats)
    with self.subTest(msg='Car.wheel_count check'):
        self.assertEqual(car.wheel_count, 4)

(The msg parameter is used to more easily identify which check failed.)

Output:

======================================================================
FAIL: test_init (__main__.CarTest) [Car.model check]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 23, in test_init
    self.assertEqual(car.model, model)
AssertionError: 'Ford' != 'Model T'
- Ford
+ Model T


======================================================================
FAIL: test_init (__main__.CarTest) [Car.wheel_count check]
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 27, in test_init
    self.assertEqual(car.wheel_count, 4)
AssertionError: 3 != 4

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=2)
Zuku
34

One option is to assert on all the values at once as a tuple.

For example:

class CarTest(unittest.TestCase):
  def test_init(self):
    make = "Ford"
    model = "Model T"
    car = Car(make=make, model=model)
    self.assertEqual(
            (car.make, car.model, car.has_seats, car.wheel_count),
            (make, model, True, 4))

The output from this test would be:

======================================================================
FAIL: test_init (test.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\temp\py_mult_assert\test.py", line 17, in test_init
    (make, model, True, 4))
AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)

First differing element 1:
Ford
Model T

- ('Ford', 'Ford', True, 3)
?           ^ -          ^

+ ('Ford', 'Model T', True, 4)
?           ^  ++++         ^

This shows that both the model and the wheel count are incorrect.

hwiechers
9

What you'll probably want to do is derive from unittest.TestCase, since that's the class that raises when an assertion fails. You will have to re-architect your TestCase so that it doesn't raise (maybe by keeping a list of failures instead). Re-architecting can cause other issues that you would have to resolve; for example, you may end up needing to derive TestSuite to support the changes made to your TestCase.
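A minimal sketch of the "keep a list of failures" idea (the class and method names here are mine, not from the answer): non-fatal checks append to a list, and tearDown flushes the list as a single failure.

import unittest

class CollectingTestCase(unittest.TestCase):
    def setUp(self):
        self.failures = []

    def checkEqual(self, first, second, msg=None):
        # Non-fatal counterpart of assertEqual: record instead of raising.
        try:
            self.assertEqual(first, second, msg)
        except AssertionError as e:
            self.failures.append(str(e))

    def tearDown(self):
        # Fail once, reporting every recorded mismatch.
        if self.failures:
            self.fail('\n'.join(self.failures))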

dietbuddha
  • I figured that this would probably be the eventual answer, but I wanted to cover my bases and see if I was missing anything. Thanks! – Bruce Christensen Jan 20 '11 at 18:10
  • I'd say it is overkill to override `TestCase` for the sake of implementing soft assertions--they are especially easy to make in Python: just catch all your `AssertionError`s (maybe in a simple loop), store them in a list or a set, then fail them all at once. Check out @Anthony Batchelor's answer for specifics. – dcsordas Nov 08 '13 at 16:34
  • @dscordas It depends on whether this is for a one-off test or you want this ability for most tests. – dietbuddha Nov 09 '13 at 04:14
6

It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.

Sometimes, however, it is okay to check multiple things at the same time, for instance when you are asserting properties of the same object. In that case you are really asserting a single thing: that the object is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method so that it shows all failing properties, or shows the complete state of the expected object and the complete state of the actual object when an assert fails; see the sketch below.
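A minimal sketch of such a helper method on your TestCase (the name is mine, assuming the Car class from the question): it gathers every mismatch and fails once with the complete list.

def assertCarMatches(self, car, make, model, has_seats, wheel_count):
    # Hypothetical helper: compare every property, gather all mismatches,
    # and fail once with the complete list.
    expected = [('make', make), ('model', model),
                ('has_seats', has_seats), ('wheel_count', wheel_count)]
    mismatches = ['%s: expected %r, got %r' % (name, value, getattr(car, name))
                  for name, value in expected if getattr(car, name) != value]
    if mismatches:
        self.fail('Car mismatch:\n' + '\n'.join(mismatches))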

Steven
  • I agree that it's great to make tests as fine-grained as possible, but no finer. :) I'm looking for better solutions for the second situation you mention, where you really do want to check multiple things at once (as in the new example that I added). I could write a custom helper, but the natural way to do so seems to be to use unittest.TestCase's assert* methods directly! (except that I can't, since they're fatal) – Bruce Christensen Jan 19 '11 at 18:14
  • @Bruce: An assert should fail or succeed, never something in between. Tests should be trustworthy, readable, and maintainable. A failing assert that does not fail the test is a bad idea: it makes your tests overly complicated (which lowers readability and maintainability), and having tests that are 'allowed to fail' makes it easy to ignore them, which means they are not trustworthy. – Steven Jan 19 '11 at 20:49
  • Any reason why the rest of the test can't run and it still be fatal? I would think you could delay reporting the failure in favor of aggregating all the possible failures that may occur. – dietbuddha Jan 20 '11 at 07:10
  • I think we're both saying the same thing. I want every failing assert to cause the test to fail; it's just that I want the failure to occur when the test method returns, rather than immediately when the assert is tested, as @dietbuddha mentioned. This would allow *all* of the asserts in the method to be tested, so that I can see (and fix) all failures in one shot. The test is still trustworthy, readable, and maintainable (even more so, actually). – Bruce Christensen Jan 20 '11 at 17:24
  • He's not saying the test shouldn't fail when you hit the assert, he's saying the failure shouldn't prevent the other checks. For example, right now I'm testing that particular directories are user, group, and other writable. Each is a separate assert. It would be useful to know from the test output that all three cases are failing, so I can fix them with one chmod call, rather than getting "Path is not user-writable," having to run the test again to get "Path is not group-writable" and so on. Although I guess I just argued that they should be separate tests... – Tim Keating Aug 25 '11 at 23:20
  • Just because the library is called unittest, it doesn't mean that the test is an isolated unit test. The unittest module, as well as pytest and nose and others, work great for system tests, integration tests, etc. With the one caveat being that you can only fail once. It's annoying really. I'd really like to see all of the assert functions either add a parameter that allows you to continue with a failure, or a duplication of the assert functions called expectBlah, that do such a thing. Then it would be way easier to write larger functional tests with unittest. – Okken Dec 15 '14 at 22:38
6

There is a soft assertion package in PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack trace data, and reporting it all as part of the usual unittest output.

For instance, this code:

import softest

class ExampleTest(softest.TestCase):
    def test_example(self):
        # be sure to pass the assert method object, not a call to it
        self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
        # self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle')) # will not work as desired
        self.soft_assert(self.assertTrue, True)
        self.soft_assert(self.assertTrue, False)

        self.assert_all()

if __name__ == '__main__':
    softest.main()

...produces this console output:

======================================================================
FAIL: "test_example" (ExampleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 14, in test_example
    self.assert_all()
  File "C:\...\softest\case.py", line 138, in assert_all
    self.fail(''.join(failure_output))
AssertionError: ++++ soft assert failure details follow below ++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The following 2 failures were found in "test_example" (ExampleTest):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Failure 1 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 10, in test_example
    self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
  File "C:\...\softest\case.py", line 84, in soft_assert
    assert_method(*arguments, **keywords)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
    assertion_func(first, second, msg=msg)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
    self.fail(self._formatMessage(msg, standardMsg))
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: 'Worf' != 'wharf'
- Worf
+ wharf
 : Klingon is not ship receptacle

+--------------------------------------------------------------------+
Failure 2 ("test_example" method)
+--------------------------------------------------------------------+
Traceback (most recent call last):
  File "C:\...\softest_test.py", line 12, in test_example
    self.soft_assert(self.assertTrue, False)
  File "C:\...\softest\case.py", line 84, in soft_assert
    assert_method(*arguments, **keywords)
  File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true


----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)

NOTE: I created and maintain softest.

nikodaemus
4

gtest's EXPECT_* behavior is very useful. Here is a Python equivalent (originally shared as a gist):

import sys
import unittest


class TestCase(unittest.TestCase):
    def run(self, result=None):
        if result is None:
            self.result = self.defaultTestResult()
        else:
            self.result = result

        return unittest.TestCase.run(self, result)

    def expect(self, val, msg=None):
        '''
        Like TestCase.assertTrue, but doesn't halt the test.
        '''
        try:
            self.assertTrue(val, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    def expectEqual(self, first, second, msg=None):
        '''
        Like TestCase.assertEqual, but doesn't halt the test.
        '''
        try:
            self.assertEqual(first, second, msg)
        except AssertionError:
            self.result.addFailure(self, sys.exc_info())

    expect_equal = expectEqual

    assert_equal = unittest.TestCase.assertEqual
    assert_raises = unittest.TestCase.assertRaises


test_main = unittest.main
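
Usage would then look something like this (a hypothetical example, assuming the Car class from the question):

class CarTest(TestCase):  # the TestCase subclass defined above
    def test_init(self):
        car = Car(make="Ford", model="Model T")
        self.expectEqual(car.make, "Ford")      # non-fatal: failure is recorded, test continues
        self.expectEqual(car.model, "Model T")  # non-fatal
        self.expectEqual(car.wheel_count, 4)    # non-fatal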
Ken
4

Do each assert in a separate method.

class MathTest(unittest.TestCase):
  def test_addition1(self):
    self.assertEqual(1 + 0, 1)

  def test_addition2(self):
    self.assertEqual(1 + 1, 3)

  def test_addition3(self):
    self.assertEqual(1 + (-1), 0)

  def test_addition4(self):
    self.assertEqual(-1 + (-1), -1)
Lennart Regebro
  • I realize that that's one possible solution, but it's not always practical. I'm looking for something that works without breaking up one formerly-cohesive test into several little methods. – Bruce Christensen Jan 19 '11 at 17:29
  • @Bruce Christensen: If they are so cohesive then perhaps they form a story? And then they can be made into doctests, which indeed *will* continue even after failure. – Lennart Regebro Jan 19 '11 at 18:17
  • I have a set of tests, something like this: 1. load data, 2. assert data loaded correctly, 3. modify data, 4. assert modification worked correctly, 5. save modified data, 6. assert data saved correctly. How can I do that with this method? It doesn't make sense to load the data in `setup()`, because that's one of the tests. But if I put each assertion into its own function, then I have to load data 3 times, and that's a huge waste of resources. What's the best way to deal with a situation like that? – naught101 Jan 19 '15 at 07:58
  • Well, tests that test a specific sequence should be in the same test method. – Lennart Regebro Jan 19 '15 at 14:55
2

I liked @Anthony Batchelor's approach of capturing the AssertionError, but here is a slight variation that uses decorators, plus a way to report the test cases with pass/fail.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unittest

class UTReporter(object):
    '''
    The UT Report class keeps track of tests cases
    that have been executed.
    '''
    def __init__(self):
        self.testcases = []
        print "init called"

    def add_testcase(self, testcase):
        self.testcases.append(testcase)

    def display_report(self):
        for tc in self.testcases:
            msg = "=============================" + "\n" + \
                "Name: " + tc['name'] + "\n" + \
                "Description: " + str(tc['description']) + "\n" + \
                "Status: " + tc['status'] + "\n"
            print(msg)

reporter = UTReporter()

def assert_capture(*args, **kwargs):
    '''
    The Decorator defines the override behavior.
    unit test functions decorated with this decorator, will ignore
    the Unittest AssertionError. Instead they will log the test case
    to the UTReporter.
    '''
    def assert_decorator(func):
        def inner(*args, **kwargs):
            tc = {}
            tc['name'] = func.__name__
            tc['description'] = func.__doc__
            try:
                func(*args, **kwargs)
                tc['status'] = 'pass'
            except AssertionError:
                tc['status'] = 'fail'
            reporter.add_testcase(tc)
        return inner
    return assert_decorator



class DecorateUt(unittest.TestCase):

    @assert_capture()
    def test_basic(self):
        x = 5
        self.assertEqual(x, 4)

    @assert_capture()
    def test_basic_2(self):
        x = 4
        self.assertEqual(x, 4)

def main():
    #unittest.main()
    suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
    unittest.TextTestRunner(verbosity=2).run(suite)

    reporter.display_report()


if __name__ == '__main__':
    main()

Output from console:

(awsenv)$ ./decorators.py 
init called
test_basic (__main__.DecorateUt) ... ok
test_basic_2 (__main__.DecorateUt) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK
=============================
Name: test_basic
Description: None
Status: fail

=============================
Name: test_basic_2
Description: None
Status: pass
Zoro_77
1

I had a problem with @Anthony Batchelor's answer because it would have forced me to use try...except inside my unit tests. Instead, I encapsulated the try...except logic in an override of the TestCase.assertEqual method. Here is the code:

import unittest
import traceback

class AssertionErrorData(object):

    def __init__(self, stacktrace, message):
        super(AssertionErrorData, self).__init__()
        self.stacktrace = stacktrace
        self.message = message

class MultipleAssertionFailures(unittest.TestCase):

    def __init__(self, *args, **kwargs):
        self.verificationErrors = []
        super(MultipleAssertionFailures, self).__init__( *args, **kwargs )

    def tearDown(self):
        super(MultipleAssertionFailures, self).tearDown()

        if self.verificationErrors:
            errors = []

            for index, error in enumerate( self.verificationErrors, 1 ):
                errors.append( "%s\nAssertionError %s: %s" % ( 
                        error.stacktrace, index, error.message ) )

            # Clear before fail(), because fail() raises and would skip this line.
            self.verificationErrors = []
            self.fail( '\n\n' + "\n".join( errors ) )

    def assertEqual(self, goal, results, msg=None):

        try:
            super( MultipleAssertionFailures, self ).assertEqual( goal, results, msg )

        except unittest.TestCase.failureException as error:
            goodtraces = self._goodStackTraces()
            self.verificationErrors.append( 
                    AssertionErrorData( "\n".join( goodtraces[:-2] ), error ) )

    def _goodStackTraces(self):
        """
            Get only the relevant part of stacktrace.
        """
        stop = False
        found = False
        goodtraces = []

        # stacktrace = traceback.format_exc()
        # stacktrace = traceback.format_stack()
        stacktrace = traceback.extract_stack()

        # https://stackoverflow.com/questions/54499367/how-to-correctly-override-testcase
        for stack in stacktrace:
            filename = stack.filename

            if found and not stop and \
                    not filename.find( 'lib' ) < filename.find( 'unittest' ):
                stop = True

            if not found and filename.find( 'lib' ) < filename.find( 'unittest' ):
                found = True

            if stop and found:
                stackline = '  File "%s", line %s, in %s\n    %s' % ( 
                        stack.filename, stack.lineno, stack.name, stack.line )
                goodtraces.append( stackline )

        return goodtraces

# class DummyTestCase(unittest.TestCase):
class DummyTestCase(MultipleAssertionFailures):

    def setUp(self):
        self.maxDiff = None
        super(DummyTestCase, self).setUp()

    def tearDown(self):
        super(DummyTestCase, self).tearDown()

    def test_function_name(self):
        self.assertEqual( "var", "bar" )
        self.assertEqual( "1937", "511" )

if __name__ == '__main__':
    unittest.main()

Result output:

F
======================================================================
FAIL: test_function_name (__main__.DummyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\User\Downloads\test.py", line 77, in tearDown
    super(DummyTestCase, self).tearDown()
  File "D:\User\Downloads\test.py", line 29, in tearDown
    self.fail( '\n\n' + "\n\n".join( errors ) )
AssertionError: 

  File "D:\User\Downloads\test.py", line 80, in test_function_name
    self.assertEqual( "var", "bar" )
AssertionError 1: 'var' != 'bar'
- var
? ^
+ bar
? ^
 : 

  File "D:\User\Downloads\test.py", line 81, in test_function_name
    self.assertEqual( "1937", "511" )
AssertionError 2: '1937' != '511'
- 1937
+ 511
 : 

More alternative solutions for capturing the correct stacktrace can be found at How to correctly override TestCase.assertEqual(), producing the right stacktrace?

Evandro Coan
0

I don't think there is a way to do this with PyUnit and wouldn't want to see PyUnit extended in this way.

I prefer to stick to one assertion per test function (or more specifically asserting one concept per test) and would rewrite test_addition() as four separate test functions. This would give more useful information on failure, viz:

.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 10, in test_addition_with_two_negatives
    self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1

======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 6, in test_addition_with_two_positives
    self.assertEqual(1 + 1, 3)  # Failure!
AssertionError: 2 != 3

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=2)

If you decide that this approach isn't for you, you may find this answer helpful.

Update

It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are being stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.

The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
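
A sketch of that split (my reconstruction, not the answer's original code; the test names match the output below):

class CarTest(unittest.TestCase):
    def setUp(self):
        self.make = "Ford"
        self.model = "Model T"
        self.car = Car(make=self.make, model=self.model)

    def test_creation_parameters(self):
        self.assertEqual(self.car.make, self.make)
        self.assertEqual(self.car.model, self.model)  # Failure!

    def test_creation_defaults(self):
        self.assertTrue(self.car.has_seats)
        self.assertEqual(self.car.wheel_count, 4)  # Failure!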

Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.

FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 25, in test_creation_defaults
    self.assertEqual(self.car.wheel_count, 4)  # Failure!
AssertionError: 3 != 4

======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 20, in test_creation_parameters
    self.assertEqual(self.car.model, self.model)  # Failure!
AssertionError: 'Ford' != 'Model T'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

FAILED (failures=2)
johnsyweb
0

I realize this question was asked literally years ago, but there are now (at least) two Python packages that allow you to do this.

One is softest: https://pypi.org/project/softest/

The other is Python-Delayed-Assert: https://github.com/pr4bh4sh/python-delayed-assert

I haven't used either, but they look pretty similar to me.

Todd Bradley
0

I think I found a solution that works. Using Selenium, I stored a list of text values, then looped through the list until I found an item containing the text I needed. Using an if/else, I break out of the loop and assign a marker value to a dummy variable once the value is found, and then assert on that value outside of the for-loop.

    elements = self.driver.find_elements(*element)
    print(elements)
    marker = "apple"  # default, in case the value is never found
    for element in elements:
        print(element.text)
        text = element.text
        time_strip = combined_time[:-2]  # test-case specific code
        found = time_strip in text       # test-case specific code
        print(found)
        if found:
            marker = "banana"
            break
    if marker == "banana":
        print(marker)
        assert 2 == 2
    else:
        print(marker)
        assert 2 == 1