
I am looking for a way to run all of the assertions in my unit tests in PyTest, even if some of them fail. I know there must be a simple way to do this. I checked the CLI options and looked through this site for similar questions/answers but didn't see anything. Sorry if this has already been answered.

For example, consider the following code snippet, with PyTest code alongside it:

def parrot(i):
    return i

def test_parrot():
    assert parrot(0) == 0
    assert parrot(1) == 1
    assert parrot(2) == 1
    assert parrot(2) == 2

By default, the execution stops at the first failure:

$ python -m pytest fail_me.py 
=================== test session starts ===================
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /home/npsrt/Documents/repo/codewars, inifile: 
collected 1 items 

fail_me.py F

=================== FAILURES ===================
___________________ test_parrot ___________________

    def test_parrot():
        assert parrot(0) == 0
        assert parrot(1) == 1
>       assert parrot(2) == 1
E       assert 2 == 1
E        +  where 2 = parrot(2)

fail_me.py:7: AssertionError
=================== 1 failed in 0.05 seconds ===================

What I'd like to do is to have the code continue to execute even after PyTest encounters the first failure.

Jeff Wright
    See also [this question](https://stackoverflow.com/q/4732827/102441) for `unittest`, which is linked to by a bunch of very similar questions – Eric Dec 29 '17 at 09:39
  • 1
    `@pytest.mark.parametrize` is what you're looking for. it takes 2 arguments, the variable name that will be providing the data, and the data you wish to supply to the test. So, to achieve what you want, the following can be done. @pytest.mark.parametrize('parrot_num', (1, 2, 3, 4, 5)) def parrot(parrot_num): return parrot_num def test_parrot(): assert parrot(0) == 0 assert parrot(1) == 1 assert parrot(2) == 1 assert parrot(2) == 2 – Arthur Bowers Oct 21 '21 at 14:25

4 Answers


It ran all of your tests. You only wrote one test, and that test ran!

If you want nonfatal assertions, where a test will keep going if an assertion fails (like Google Test's EXPECT macros), try pytest-expect, which provides that functionality. Here's the example their site gives:

def test_func(expect):
    expect('a' == 'b')
    expect(1 != 1)
    a = 1
    b = 2
    expect(a == b, 'a:%s b:%s' % (a,b))

You can see that expectation failures don't stop the test, and all failed expectations get reported:

$ python -m pytest test_expect.py
================ test session starts =================
platform darwin -- Python 2.7.9 -- py-1.4.26 -- pytest-2.7.0
rootdir: /Users/okken/example, inifile: 
plugins: expect
collected 1 items 

test_expect.py F

====================== FAILURES ======================
_____________________ test_func ______________________
>    expect('a' == 'b')
test_expect.py:2
--------
>    expect(1 != 1)
test_expect.py:3
--------
>    expect(a == b, 'a:%s b:%s' % (a,b))
a:1 b:2
test_expect.py:6
--------
Failed Expectations:3
============== 1 failed in 0.01 seconds ==============
user2357112
  • Aha! That answers my question. I was simply running one test, which had multiple assertions within it. I'll check out the pytest-expect module as well. – Jeff Wright Apr 20 '16 at 19:31
  • 4
    Note that [development on `pytest-expect`](https://github.com/okken/pytest-expect) has gone rather stale – Eric Dec 29 '17 at 09:40
  • 2
    See below for an answer about `pytest-check` which is a rewrite of `pytest-expect` – nealmcb Aug 06 '20 at 15:31

As others already mentioned, you'd ideally write multiple tests and only have one assertion in each (that's not a hard limit, but a good guideline).

The @pytest.mark.parametrize decorator makes this easy:

import pytest

def parrot(i):
    return i

@pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
def test_parrot(inp, expected):
    assert parrot(inp) == expected

When running it with -v:

parrot.py::test_parrot[0-0] PASSED
parrot.py::test_parrot[1-1] PASSED
parrot.py::test_parrot[2-1] FAILED
parrot.py::test_parrot[2-2] PASSED

=================================== FAILURES ===================================
_______________________________ test_parrot[2-1] _______________________________

inp = 2, expected = 1

    @pytest.mark.parametrize('inp, expected', [(0, 0), (1, 1), (2, 1), (2, 2)])
    def test_parrot(inp, expected):
>       assert parrot(inp) == expected
E       assert 2 == 1
E        +  where 2 = parrot(2)

parrot.py:8: AssertionError
====================== 1 failed, 3 passed in 0.01 seconds ======================
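If you want the known-bad case to be reported without turning the whole run red, `pytest.param` with an `xfail` mark is one option (a sketch, assuming the `(2, 1)` case is genuinely expected to fail; the `reason` text is illustrative):

```python
import pytest

def parrot(i):
    return i

# pytest.param lets you attach marks to individual cases; here the case
# that contradicts parrot's behavior is marked as an expected failure.
@pytest.mark.parametrize('inp, expected', [
    (0, 0),
    (1, 1),
    pytest.param(2, 1, marks=pytest.mark.xfail(reason="parrot returns its input")),
    (2, 2),
])
def test_parrot(inp, expected):
    assert parrot(inp) == expected
```

With this, `pytest -v` reports the marked case as `XFAIL` while the other three still pass.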
The Compiler

The pytest plugin pytest-check is a rewrite of pytest-expect (which was recommended here previously but has gone stale). It lets you do "soft" assertions. An example from the GitHub repo:

import pytest_check as check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")
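The idea behind these soft-assertion helpers can be sketched in plain Python: collect failures instead of raising immediately, then fail once at the end. This is a hand-rolled illustration of the pattern, not pytest-check's actual implementation:

```python
# Hand-rolled soft assertions: record failures as they happen,
# then raise a single AssertionError at the end if any occurred.
class SoftCheck:
    def __init__(self):
        self.failures = []

    def check(self, condition, message="check failed"):
        # Record the failure instead of raising, so later checks still run.
        if not condition:
            self.failures.append(message)

    def finish(self):
        if self.failures:
            raise AssertionError("; ".join(self.failures))

sc = SoftCheck()
sc.check(1 > 2, "a should be greater than b")
sc.check(2 <= 1, "b should be <= a")
print(sc.failures)  # both failures are recorded before anything is raised
```

pytest-check does essentially this bookkeeping for you and integrates the collected failures into pytest's report.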
ChrisGS

You should be able to control this with the --maxfail argument. I believe the default is to not stop for failures, so I'd check any py.test config files you might have for a place that's overriding it.

Daenyth
  • Thanks for your quick reply. (Please see my updated original question above for info about my environment.) Unfortunately, it doesn't seem to work for me. PyTest gives me the same output when I invoke --maxfail as when I run without it. My new command line is: python -m pytest --maxfail=5 fail_me.py – Jeff Wright Apr 20 '16 at 18:32
  • 7
    `--maxfail` determines how many _tests_ to fail, not how many `assert`ions – Eric Dec 29 '17 at 09:36
  • 1
    The default is indeed to not stop. Note that an alias for `--maxfail=1` is `--exitfirst` or `-x`, which can also show up `pytest` file like: `addopts = -x` – nealmcb Aug 06 '20 at 15:43
  • In addition to what Eric says, in Python, a false assertion will raise an AssertionError and therefore exit the current scope. No amount of configuration can prevent that. Well, in fact, you can disable the assertion entirely, but it won't test anything anymore in that case. – Romain Vincent Apr 20 '23 at 17:31
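As the last comment notes, a failing `assert` raises `AssertionError` and exits the current scope unless something catches it, which is why no flag alone can make later assertions in the same test run. A minimal illustration of that behavior:

```python
# A failing assert raises AssertionError, which aborts the enclosing
# function unless something catches it.
def check_values():
    results = []
    try:
        assert 1 == 2, "first check"
    except AssertionError as e:
        results.append(str(e))  # record the failure instead of aborting
    results.append("reached the second check")
    return results

print(check_values())  # prints ['first check', 'reached the second check']
```

Plugins like pytest-expect and pytest-check work by doing this kind of catch-and-record for you rather than letting the exception propagate.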