
I have a situation where I can do a very fast validation check on an entire object, and if that check passes the object is guaranteed to be healthy. If it fails, I need to identify the problematic aspects using time-intensive checks.

I was hoping for something like `@pytest.mark.dependency(depends=["test_a"])`, except that instead of running the dependent test only when `test_a` succeeds, it would run only when `test_a` fails.

DiscOH
  • I don't think `pytest` supports something like this. Is there an issue with just using an `if` statement and running another test if your assertion fails? – Matt Messersmith Nov 29 '18 at 18:28
  • I'm mostly asking in search of best practices. – DiscOH Nov 29 '18 at 18:30
  • Can you explain your "intensive checks" in more detail? You would usually run the failing test alone with a debugger on or similar. – Klaus D. Nov 29 '18 at 18:31
  • I've seen something similar to this, and as I recall we just used an `if` statement. I think this case comes up extraordinarily rarely, so I think a one off `if` is apropos. If you find it happens a lot in your API, you could write a decorator that works like you hope – Matt Messersmith Nov 29 '18 at 18:34
  • @KlausD. This has come up for database validations (checking the hash of a table is easy, checking the hash of each row / column is costly) and for image validations (comparing a single screenshot vs comparing each element cropped and isolated). – DiscOH Nov 29 '18 at 18:34
  • You are looking for [`pytest-dependency`](https://pytest-dependency.readthedocs.io/en/latest/usage.html). – hoefling Nov 29 '18 at 20:36
  • @hoefling I actually took a look at that before posting this, it only works for the opposite case (run test b only if test a succeeds), right? If not, could you provide an example of a "run only if test a fails"? – DiscOH Nov 29 '18 at 20:38
  • You're right, my bad! Didn't read your question through enough. Let me dig deeper, then. – hoefling Nov 29 '18 at 20:42
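
For reference, here is a minimal sketch of the single-test `if` approach suggested in the comments above, using the table-hash scenario as an illustration (the `ROWS` data and the inline hashing are hypothetical stand-ins, not from the question):

import hashlib

import pytest

# Hypothetical stand-ins for the real table and its known-good hashes.
ROWS = {"r1": "alice", "r2": "bob", "r3": "carol"}
EXPECTED_ROW_HASHES = {k: hashlib.sha256(v.encode()).hexdigest() for k, v in ROWS.items()}
EXPECTED_TABLE_HASH = hashlib.sha256("".join(ROWS.values()).encode()).hexdigest()


def test_table_integrity():
    # Fast check: a single hash over the whole table.
    if hashlib.sha256("".join(ROWS.values()).encode()).hexdigest() == EXPECTED_TABLE_HASH:
        return  # the whole object is healthy, skip the expensive work
    # Fast check failed: run the costly per-row checks to pinpoint the damage.
    bad_rows = [k for k, v in ROWS.items()
                if hashlib.sha256(v.encode()).hexdigest() != EXPECTED_ROW_HASHES[k]]
    pytest.fail("corrupted rows: %s" % bad_rows)

The trade-off is that the slow diagnostics live inside a single test, so they do not appear as separate items in the report; the marker-based approach in the answer below keeps them as individual tests.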

1 Answer


As you correctly pointed out, pytest-dependency can't handle your case out of the box: it skips dependent tests when the dependency fails, not when it succeeds. However, with some customization of the plugin you can get the desired result. Example:

# conftest.py

import pytest
from pytest_dependency import DependencyManager


def pytest_collection_modifyitems(session, config, items):
    # Swap in the custom dependency manager for every module carrying the
    # @pytest.mark.depend_on_failures mark.
    modules = (item.getparent(pytest.Module) for item in items)
    marked_modules = {m for m in modules if m.get_closest_marker('depend_on_failures')}
    for module in marked_modules:
        module.dependencyManager = FailureDepManager()


class FailureDepManager(DependencyManager):

    def checkDepend(self, depends, item):
        # Inverted logic: skip the dependent test only if the dependency PASSED.
        for i in depends:
            if i in self.results and self.results[i].isSuccess():
                # pytest.skip() raises, so no break is needed afterwards.
                pytest.skip('%s depends on failures in %s' % (item.name, i))

`FailureDepManager` is a customized version of pytest-dependency's `DependencyManager` that skips dependent tests only when the dependency succeeds (i.e. its result is PASSED or XPASS). Sadly, this behaviour can only be enabled on a per-module basis, as this is a current limitation of the plugin (see this question for more details). Example usage:

# test_spam.py

import pytest

pytestmark = pytest.mark.depend_on_failures


@pytest.mark.dependency()
@pytest.mark.xfail(reason='simulate failing test')
def test_foo():
    assert False


@pytest.mark.dependency(depends=['test_foo'])
def test_bar():
    assert True

Thanks to the module-level `depend_on_failures` mark, `test_bar` now runs when `test_foo` fails:

================================== test session starts ==================================
platform linux -- Python 3.7.0, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
...
plugins: dependency-0.3.2
collected 2 items                                                                           

test_spam.py::test_foo xfail
test_spam.py::test_bar PASSED

========================== 1 passed, 1 xfailed in 0.08 seconds ==========================
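
One practical note: newer pytest versions (4.5+) warn about marks that are not registered, so the custom `depend_on_failures` mark should be declared as well. A minimal sketch of that registration, added to the same conftest.py, could look like this (assuming pytest 4.5 or later):

def pytest_configure(config):
    # Register the custom mark so pytest does not warn about an unknown marker.
    config.addinivalue_line(
        "markers",
        "depend_on_failures: skip dependent tests only when the dependency passes",
    )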
hoefling