
Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.

test_foo.py:

from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>

Now imagine that, in the future, I expect different implementations of the same interface to be written. I would like those implementations to be able to reuse the tests written for the above test suite: the only things that need to change are

  1. The import of the Implementation
  2. <test data set 1>, <test data set 2> etc.

So I am looking for a way to write the above tests in a reusable way that allows authors of new implementations of the interface to use the tests, by injecting the implementation and the test data into them, without having to modify the file containing the original specification of the tests.

What would be a good, idiomatic way of doing this in pytest?

====================================================================

Here is a unittest version that (isn't pretty but) works.

define_tests.py:

# Single, reusable definition of tests for the interface. Authors of
# new implementations of the interface merely have to provide the test
# data, as class attributes of a class which inherits
# unittest.TestCase AND this class.
class TheTests():

    def test_foo(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_foo_data:
            self.assertEqual(self.Implementation(*args).foo(in_),
                             out)

    def test_bar(self):
        # Faking pytest.mark.parametrize by looping
        for args, in_, out in self.test_bar_data:
            self.assertEqual(self.Implementation(*args).bar(in_),
                             out)

v1.py:

# One implementation of the interface
class Implementation:

    def __init__(self, a,b):
        self.n = a+b

    def foo(self, n):
        return self.n + n

    def bar(self, n):
        return self.n - n

v1_test.py:

# Test for one implementation of the interface
from v1 import Implementation
from define_tests import TheTests
from unittest import TestCase

# Hook into testing framework by inheriting unittest.TestCase and reuse
# the tests which *each and every* implementation of the interface must
# pass, by inheritance from define_tests.TheTests
class FooTests(TestCase, TheTests):

    Implementation = Implementation

    test_foo_data = (((1,2), 3,  6),
                     ((4,5), 6, 15))

    test_bar_data = (((1,2), 3,  0),
                     ((4,5), 6,  3))

Anybody (even a client of the library) writing another implementation of this interface

  • can reuse the set of tests defined in define_tests.py
  • can inject their own test data into the tests
  • without modifying any of the original files
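
For comparison, here is a rough sketch of how the same class-attribute pattern might be translated into pytest, replacing the hand-rolled loops with real parametrization. pytest_generate_tests, metafunc.cls and metafunc.parametrize are genuine pytest hooks/APIs; the file names and data below simply mirror the unittest example above.

define_tests.py:

# Single, reusable, pytest flavoured definition of the tests for the
# interface: plain asserts, no unittest.TestCase needed.
class TheTests:

    def test_foo(self, args, in_, out):
        assert self.Implementation(*args).foo(in_) == out

    def test_bar(self, args, in_, out):
        assert self.Implementation(*args).bar(in_) == out

conftest.py:

# Turn the data class attributes into real parametrized test cases.
def pytest_generate_tests(metafunc):
    # If the collecting class defines e.g. test_foo_data, use it to
    # parametrize test_foo.
    data = getattr(metafunc.cls, metafunc.function.__name__ + "_data", None)
    if data is not None:
        metafunc.parametrize("args, in_, out", data)

v1_test.py:

from define_tests import TheTests
from v1 import Implementation

# Collected because the name matches pytest's default Test* pattern.
class TestV1(TheTests):

    Implementation = Implementation

    test_foo_data = (((1, 2), 3, 6), ((4, 5), 6, 15))
    test_bar_data = (((1, 2), 3, 0), ((4, 5), 6, 3))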

– jacg

4 Answers

====================================================================

This is a great use case for parametrized test fixtures.

Your code could look something like this:

import pytest
from foo import Bar, Baz

@pytest.fixture(params=[Bar, Baz])
def Implementation(request):
    return request.param

def test_one(Implementation):
    assert Implementation().frobnicate()

This would have test_one run twice: once where Implementation=Bar and once where Implementation=Baz.

Note that since Implementation is just a fixture, you can change its scope, or do more setup (maybe instantiate the class, maybe configure it somehow).
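
For instance, a broader-scoped fixture that also instantiates and configures each implementation might look roughly like this (a sketch only: configure() and close() are hypothetical methods standing in for whatever setup and teardown the implementations need):

@pytest.fixture(params=[Bar, Baz], scope="module")
def implementation_instance(request):
    # One configured instance per parametrized class, shared across the module.
    instance = request.param()
    instance.configure(verbose=True)  # hypothetical configuration hook
    yield instance
    instance.close()                  # hypothetical cleanup hook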

If used with the pytest.mark.parametrize decorator, pytest will generate all the permutations. For example, assuming the code above, and this code here:

@pytest.mark.parametrize('thing', [1, 2])
def test_two(Implementation, thing):
    assert Implementation(thing).foo == thing

test_two will run four times, with the following configurations:

  • Implementation=Bar, thing=1
  • Implementation=Bar, thing=2
  • Implementation=Baz, thing=1
  • Implementation=Baz, thing=2
– Frank T
  • How would this allow the authors of new implementations to add new parametrizations of the tests without touching the original files containing the test definitions? – jacg Oct 16 '14 at 15:10
  • You can move the fixture definition into a central place, a conftest.py file at a higher level. Documentation is here: http://pytest.org/latest/fixture.html#sharing-a-fixture-across-tests-in-a-module-or-class-session Do you want to keep new authors from touching certain test files in particular, or anything in the test suite? – Frank T Oct 16 '14 at 20:19
  • It should be possible to add extensions without modifying *any* existing code. If I ship a library with a few conforming implementations and their tests, clients of the library should be able to add new implementations *and their tests* without modifying *any* of the files that came with the library. This would be trivial, were it not for the fact that I want to reuse the existing tests by injecting new data into them. Moving the fixture into a conftest.py which ships with the library would (unless I've missed something) still require the extender to change a file that came with the library. – jacg Oct 16 '14 at 22:52
  • I've edited the question to include a working code sample demonstrating the idea in unittest. The fundamental point is: reuse of tests without modification of *any* files which previously existed. What's more, this unittest implementation would work regardless of where the files are in the filesystem, as long as the `import define_tests` works inside the new test file; something which, I suspect, might be tricky in pytest. – jacg Oct 17 '14 at 00:02
  • @jacg did you ever find what you were looking for? I'd like to know too. – DavidC Aug 28 '16 at 03:19
  • @jacg err, why doesn't the approach you posted with `unittest` work in `pytest` too? – DavidC Aug 28 '16 at 03:57
  • @DavidC No, I didn't find what I was looking for. I guess the unittest example could be translated to pytest, but it wouldn't really be playing to pytest's strengths: it feels like struggling against it rather than using it as it was meant to be used. – jacg Aug 28 '16 at 16:08
  • @jacg you still get lots of the nice `pytest`y stuff, and it doesn't feel too much to me like fighting against `pytest`. What is it about pytest that makes this harder? Grouping tests into classes (which is necessary for this) seems to be something people sometimes do in `pytest` anyway (although much less than with `unittest`, of course). – DavidC Aug 28 '16 at 22:57
  • (I'm facing this problem too. Your approach is what I'm planning to do for now.) – DavidC Aug 28 '16 at 22:57

====================================================================

You can't do it without class inheritance, but you don't have to use unittest.TestCase. To make it more pytest-like you can use fixtures.

This allows you, for example, to parametrize the fixture or to use other fixtures.

Here is a simple example:

import pytest


class SomeTest:

    @pytest.fixture
    def implementation(self):
        return "A"

    def test_a(self, implementation):
        assert "A" == implementation


class OtherTest(SomeTest):

    @pytest.fixture(params=["B", "C"])
    def implementation(self, request):
        return request.param


def test_a(implementation):
    """The "implementation" fixture is not accessible outside the classes."""
    assert "A" == implementation

The test inherited by the second class fails for both of its parameters, and the module-level test cannot see the fixture at all:

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'B'
E         - A
E         + B

    def test_a(self, implementation):
>       assert "A" == implementation
E       assert 'A' == 'C'
E         - A
E         + C

  def test_a(implementation):
        fixture 'implementation' not found

Don't forget that you have to set python_classes = *Test in pytest.ini, otherwise classes whose names end (rather than start) with Test will not be collected.
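
A minimal pytest.ini along those lines (python_classes is the actual option name; several space-separated glob patterns may be listed if you also want the default Test* prefix to keep working):

# pytest.ini
[pytest]
python_classes = *Test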

– Daniel Barton

====================================================================

I did something similar to what @Daniel Barton was saying, adding additional fixtures.

Let's say you have one interface and two implementations:

class InterfaceA:
    pass  # The common interface (sketched here so the example is self-contained).
class Imp1(InterfaceA):
    pass  # Some implementation.
class Imp2(InterfaceA):
    pass  # Some implementation.

You can indeed encapsulate testing in subclasses:

import pytest

@pytest.fixture
def imp_1():
    yield Imp1()

@pytest.fixture
def imp_2():
    yield Imp2()


class InterfaceToBeTested:
    @pytest.fixture
    def imp(self):
        pass
    
    def test_x(self, imp):
        assert imp.test_x()
    
    def test_y(self, imp):
        assert imp.test_y()

class TestImp1(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_1):
        yield imp_1

    def test_1(self, imp):
        assert imp.test_1()

class TestImp2(InterfaceToBeTested):
    @pytest.fixture
    def imp(self, imp_2):
        yield imp_2

Note how, by adding an additional derived class and overriding the fixture that returns the implementation, you can run all of the inherited tests against it; and in case there are implementation-specific tests, they can be written in that derived class as well.

– feran

====================================================================

Conditional Plugin Based Solution

There is in fact a technique that leans on the pytest_plugins list, where you can condition its value on something outside of pytest itself, namely environment variables and command line arguments. Consider the following:

import os

if os.environ["pytest_env"] == "env_a":
    pytest_plugins = [
        "projX.plugins.env_a",
    ]
elif os.environ["pytest_env"] == "env_b":
    pytest_plugins = [
        "projX.plugins.env_b",
    ]
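
Assuming the snippet above lives in the root-level conftest.py, each environment would then be exercised in its own pytest session by setting the environment variable on the command line (the tests/ path is just illustrative):

pytest_env=env_a pytest tests/
pytest_env=env_b pytest tests/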

I authored a GitHub repository to share some pytest experiments demonstrating the above techniques with commentary along the way and test run results. The relevant section to this particular question is the conditional_plugins experiment. https://github.com/jxramos/pytest_behavior

This would position you to use the same test module with two different implementations of an identically named fixture. However, you'd need to invoke the tests once per implementation, with the selection mechanism singling out the fixture implementation of interest, so testing the two fixture variations takes two pytest sessions.

In order to reuse the tests you have in place, you'd need to establish a root directory higher than the project you're trying to reuse and define a conftest.py file there that does the plugin selection. That still may not be enough, because of the overriding behavior of the test module and of any intermediate conftest.py files if you leave the directory structure as is. But if you're free to reshuffle files while leaving their contents unchanged, you just need to get the existing conftest.py out of the path between the test module and the root directory, and rename it so that it can be picked up as a plugin instead.

Configuration / Command line Selection of Plugins

Pytest actually has a -p command line option where you can list multiple plugins back to back to specify the plugin files to load. You can learn more about that control by looking at the ini_plugin_selection experiment in the pytest_behavior repo.
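
For example, using the plugin module paths from the snippet above (and assuming they are importable), the two sessions could instead be selected explicitly on the command line:

pytest -p projX.plugins.env_a tests/
pytest -p projX.plugins.env_b tests/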

Parametrization over Fixture Values

As of this writing this is a work in progress for core pytest, but there is a third-party plugin, pytest-cases, which supports the notion of a fixture itself being used as a parameter to a test case. With that capability you can parametrize a single test over multiple fixtures, each backed by a different implementation of the API. This sounds like the ideal solution to your use case; however, you would still need to decorate the existing test module with new source to permit this parametrization over fixtures, which may not be an option for you.

Take a look at this rich discussion in an open pytest issue #349 Using fixtures in pytest.mark.parametrize, specifically this comment. He links to a concrete example he wrote up that demonstrates the new fixture parametrization syntax.
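
Until something along those lines lands in core pytest, a common workaround with plain pytest (not the pytest-cases API) is to parametrize an indirection fixture with the names of other fixtures and resolve them with request.getfixturevalue; the implementation classes below are only placeholders:

import pytest

class ImpA:  # placeholder implementation
    def frobnicate(self):
        return True

class ImpB:  # placeholder implementation
    def frobnicate(self):
        return True

@pytest.fixture
def imp_a():
    return ImpA()

@pytest.fixture
def imp_b():
    return ImpB()

# Each parameter is the *name* of another fixture, looked up at run time.
@pytest.fixture(params=["imp_a", "imp_b"])
def imp(request):
    return request.getfixturevalue(request.param)

def test_frobnicate(imp):
    assert imp.frobnicate()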

Commentary

I get the sense that the test fixture hierarchy one can build above a test module, all the way up to the execution's root directory, is oriented more towards fixture reuse than test module reuse. If you think about it, you can define several fixtures high up in a common folder from which a bunch of test modules branch out, potentially landing deep down in a number of child subdirectories. Each of those test modules has access to the fixtures defined in that parent conftest.py, but without doing extra work they only get one definition per fixture name across all those intermediate conftest.py files, even if the same name is reused across the hierarchy. The definition closest to the test module wins, through the pytest fixture overriding mechanism, but the resolution stops at the test module: it does not go past it into any folders beneath the test module where variation might be found. Essentially there is only one path from the test module to the rootdir, which limits each fixture to one definition. This gives us a one-fixture-to-many-test-modules relationship.

– jxramos