
Most test frameworks assume that "1 test = 1 Python method/function", and consider a test passed when the function executes without raising an assertion error.

I'm testing a compiler-like program (a program that reads *.foo files and processes their contents), for which I want to execute the same test on many input (*.foo) files. In other words, my test looks like:

class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test

    def list_testcases(self):
        # essentially os.listdir('tests/') and filter *.foo files
        ...

    def test_all(self):
        for f in self.list_testcases():
            self.one_file(f)

My current code uses unittest from Python's standard library, i.e. one_file uses self.assert...(...) statements to check whether the test passes.
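
(As an illustration only, such a one_file method could look something like the sketch below; the mycompiler command-line tool and the *.expected convention are made-up placeholders, not my actual setup.)

import subprocess
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        # Run the (hypothetical) compiler on one input file and check that
        # it exits successfully and produces the expected output.
        result = subprocess.run(['mycompiler', filename],
                                stdout=subprocess.PIPE,
                                universal_newlines=True)
        self.assertEqual(result.returncode, 0)
        with open(filename.replace('.foo', '.expected')) as expected:
            self.assertEqual(result.stdout, expected.read())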

This works, in the sense that I do get a program which succeeds/fails when my code is OK/buggy, but I'm losing a lot of the advantages of the testing framework:

  • I don't get relevant reporting like "X failures out of Y tests" nor the list of passed/failed tests. (I'm planning to use such a system not only to test my own development but also to grade students' code as a teacher, so reporting is important to me.)

  • I don't get test independence. The second test runs in the environment left by the first, and so on. The first failure stops the test suite: testcases coming after a failure are not run at all.

  • I get the feeling that I'm abusing my test framework: there's only one test function, so unittest's automatic test discovery sounds like overkill, for example. The same code could (should?) be written in plain Python with a basic assert.

An obvious alternative is to change my code to something like

class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test

    def test_file1(self):
        self.one_file("first-testcase.foo")

    def test_file2(self):
        self.one_file("second-testcase.foo")

Then I get all the advantages of unittest back, but:

  • It's a lot more code to write.

  • It's easy to "forget" a testcase, i.e. create a test file in tests/ and forget to add it to the Python test.

I can imagine a solution where I would generate one method per testcase dynamically (along the lines of setattr(self, 'test_file' + str(n), ...)), to generate the code for the second solution without having to write it by hand. But that sounds really overkill for a use-case which doesn't seem so complex.
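
(For concreteness, something along these lines, as an untested sketch: the generated methods have to be attached to the class, not to self, before unittest collects tests, and the helper name _make_test is made up.)

import os
import unittest

class Test(unittest.TestCase):
    def one_file(self, filename):
        ...  # do the actual test

def _make_test(filename):
    # Bind the filename in a closure so each generated method tests one file.
    def test(self):
        self.one_file(filename)
    return test

# Attach one test_file<n> method per tests/*.foo file at import time,
# so that unittest sees them as separate, independently reported tests.
foo_files = sorted(f for f in os.listdir('tests/') if f.endswith('.foo'))
for n, f in enumerate(foo_files):
    setattr(Test, 'test_file' + str(n), _make_test(os.path.join('tests', f)))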

How could I get the best of both, i.e. automatic testcase discovery (list tests/*.foo files), test independence and proper reporting?

Matthieu Moy
  • You can take a look at http://pythonhosted.org/behave/. It contains great parametrization capabilities. – Laszlowaty Aug 18 '17 at 08:08
  • Thanks for the hint, but I don't see how this would solve my problem (I may very well have missed something though...). Essentially, behave would allow me to write natural language instead of Python, and [`Scenario Outlines`](http://pythonhosted.org/behave/tutorial.html#scenario-outlines) would allow factoring code (a bit like my `one_file` function above), but a testsuite would still need to list all test files explicitly, right? – Matthieu Moy Aug 18 '17 at 08:36
  • Regarding behave: also, in this context I prefer writing Python code directly rather than natural language. Anyway, thanks again for the suggestion; I'm looking for food for thought as much as I'm looking for a real solution ;-). – Matthieu Moy Aug 18 '17 at 08:38

2 Answers


If you can use pytest as your test runner, then this is actually pretty straightforward using the parametrize decorator:

import pytest, glob

all_files = glob.glob('some/path/*.foo')

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    ...  # do the actual test

This will also automatically name the tests in a useful way, so that you can see which files have failed:

$ py.test
================================== test session starts ===================================
platform darwin -- Python 3.6.1, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
[...]
======================================== FAILURES ========================================
_____________________________ test_one_file[some/path/a.foo] _____________________________

filename = 'some/path/a.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
>      assert False
E      assert False

test_it.py:7: AssertionError
_____________________________ test_one_file[some/path/b.foo] _____________________________

filename = 'some/path/b.foo'

    @pytest.mark.parametrize('filename', all_files)
    def test_one_file(filename):
[...]
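
One refinement worth considering: all_files is computed at import time relative to the current working directory, so it is usually safer to build the pattern relative to the test module itself. A sketch, where the tests/ layout is just an assumption:

import glob
import os

import pytest

# Collect input files next to this test module so the tests do not depend
# on the directory py.test is invoked from ('tests/' layout is hypothetical).
HERE = os.path.dirname(os.path.abspath(__file__))
all_files = sorted(glob.glob(os.path.join(HERE, 'tests', '*.foo')))

@pytest.mark.parametrize('filename', all_files)
def test_one_file(filename):
    ...  # do the actual test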
Aaron V
    Excellent, thanks. Essentially, the keyword I was missing was "parameterized" (well, in my case a dynamic parameterization), and googling with it I find https://stackoverflow.com/questions/32899/how-to-generate-dynamic-parametrized-unit-tests-in-python which is essentially the same question. – Matthieu Moy Aug 23 '17 at 11:10

Here is a solution, although it might be considered not very beautiful... The idea is to dynamically create new functions, add them to the test class, and encode the argument (e.g., the filename) in each function's name, recovering it with inspect at run time:

# imports
import inspect
import unittest

# test class
class Test(unittest.TestCase):

    # example test case
    def test_default(self):
        print('test_default')
        self.assertEqual(2,2)

# set string for creating new function    
func_string="""def test(cls):

        # get function name and use it to pass information
        filename = inspect.stack()[0][3]

        # print function name for demonstration purposes
        print(filename)

        # dummy test for demonstration purposes
        cls.assertEqual(type(filename),str)"""

# add new test for each item in list
for f in ['test_bla','test_blu','test_bli']:

    # build the source code of the new function by substituting its name
    source = func_string.replace('test', f)

    # create the new function in the current namespace
    exec(source)

    # add new function to test class
    setattr(Test, f, eval(f))

if __name__ == "__main__":
    unittest.main()

This correctly runs all four tests and returns:

> test_bla
> test_bli
> test_blu
> test_default
> Ran 4 tests in 0.040s
> OK
David
  • Thanks for your answer. This is actually what I meant by "I can imagine a solution where I would generate one method per testcase dynamically ...". Works, but seems really overkill to me. Freddie's answer about `parametrize` seems to just do the right thing to me. – Matthieu Moy Aug 23 '17 at 09:49