365

I have some kind of test data and want to create a unit test for each item. My first idea was to do it like this:

import unittest

l = [["foo", "a", "a",], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequence(unittest.TestCase):
    def testsample(self):
        for name, a,b in l:
            print "test", name
            self.assertEqual(a,b)

if __name__ == '__main__':
    unittest.main()

The downside of this is that it handles all data in one test. I would like to generate one test for each item on the fly. Any suggestions?

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Peter Hoffmann
  • 56,376
  • 15
  • 76
  • 59
  • possible duplicate of [Python unittest: Generate multiple tests programmatically?](http://stackoverflow.com/questions/2798956/python-unittest-generate-multiple-tests-programmatically) – Nakilon Apr 13 '13 at 22:44
  • 2
    A good link that may provide an answer: http://eli.thegreenplace.net/2014/04/02/dynamically-generating-python-test-cases – gaborous Jul 29 '15 at 19:10

25 Answers

289

This is called "parametrization".

There are several tools that support this approach, for example the parameterized package.

The resulting code looks like this:

import unittest
from parameterized import parameterized

class TestSequence(unittest.TestCase):
    @parameterized.expand([
        ["foo", "a", "a",],
        ["bar", "a", "b"],
        ["lee", "b", "b"],
    ])
    def test_sequence(self, name, a, b):
        self.assertEqual(a,b)

Which will generate the tests:

test_sequence_0_foo (__main__.TestSequence) ... ok
test_sequence_1_bar (__main__.TestSequence) ... FAIL
test_sequence_2_lee (__main__.TestSequence) ... ok

======================================================================
FAIL: test_sequence_1_bar (__main__.TestSequence)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/parameterized/parameterized.py", line 233, in <lambda>
    standalone_func = lambda *a: func(*(a + p.args), **p.kwargs)
  File "x.py", line 12, in test_sequence
    self.assertEqual(a,b)
AssertionError: 'a' != 'b'

For historical reasons, I'll leave the original answer (circa 2008):

I use something like this:

import unittest

l = [["foo", "a", "a",], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequense(unittest.TestCase):
    pass

def test_generator(a, b):
    def test(self):
        self.assertEqual(a,b)
    return test

if __name__ == '__main__':
    for t in l:
        test_name = 'test_%s' % t[0]
        test = test_generator(t[1], t[2])
        setattr(TestSequense, test_name, test)
    unittest.main()
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Dmitry Mukhin
  • 6,649
  • 3
  • 29
  • 31
  • 27
    Actually, bignose, this code DOES generate a different name for each test (it actually wouldn't work otherwise). In the example given the tests executed will be named "test_foo", "test_bar", and "test_lee" respectively. Thus the benefit you mention (and it is a big one) is preserved as long as you generate sensible names. – Toji Feb 23 '11 at 16:52
  • 2
    As the answer given by @codeape states, nose handles this. However, nose does not seem to handle Unicode; therefore for me this is a preferable solution. +1 – Keith Pinson Aug 13 '11 at 20:42
  • As the answer by @codeape says below, nose has support for this. However, it does not seem to support Unicode very well; therefore, this answer is preferable for those who are running tests with Unicode data. +1 – Keith Pinson Aug 13 '11 at 20:45
  • There is a problem: when you do `python myfile.py TestSequense.test_foo`, you'll get `ValueError: no such test method in : test_generator`, so you can only do `python myfile.py TestSequense` – Nakilon Apr 12 '13 at 10:31
  • 5
    So note that a more proper answer is given in the *duplicate* question: http://stackoverflow.com/a/2799009/322020 - you have to use `.__name__ =` to enable *`.exact_method`* testing – Nakilon Apr 12 '13 at 10:38
  • 8
    Why does the code modifying the class appear in the `if __name__ == '__main__'` conditional? Surely it should go outside this to run at import time (remembering that python modules are only imported once even if imported from several different places) – SpoonMeiser Dec 21 '13 at 00:52
  • @SpoonMeiser, this was just a sample, of course there are tons of ways to improve things. thanks for headsup – Dmitry Mukhin Dec 21 '13 at 15:38
  • 4
    I don't think this is a good solution. The code of a unittest should not depend on the way it gets called. The TestCase should be useable in nose or pytest or a different test environment. – guettli Apr 29 '14 at 12:21
  • @mojo - How do you run just one test when you use nose for parameterized tests ? I use pycharm IDE. When I right click and run a parameterized nose test, I get the error `Error Traceback (most recent call last): File "C:\Python3\lib\unittest\case.py", line 384, in _executeTestPart function() TypeError: 'NoneType' object is not callable`. So, I have to run all tests just to check one test. How do I fix this ? – Erran Morad Oct 10 '16 at 01:56
  • @BoratSagdiyev, i've answered the question 8 years ago :) I'm afraid I'm no longer an expert on this. Furthermore, the `nose_parameterized` bit was added by someone else. Sorry! – Dmitry Mukhin Oct 10 '16 at 10:31
  • 2
    My impression is that this was a great answer 10 years ago. But it got upvoted so much because it is the oldest answer, not the best answer. Nowadays, the way to go is probably py.test's parametrize decorator. (Metaclasses are ugly and do not work with py.test; many single-use packages are no longer developed or supported.) – Jérémie Jun 17 '19 at 15:44
  • @Jérémie I agree, do you want me to update the answer? – Dmitry Mukhin Jun 21 '19 at 14:25
  • I've wrapped it up in a static function which I added to my TestCase definition, called addTest(cls, id, func_name, params, description). It works well, but the tests must be added in global scope (not under `if __name__ == "__main__"`) so test "runners" may load the module and find the tests as well. eg: myclass.addTest(1, "test_equal", ("a", "b"), "Test values a and b are equal") The next step will be to build a loop around that and take the data from a list. Not sure why unittest does not have this feature already. – brookbot Apr 23 '20 at 22:09
  • @BrookeWallace this was a demo code for an answer posted in 2008. now there are better tools for this, including something in unittest, according to other high ranked answers. – Dmitry Mukhin Apr 27 '20 at 13:48
  • This works great! @DmitryMukhin do you know if there's any way to use fixtures instead of the string values of a and b? – user613 Mar 09 '22 at 12:34
255

Using unittest (since 3.4)

Since Python 3.4, the standard library unittest package has the subTest context manager.

See the documentation for details.

Example:

from unittest import TestCase

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

class TestDemonstrateSubtest(TestCase):
    def test_works_as_expected(self):
        for p1, p2 in param_list:
            with self.subTest(p1=p1, p2=p2):
                self.assertEqual(p1, p2)

You can also specify a custom message and parameter values to subTest():

with self.subTest(msg="Checking if p1 equals p2", p1=p1, p2=p2):
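
Applied to the data from the question, a complete version might look like this (a sketch; the msg argument and the extra keyword parameters are exactly the ones described above):

from unittest import TestCase

param_list = [("foo", "a", "a"), ("bar", "a", "b"), ("lee", "b", "b")]

class TestSequence(TestCase):
    def test_sequence(self):
        for name, a, b in param_list:
            with self.subTest(msg=name, a=a, b=b):
                self.assertEqual(a, b)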

Using nose

The nose testing framework supports this.

Example (the code below is the entire contents of the file containing the test):

param_list = [('a', 'a'), ('a', 'b'), ('b', 'b')]

def test_generator():
    for params in param_list:
        yield check_em, params[0], params[1]

def check_em(a, b):
    assert a == b

The output of the nosetests command:

> nosetests -v
testgen.test_generator('a', 'a') ... ok
testgen.test_generator('a', 'b') ... FAIL
testgen.test_generator('b', 'b') ... ok

======================================================================
FAIL: testgen.test_generator('a', 'b')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/case.py", line 203, in runTest
    self.test(*self.arg)
  File "testgen.py", line 7, in check_em
    assert a == b
AssertionError

----------------------------------------------------------------------
Ran 3 tests in 0.006s

FAILED (failures=1)
Michael Mior
  • 28,107
  • 9
  • 89
  • 113
codeape
  • 97,830
  • 24
  • 159
  • 188
  • But be aware, 'setup()' will not know what variables are being used as arguments to yield. Actually setup() won't know what test is running, or vars set inside test_generator(). This complicates sanity checking within setup(), and it's one of the reasons that some folks prefer py.test. – Scott Prive Jan 29 '16 at 21:30
  • This works! but then you can not use unittest.TestCase and its assert* method families. – Shiplu Mokaddim Mar 27 '18 at 13:15
  • 1
    Actually, you can. I had a look at the unittest docs and this was added in 3.4 (I've updated the answer with a code example). – codeape Apr 04 '18 at 13:21
  • 2
    Is there a way to run the unittest version with pytest, so that it would run all the cases and not stop on the first failed parameter? – kakk11 Apr 24 '19 at 17:13
  • 3
    As mentioned by @kakk11, this answer (and subTest in general) does not work with pytest. This is a known issue. There is an actively developed plugin to make this work: https://github.com/pytest-dev/pytest-subtests – Jérémie Jun 17 '19 at 15:47
  • 6
    there are two issues: 1) the first failed subtest causes subsequent subtests not to be run 2) `setUp()` and `tearDown()` are not called for subtests – HiFile.app - best file manager Sep 07 '19 at 18:30
  • How does this differ from the same code without `self.subTest()`? I don't see any report about which subtest the error occurred in. Is it by design? – Dims Feb 24 '23 at 22:53
  • These are still not separate tests; in the console you will see a single test. – G M Jul 26 '23 at 08:18
92

This can be solved elegantly using Metaclasses:

import unittest

l = [["foo", "a", "a",], ["bar", "a", "b"], ["lee", "b", "b"]]

class TestSequenceMeta(type):
    def __new__(mcs, name, bases, dict):

        def gen_test(a, b):
            def test(self):
                self.assertEqual(a, b)
            return test

        for tname, a, b in l:
            test_name = "test_%s" % tname
            dict[test_name] = gen_test(a,b)
        return type.__new__(mcs, name, bases, dict)

class TestSequence(unittest.TestCase):
    __metaclass__ = TestSequenceMeta

if __name__ == '__main__':
    unittest.main()
Guy
  • 1,984
  • 1
  • 16
  • 19
  • very nice, ran into various issues with setattr, this just works! – OriginalCliche Sep 17 '14 at 12:24
  • 2
    This worked GREAT for me with Selenium. As a note, in the class TestSequence, you can define "static" methods like setUp(self), is_element_present(self, how, what), ... tearDown(self). Putting them AFTER the "__metaclass__ = TestSequenceMeta" statement seems to work. – Love and peace - Joe Codeswell Sep 08 '15 at 23:37
  • 6
    This solution is better than the one selected as accepted IMHO. – petroslamb Jan 18 '16 at 13:42
  • 1
    Can you perhaps clarify why this does not work when just overriding `__new__` from `TestSequence` class (and thus skip the metaclass altogether)? – petroslamb Jan 18 '16 at 13:50
  • 2
    @petroslamb The `__new__` method in the metaclass gets called when the class itself is defined, not when the first instance is created. I would imagine this method of dynamically creating test methods is more compatible with the introspection used by `unittest` to determine how many tests are in a class (i.e. it may compile the list of tests before it ever creates an instance of that class). – BillyBBone Feb 23 '16 at 04:20
  • This works way better for me than the accepted answer: Test discovery didn't work properly with the other answer, but does with this one. – Stuart Axon Apr 29 '16 at 20:38
  • 20
    Note: in python 3, change this to: `class TestSequence(unittest.TestCase, metaclass=TestSequenceMeta):[...]` – Mathieu_Du Dec 30 '16 at 18:40
  • 1
    ... and to support both Python 2 and 3, see http://python-future.org/compatible_idioms.html#metaclasses – Thibaud Colas Feb 16 '17 at 12:13
  • It's useful to note that this only works when defining `__new__`, and not `__init__`. This works much better than the accepted answer IMO because it will work even when the unittest is imported or added to a test suite. – cangers Nov 15 '18 at 18:58
  • 10
    Could you please use `dct` instead of `dict`? Using keywords as variable names is confusing and error-prone. – npfoss Nov 26 '18 at 04:26
  • Worked very well for the situation where I had some regular expressions, and knew what the match.groupdict should be for each positive/ negative case. I didn't end up needing it, but it is good to know that you can pass arbitrary keyword parameters to the metaclass `__new__` function. – Nathan Chappell Jun 02 '20 at 09:22
65

As of Python 3.4, subtests have been introduced to unittest for this purpose. See the documentation for details. TestCase.subTest is a context manager which allows one to isolate asserts in a test so that a failure will be reported with parameter information, but it does not stop the test execution. Here's the example from the documentation:

class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)

The output of a test run would be:

======================================================================
FAIL: test_even (__main__.NumbersTest) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 32, in test_even
    self.assertEqual(i % 2, 0)
AssertionError: 1 != 0

======================================================================
FAIL: test_even (__main__.NumbersTest) (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 32, in test_even
    self.assertEqual(i % 2, 0)
AssertionError: 1 != 0

======================================================================
FAIL: test_even (__main__.NumbersTest) (i=5)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 32, in test_even
    self.assertEqual(i % 2, 0)
AssertionError: 1 != 0

This is also part of unittest2, so it is available for earlier versions of Python.
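
For older interpreters, a common import fallback is the following sketch (it assumes the unittest2 backport is installed there):

try:
    import unittest2 as unittest  # Backport that provides subTest() on older Pythons
except ImportError:
    import unittest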

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Bernhard
  • 8,583
  • 4
  • 41
  • 42
44

load_tests is a little-known mechanism, introduced in Python 2.7, for dynamically creating a TestSuite. With it, you can easily create parametrized tests.

For example:

import unittest

class GeneralTestCase(unittest.TestCase):
    def __init__(self, methodName, param1=None, param2=None):
        super(GeneralTestCase, self).__init__(methodName)

        self.param1 = param1
        self.param2 = param2

    def runTest(self):
        pass  # Test that depends on param 1 and 2.


def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        test_cases.addTest(GeneralTestCase('runTest', p1, p2))
    return test_cases

That code will run all the TestCases in the TestSuite returned by load_tests. No other tests are automatically run by the discovery mechanism.

Alternatively, you can also use inheritance as shown in this ticket: http://bugs.python.org/msg151444
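
A rough sketch of that inheritance-based idea (illustrative only, not the exact code from the ticket; the base class is a plain mixin so it is not collected on its own):

import unittest

class GeneralTestMixin(object):
    # Subclasses override these parameters as class attributes.
    param1 = None
    param2 = None

    def test_something(self):
        # Test that depends on param1 and param2.
        self.assertIsNotNone(self.param1)

class TestCase_1_2(GeneralTestMixin, unittest.TestCase):
    param1, param2 = 1, 2

class TestCase_3_4(GeneralTestMixin, unittest.TestCase):
    param1, param2 = 3, 4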

Javier
  • 2,752
  • 15
  • 30
  • 1
    The code above fails: TypeError: __init__() takes at most 2 arguments (4 given) – max Nov 17 '15 at 05:31
  • 2
    Added null defaults to the constructor extra parameters. – Javier Nov 24 '15 at 01:56
  • I prefer the nose-parameterize code in [@mojo's answer](http://stackoverflow.com/a/32939/527489), but for my clients it's just too useful to avoid an extra dependency so I'll be using this for them. – sage Mar 19 '16 at 15:55
  • 1
    This solution was my favourite on this page. Both [Nose](https://nose.readthedocs.io/), suggested in the current top answer, and its fork [Nose2](http://nose2.readthedocs.io/) are maintenance only, and the latter suggests users instead try [pytest](http://pytest.readthedocs.io/). What a mess - I'll stick to a native approach like this! – Sean Jan 31 '18 at 22:00
  • The only disadvantage I found is that third-party test runners sometimes don't support this mechanism, as they don't know about it. – Javier Feb 01 '18 at 14:58
  • 1
    bonus: ability to redefine shortDescription method for output passed in params – fun_vit Feb 14 '18 at 10:30
  • You can also do `for p in ` and `*p` while initializing; it avoids initializing each item and then passing on! – Nishant Mar 21 '21 at 11:44
40

This can be done with pytest. Just write a file test_me.py with the following content:

import pytest

@pytest.mark.parametrize('name, left, right', [['foo', 'a', 'a'],
                                               ['bar', 'a', 'b'],
                                               ['baz', 'b', 'b']])
def test_me(name, left, right):
    assert left == right, name

Run your test with the command py.test --tb=short test_me.py. The output will look like this:

=========================== test session starts ============================
platform darwin -- Python 2.7.6 -- py-1.4.23 -- pytest-2.6.1
collected 3 items

test_me.py .F.

================================= FAILURES =================================
_____________________________ test_me[bar-a-b] _____________________________
test_me.py:8: in test_me
    assert left == right, name
E   AssertionError: bar
==================== 1 failed, 2 passed in 0.01 seconds ====================

It's simple! pytest also has more features, such as fixtures, marks, and plain asserts.
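
For illustration, a minimal fixture sketch (the fixture name and the values here are made up, not from the original answer):

import pytest

@pytest.fixture
def resource():
    # Set something up for each test, hand it to the test, then clean up.
    data = {"ready": True}
    yield data
    data.clear()

@pytest.mark.parametrize("value", ["a", "b"])
def test_with_fixture(resource, value):
    assert resource["ready"]
    assert value in ("a", "b")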

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Sergei Voronezhskii
  • 2,184
  • 19
  • 32
  • 1
    I was looking for a simple, straight forward example how to parametrize test cases with py.test. Thank you very much! – timgeb Mar 25 '16 at 15:13
  • @timgeb I'm glad to help you. Check the [py.test](http://stackoverflow.com/questions/tagged/py.test) tag for more examples. Also, I suggest using [hamcrest](https://github.com/hamcrest/PyHamcrest) to add some sugar to your asserts with human-readable matchers, which can be modified, combined or created your own way. Plus we have [allure-python](https://github.com/allure-framework/allure-python), a nice-looking report generator for `py.test` – Sergei Voronezhskii Mar 25 '16 at 15:55
  • Thanks. I just started moving from `unittest` to py.test. I used to have `TestCase` base classes that were able to dynamically create children with different arguments which they would store as class variables... which was a bit unwieldy. – timgeb Mar 25 '16 at 15:57
  • 1
    @timgeb Yep, you are right. The most *killer feature* of `py.test` is [yield_fixtures](https://pytest.org/latest/yieldfixture.html), which can do *setup*, return some useful data into the test, and do the *teardown* after the test ends. Fixtures can also be [parametrized](https://pytest.org/latest/example/parametrize.html#apply-indirect-on-particular-arguments). – Sergei Voronezhskii Mar 25 '16 at 16:08
15

Use the ddt library. It adds simple decorators for the test methods:

import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):

    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))

This library can be installed with pip. It doesn't require nose and works excellently with the standard library unittest module.

akaihola
  • 26,309
  • 7
  • 59
  • 69
6

There's also Hypothesis, which adds fuzz or property-based testing.

This is a very powerful testing method.
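
A minimal sketch of how it looks with unittest (the tested property here is made up for illustration):

import unittest
from hypothesis import given, strategies as st

class TestConcatenation(unittest.TestCase):
    # Hypothesis generates many input strings instead of a fixed list of cases.
    @given(st.text(), st.text())
    def test_length_adds_up(self, a, b):
        self.assertEqual(len(a + b), len(a) + len(b))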

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Javier
  • 2,752
  • 15
  • 30
6

This is effectively the same as parameterized as mentioned in a previous answer, but specific to unittest:

import functools
import unittest

def sub_test(param_list):
    """Decorates a test case to run it as a set of subtests."""

    def decorator(f):

        @functools.wraps(f)
        def wrapped(self):
            for param in param_list:
                with self.subTest(**param):
                    f(self, **param)

        return wrapped

    return decorator

Example usage:

class TestStuff(unittest.TestCase):
    @sub_test([
        dict(arg1='a', arg2='b'),
        dict(arg1='x', arg2='y'),
    ])
    def test_stuff(self, arg1, arg2):
        ...
Eric Cousineau
  • 1,944
  • 14
  • 23
6

You would benefit from trying the TestScenarios library.

testscenarios provides clean dependency injection for python unittest style tests. This can be used for interface testing (testing many implementations via a single test suite) or for classic dependency injection (provide tests with dependencies externally to the test code itself, allowing easy testing in different situations).
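
A minimal sketch of the question's data with testscenarios (based on its documented scenarios attribute; treat the details as illustrative):

import testscenarios

class TestSequence(testscenarios.TestWithScenarios):
    scenarios = [
        ('foo', dict(a='a', b='a')),
        ('bar', dict(a='a', b='b')),
        ('lee', dict(a='b', b='b')),
    ]

    def test_equal(self):
        self.assertEqual(self.a, self.b)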

bignose
  • 30,281
  • 14
  • 77
  • 110
4

You can use the nose-ittr plugin (pip install nose-ittr).

It's very easy to integrate with existing tests, and minimal changes (if any) are required. It also supports the nose multiprocessing plugin.

Note that you can also have a customized setup function per test.

@ittr(number=[1, 2, 3, 4])
def test_even(self):
    assert_equal(self.number % 2, 0)

It is also possible to pass nosetests parameters as with their built-in plugin attrib. This way you can run only a specific test with a specific parameter:

nosetests -a number=2
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Maroun
  • 94,125
  • 30
  • 188
  • 241
4

I use metaclasses and decorators to generate tests. You can check my implementation, python_wrap_cases. This library doesn't require any test framework.

Your example:

import unittest
from python_wrap_cases import wrap_case


@wrap_case
class TestSequence(unittest.TestCase):

    @wrap_case("foo", "a", "a")
    @wrap_case("bar", "a", "b")
    @wrap_case("lee", "b", "b")
    def testsample(self, name, a, b):
        print "test", name
        self.assertEqual(a, b)

Console output:

testsample_u'bar'_u'a'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test bar
FAIL
testsample_u'foo'_u'a'_u'a' (tests.example.test_stackoverflow.TestSequence) ... test foo
ok
testsample_u'lee'_u'b'_u'b' (tests.example.test_stackoverflow.TestSequence) ... test lee
ok

You may also use generators. For example, this code generates all possible combinations of tests with the arguments a__list and b__list:

import unittest
from python_wrap_cases import wrap_case


@wrap_case
class TestSequence(unittest.TestCase):

    @wrap_case(a__list=["a", "b"], b__list=["a", "b"])
    def testsample(self, a, b):
        self.assertEqual(a, b)

Console output:

testsample_a(u'a')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... ok
testsample_a(u'a')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'a') (tests.example.test_stackoverflow.TestSequence) ... FAIL
testsample_a(u'b')_b(u'b') (tests.example.test_stackoverflow.TestSequence) ... ok
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Kirill Ermolov
  • 780
  • 6
  • 19
2

I came across ParamUnittest the other day when looking at the source code for radon (example usage on the GitHub repository). It should work with other frameworks that extend TestCase (like Nose).

Here is an example:

import unittest
import paramunittest


@paramunittest.parametrized(
    ('1', '2'),
    #(4, 3),    <---- Uncomment to have a failing test
    ('2', '3'),
    (('4', ), {'b': '5'}),
    ((), {'a': 5, 'b': 6}),
    {'a': 5, 'b': 6},
)
class TestBar(unittest.TestCase):
    def setParameters(self, a, b):
        self.a = a
        self.b = b

    def testLess(self):
        self.assertLess(self.a, self.b)
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Matt
  • 14,353
  • 5
  • 53
  • 65
2
import unittest

def generator(test_class, a, b):
    def test(self):
        self.assertEqual(a, b)
    return test

def _add_test_methods(test_class):
    # Each list item is: variable "a", variable "b", then the name suffix for the generated test case.
    test_list = [[2,3, 'one'], [5,5, 'two'], [0,0, 'three']]
    for case in test_list:
        test = generator(test_class, case[0], case[1])
        setattr(test_class, "test_%s" % case[2], test)


class TestAuto(unittest.TestCase):
    def setUp(self):
        print 'Setup'
        pass

    def tearDown(self):
        print 'TearDown'
        pass

_add_test_methods(TestAuto)  # It's better to start with underscore so it is not detected as a test itself

if __name__ == '__main__':
    unittest.main(verbosity=1)

RESULT:

>>>
Setup
FTearDown
Setup
TearDown
.Setup
TearDown
.
======================================================================
FAIL: test_one (__main__.TestAuto)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:/inchowar/Desktop/PyTrash/test_auto_3.py", line 5, in test
    self.assertEqual(a, b)
AssertionError: 2 != 3

----------------------------------------------------------------------
Ran 3 tests in 0.019s

FAILED (failures=1)
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Arindam Roychowdhury
  • 5,927
  • 5
  • 55
  • 63
  • 1
    Minor problem with your `def add_test_methods` function. Should be `def _add_test_methods ` I think – Raychaser Jan 13 '17 at 19:12
  • @Raychaser...You are correct..I fixed that but did not update it here....Thanks for catching that. – Arindam Roychowdhury Jan 16 '17 at 10:45
  • @ArindamRoychowdhury Is there any way to add the test methods from within the TestAuto class, say in the setup method. If I call the add_test_methods method from either setup method, the dynamic tests do not get added to the TestAuto class – Sennin May 05 '23 at 17:06
1

Just use metaclasses, as seen here:

from unittest import TestCase

class DocTestMeta(type):
    """
    Test functions are generated in metaclass due to the way some
    test loaders work. For example, setupClass() won't get called
    unless there are other existing test methods, and will also
    prevent unit test loader logic being called before the test
    methods have been defined.
    """
    def __init__(self, name, bases, attrs):
        super(DocTestMeta, self).__init__(name, bases, attrs)

    def __new__(cls, name, bases, attrs):
        def func(self):
            """Inner test method goes here"""
            self.assertTrue(1)

        func.__name__ = 'test_sample'
        attrs[func.__name__] = func
        return super(DocTestMeta, cls).__new__(cls, name, bases, attrs)

class ExampleTestCase(TestCase):
    """Our example test case, with no methods defined"""
    __metaclass__ = DocTestMeta

Output:

test_sample (ExampleTestCase) ... OK
SleepyCal
  • 5,739
  • 5
  • 33
  • 47
1

You can use TestSuite and custom TestCase classes.

import unittest

class CustomTest(unittest.TestCase):
    def __init__(self, name, a, b):
        super().__init__()
        self.name = name
        self.a = a
        self.b = b

    def runTest(self):
        print("test", self.name)
        self.assertEqual(self.a, self.b)

if __name__ == '__main__':
    suite = unittest.TestSuite()
    suite.addTest(CustomTest("Foo", 1337, 1337))
    suite.addTest(CustomTest("Bar", 0xDEAD, 0xC0DE))
    unittest.TextTestRunner().run(suite)
Dirk
  • 9,381
  • 17
  • 70
  • 98
Max Malysh
  • 29,384
  • 19
  • 111
  • 115
1

I'd been having trouble with a very particular style of parameterized tests. All our Selenium tests can run locally, but they also should be able to be run remotely against several platforms on SauceLabs. Basically, I wanted to take a large amount of already-written test cases and parameterize them with the fewest changes to code possible. Furthermore, I needed to be able to pass the parameters into the setUp method, something which I haven't seen any solutions for elsewhere.

Here's what I've come up with:

import inspect
import types

test_platforms = [
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "10.0"},
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "11.0"},
    {'browserName': "firefox", 'platform': "Linux", 'version': "43.0"},
]


def sauce_labs():
    def wrapper(cls):
        return test_on_platforms(cls)
    return wrapper


def test_on_platforms(base_class):
    for name, function in inspect.getmembers(base_class, inspect.isfunction):
        if name.startswith('test_'):
            for platform in test_platforms:
                new_name = '_'.join(list([name, ''.join(platform['browserName'].title().split()), platform['version']]))
                new_function = types.FunctionType(function.__code__, function.__globals__, new_name,
                                                  function.__defaults__, function.__closure__)
                setattr(new_function, 'platform', platform)
                setattr(base_class, new_name, new_function)
            delattr(base_class, name)

    return base_class

With this, all I had to do was add a simple decorator @sauce_labs() to each regular old TestCase, and now when running them, they're wrapped up and rewritten, so that all the test methods are parameterized and renamed. LoginTests.test_login(self) runs as LoginTests.test_login_internet_explorer_10.0(self), LoginTests.test_login_internet_explorer_11.0(self), and LoginTests.test_login_firefox_43.0(self), and each one has the parameter self.platform to decide what browser/platform to run against, even in LoginTests.setUp, which is crucial for my task since that's where the connection to SauceLabs is initialized.
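
In code, the usage looks roughly like this (the class and test names are illustrative, taken from the description above):

import unittest

@sauce_labs()
class LoginTests(unittest.TestCase):
    def test_login(self):
        # Expanded into one renamed copy per entry in test_platforms.
        pass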

Anyway, I hope this might be of help to someone looking to do a similar "global" parameterization of their tests!

1

This solution works with unittest and nose for Python 2 and Python 3:

#!/usr/bin/env python
import unittest

def make_function(description, a, b):
    def ghost(self):
        self.assertEqual(a, b, description)
    print(description)
    ghost.__name__ = 'test_{0}'.format(description)
    return ghost


class TestsContainer(unittest.TestCase):
    pass

testsmap = {
    'foo': [1, 1],
    'bar': [1, 2],
    'baz': [5, 5]}

def generator():
    for name, params in testsmap.items():
        test_func = make_function(name, params[0], params[1])
        setattr(TestsContainer, 'test_{0}'.format(name), test_func)

generator()

if __name__ == '__main__':
    unittest.main()
Guillaume Jacquenot
  • 11,217
  • 6
  • 43
  • 49
mop
  • 281
  • 2
  • 7
1

Meta-programming is fun, but it can get in the way. Most solutions here make it difficult to:

  • selectively launch a test
  • point back to the code given the test's name

So, my first suggestion is to follow the simple/explicit path (works with any test runner):

import unittest

class TestSequence(unittest.TestCase):

    def _test_complex_property(self, a, b):
        self.assertEqual(a,b)

    def test_foo(self):
        self._test_complex_property("a", "a")
    def test_bar(self):
        self._test_complex_property("a", "b")
    def test_lee(self):
        self._test_complex_property("b", "b")

if __name__ == '__main__':
    unittest.main()

Since we shouldn't repeat ourselves, my second suggestion builds on Javier's answer: embrace property-based testing with the Hypothesis library, which:

  • is "more relentlessly devious about test case generation than us mere humans"

  • will provide simple counter-examples

  • works with any test runner

  • has many more interesting features (statistics, additional test output, ...)

    import unittest
    from hypothesis import given, example, strategies as st

    class TestSequence(unittest.TestCase):

        @given(st.text(), st.text())
        def test_complex_property(self, a, b):
            self.assertEqual(a, b)

To test your specific examples, just add:

    @example("a", "a")
    @example("a", "b")
    @example("b", "b")

To run only one particular example, you can comment out the other examples (the provided examples are run first). You may want to use @given(st.nothing()). Another option is to replace the whole block with:

    @given(st.just("a"), st.just("b"))

OK, you don't have distinct test names. But maybe you just need:

  • a descriptive name of the property under test.
  • which input leads to failure (falsifying example).

Funnier example

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
YvesgereY
  • 3,778
  • 1
  • 20
  • 19
1

I have found that this works well for my purposes, especially if I need to generate tests that do slightly different processing on a collection of data.

import unittest

def rename(newName):
    def renamingFunc(func):
        func.__name__ = newName
        return func
    return renamingFunc

class TestGenerator(unittest.TestCase):

    TEST_DATA = {}

    @classmethod
    def generateTests(cls):
        for dataName, dataValue in TestGenerator.TEST_DATA.items():
            for func in cls.getTests(dataName, dataValue):
                setattr(cls, "test_{:s}_{:s}".format(func.__name__, dataName), func)

    @classmethod
    def getTests(cls):
        raise(NotImplementedError("This must be implemented"))

class TestCluster(TestGenerator):

    TEST_CASES = []

    @staticmethod
    def getTests(dataName, dataValue):

        def makeTest(case):

            @rename("{:s}".format(case["name"]))
            def test(self):
                # Do things with self, case, data
                pass

            return test

        return [makeTest(c) for c in TestCluster.TEST_CASES]

TestCluster.generateTests()

The TestGenerator class can be used to spawn different sets of test cases like TestCluster.

TestCluster can be thought of as an implementation of the TestGenerator interface.

bcdan
  • 1,438
  • 1
  • 12
  • 25
0

The metaclass-based answers still work in Python 3, but instead of the __metaclass__ attribute, one has to use the metaclass parameter, as in:

class ExampleTestCase(TestCase, metaclass=DocTestMeta):
    pass
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Patrick Ohly
  • 712
  • 6
  • 8
0

I had trouble making these work for setUpClass.

Here's a version of Javier's answer that gives setUpClass access to dynamically allocated attributes.

import unittest


class GeneralTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print ''
        print cls.p1
        print cls.p2

    def runTest1(self):
        self.assertTrue((self.p2 - self.p1) == 1)

    def runTest2(self):
        self.assertFalse((self.p2 - self.p1) == 2)


def load_tests(loader, tests, pattern):
    test_cases = unittest.TestSuite()
    for p1, p2 in [(1, 2), (3, 4)]:
        clsname = 'TestCase_{}_{}'.format(p1, p2)
        dct = {
            'p1': p1,
            'p2': p2,
        }
        cls = type(clsname, (GeneralTestCase,), dct)
        test_cases.addTest(cls('runTest1'))
        test_cases.addTest(cls('runTest2'))
    return test_cases

Outputs

1
2
..
3
4
..
----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
hhquark
  • 379
  • 3
  • 10
-1

Besides using setattr, we can use load_tests with Python 3.2 and later.

import os
import unittest
from unittest import TestSuite

class Test(unittest.TestCase):
    pass

def _test(self, file_name):
    with open(file_name, 'r') as f:
        self.assertEqual('test result', f.read())

def _generate_test(file_name):
    def test(self):
        _test(self, file_name)
    return test

def _generate_tests():
    for file in files:
        file_name = os.path.splitext(os.path.basename(file))[0]
        setattr(Test, 'test_%s' % file_name, _generate_test(file))

test_cases = (Test,)

def load_tests(loader, tests, pattern):
    _generate_tests()
    suite = TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite

if __name__ == '__main__':
    _generate_tests()
    unittest.main()
pptime
  • 21
  • 6
  • The link is broken (DNS?): *"Hmm. We’re having trouble finding that site. We can’t connect to the server at blog.livreuro.com."* – Peter Mortensen Jan 29 '21 at 22:17
-1

Following is my solution. I find this useful when:

  1. It should work for unittest.TestCase and unittest discover.

  2. There is a set of tests to be run for different parameter settings.

  3. It is very simple, with no dependency on other packages.

     import unittest

     class BaseClass(unittest.TestCase):
         def setUp(self):
             self.param = 2
             self.base = 2

         def test_me(self):
             self.assertGreaterEqual(5, self.param + self.base)

         def test_me_too(self):
             self.assertLessEqual(3, self.param + self.base)


     class Child_One(BaseClass):
         def setUp(self):
             BaseClass.setUp(self)
             self.param = 4


     class Child_Two(BaseClass):
         def setUp(self):
             BaseClass.setUp(self)
             self.param = 1
    
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
S.Arora
  • 263
  • 3
  • 7
-1
import unittest

def generator(test_class, a, b,c,d,name):
    def test(self):
        print('Testexecution=',name)
        print('a=',a)
        print('b=',b)
        print('c=',c)
        print('d=',d)

    return test

def add_test_methods(test_class):
    test_list = [[3,3,5,6, 'one'], [5,5,8,9, 'two'], [0,0,5,6, 'three'],[0,0,2,3,'Four']]
    for case in test_list:
        print('case=',case[0], case[1],case[2],case[3],case[4])
        test = generator(test_class, case[0], case[1],case[2],case[3],case[4])
        setattr(test_class, "test_%s" % case[4], test)


class TestAuto(unittest.TestCase):
    def setUp(self):
        print ('Setup')
        pass

    def tearDown(self):
        print ('TearDown')
        pass

add_test_methods(TestAuto)

if __name__ == '__main__':
    unittest.main(verbosity=1)
The Godfather
  • 4,235
  • 4
  • 39
  • 61
thangaraj1980
  • 141
  • 2
  • 11