EDIT: this is an old answer using a legacy version of pytest-cases. Please look at this new answer instead
`pytest-cases` offers two ways to solve this problem:

- `@cases_data`, a decorator that you can use on your test function or fixture so that it sources its parameters from various "case functions", possibly located in various modules, and possibly themselves parametrized. The problem is that "case functions" are not fixtures, so they do not let you benefit from fixture dependencies and the setup/teardown mechanism. I rather use it to collect various cases from the file system.
- `fixture_union`, more recent but more 'pytest-y', allows you to create a fixture that is the union of two or more fixtures. This includes setup/teardown and dependencies, so it is what you would prefer here. You can create a union either explicitly with `fixture_union`, or by using `pytest_parametrize_plus` with `fixture_ref()` in the parameter values.
Here is how your example would look:
```python
import pytest
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Dataset A
DA = ['data1_a', 'data2_a', 'data3_a']
DA_data_indices = list(range(len(DA)))

@pytest_fixture_plus(scope="module")
def datasetA():
    print("setting up dataset A")
    yield DA
    print("tearing down dataset A")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', DA_data_indices, ids="idx={}".format)
def data_from_datasetA(datasetA, data_index):
    return datasetA[data_index]

# ------ Dataset B
DB = ['data1_b', 'data2_b']
DB_data_indices = list(range(len(DB)))

@pytest_fixture_plus(scope="module")
def datasetB():
    print("setting up dataset B")
    yield DB
    print("tearing down dataset B")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', DB_data_indices, ids="idx={}".format)
def data_from_datasetB(datasetB, data_index):
    return datasetB[data_index]

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_datasetA'),
                                  fixture_ref('data_from_datasetB')])
def test_databases(data):
    # do test
    print(data)
```
Of course, you may wish to handle any number of datasets dynamically. In that case you have to generate all the alternative fixtures dynamically, because pytest has to know in advance how many tests will be executed. This works quite well:
```python
import pytest
from makefun import with_signature
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
datasets_indices = {dn: range(len(dc)) for dn, dc in datasets.items()}

# ------ Datasets fixture generation
def create_dataset_fixture(dataset_name):
    @pytest_fixture_plus(scope="module", name=dataset_name)
    def dataset():
        print("setting up dataset %s" % dataset_name)
        yield datasets[dataset_name]
        print("tearing down dataset %s" % dataset_name)

    return dataset

def create_data_from_dataset_fixture(dataset_name):
    @pytest_fixture_plus(name="data_from_%s" % dataset_name, scope="module")
    @pytest.mark.parametrize('data_index', datasets_indices[dataset_name],
                             ids="idx={}".format)
    @with_signature("(%s, data_index)" % dataset_name)
    def data_from_dataset(data_index, **kwargs):
        # the dataset fixture arrives under its dynamic name, in kwargs
        dataset = kwargs.popitem()[1]
        return dataset[data_index]

    return data_from_dataset

for dataset_name in datasets_indices:
    globals()[dataset_name] = create_dataset_fixture(dataset_name)
    globals()["data_from_%s" % dataset_name] = create_data_from_dataset_fixture(dataset_name)

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_%s' % n)
                                  for n in datasets_indices.keys()])
def test_databases(data):
    # do test
    print(data)
```
Both provide the same output:
```
setting up dataset DA
data1_a
data2_a
data3_a
tearing down dataset DA
setting up dataset DB
data1_b
data2_b
tearing down dataset DB
```
EDIT: there might be a simpler solution if the setup/teardown procedure is the same for all datasets, using `param_fixtures`. I'll try to post that soon.
EDIT 2: actually the simpler solution I was referring to seems to lead to multiple setups/teardowns, as you already noted in the accepted answer:
```python
from pytest_cases import pytest_fixture_plus, param_fixtures

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
was_setup = {
    'DA': False,
    'DB': False
}
data_indices = {_dataset_name: list(range(len(_dataset_contents)))
                for _dataset_name, _dataset_contents in datasets.items()}

param_fixtures("dataset_name, data_index",
               [(_dataset_name, _data_idx) for _dataset_name in datasets
                for _data_idx in data_indices[_dataset_name]],
               scope='module')

@pytest_fixture_plus(scope="module")
def dataset(dataset_name):
    print("setting up dataset %s" % dataset_name)
    assert not was_setup[dataset_name]
    was_setup[dataset_name] = True
    yield datasets[dataset_name]
    print("tearing down dataset %s" % dataset_name)

@pytest_fixture_plus(scope="module")
def data(dataset, data_index):
    return dataset[data_index]

# ------ Test
def test_databases(data):
    # do test
    print(data)
```
I opened a ticket on pytest-dev to better understand why: pytest-dev#5457
See the documentation for details. (I'm the author, by the way.)