
Currently I have a scenario where a resource-intensive task runs in each test. What I want is for this task to execute only after all tests have generated their data, with each test pausing until then, and finally to assert a condition in each test. For example, the task (let's call it foo) is currently executed like this:

foo(var):
  SendAPIrequest_to_store(var) #--->> this takes a long time

test_A():
  var = generated data
  foo(var)
  assert generated data present in server

test_B():
  var = generated value
  foo(var)
  assert generated data present in server

....

test_X():
  var = generated value
  foo(var)
  assert generated data present in server



The server can, let's say, accept multiple pieces of data at once, so instead of running the whole task once per test I am looking for a way to achieve the following behaviour:

add_data(x):
  val = val + "," + x

foo():
  # this function runs once, after every test has added its data (all tests are suspended until then)
  SendAPIrequest_to_store(val) #----> this takes a long time
  # then test execution resumes, in sequence, from where each test paused:
  # assert the data for test_A, then for test_B, and so on



test_A():
  var = generated data
  add_data(var)
  PAUSE and move to the next test
  assert generated data present in server

test_B():
  var = generated value
  add_data(var)
  PAUSE and move to the next test
  assert generated data present in server

....

test_X():
  var = generated value
  add_data(var)
  PAUSE and move to the next test
  assert generated data present in server

Any help or pointers will be greatly appreciated. Thank you in advance.

devabhishekpal
  • It is possible to run tests in parallel using pytest using [pytest-xdist](https://pypi.org/project/pytest-xdist/) or [pytest-parallel](https://pypi.org/project/pytest-parallel/). Maybe that will get you what you want? – larsks Apr 10 '23 at 14:46

2 Answers


It is actually possible to run a time-consuming function once, after specific actions (such as generating data) have been completed in every test. However, this solution should be considered a workaround rather than a proper implementation (see the notice at the end).

The first step is to define two module-scoped fixtures that share methods and data across the tests in the same Python file.

@pytest.fixture(scope="module")
def post_asserts() -> List[Callable[[], None]]:
    return []


@pytest.fixture(scope="module")
def data(post_asserts: List[Callable[[], None]]) -> Generator[List[str], None, None]:
    d = [] # create a container for aggregating data in tests
    
    yield d
    
    foo(d) # time consuming step, that is run once.

    # running all post assert statements from test in the module
    for pa in post_asserts:
        pa()

More precisely:

  • the post_asserts fixture is a list of callbacks. Each test defines its post-assert function (a function consisting mainly of assert statements) as an inner function and is responsible for appending it to this list.
  • the data fixture is a list acting as a container for aggregating data across tests. The container is yielded, which lets code run when the fixture is torn down (at the end of the module, since the fixture is module-scoped, see this post). Here, the teardown calls foo once and then runs all the post-assert callbacks.

Each test that uses the foo function must add its data and its post-assert function to the corresponding fixture containers. For instance:

def test_A(post_asserts: List[Callable[[], None]], data: List[str]):
    data.append(any_generated_data_A)

    def test_A_post_asserts():
        assert True

    post_asserts.append(test_A_post_asserts)


def test_B(post_asserts: List[Callable[[], None]], data: List[str]):
    data.append(any_generated_data_B)

    def test_B_post_asserts():
        assert True

    post_asserts.append(test_B_post_asserts)


def test_X(post_asserts: List[Callable[[], None]], data: List[str]):
    data.append(any_generated_data_X)

    def test_X_post_asserts():
        assert False

    post_asserts.append(test_X_post_asserts)

When the tests run, each test appends its data to the shared data container and its callback to the shared post_asserts container. During the teardown of the data fixture, the foo function is called with the aggregated content of the data container, and each post-assert callback is run. This teardown happens exactly once.
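The foo function itself is left abstract here; it is simply whatever sends the batched request to your server (SendAPIrequest_to_store in the question). As a rough, hypothetical sketch, where the endpoint, payload shape, and use of the requests library are all assumptions, a batched version could look like:

from typing import List

import requests  # assumption: the server exposes an HTTP API


def foo(values: List[str]) -> None:
    # One slow call for the whole batch instead of one call per test.
    # The URL and payload shape are hypothetical placeholders.
    response = requests.post("https://server.example/store", json={"values": values}, timeout=60)
    response.raise_for_status()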

An important caveat is that asserting after the tests have run is not standard. pytest will not report the test itself as failed; instead it reports an error in the final stage of the "pipeline". Since each post-assert function is defined in the test body, you can still tell which test failed. However, the first failing post-assert prevents the subsequent ones from running. A quick workaround is a try/except, collecting the stack traces with traceback (see this post):

@pytest.fixture(scope="module")
def data(post_asserts: List[Callable[[], None]]) -> Generator[List[str], None, None]:
    d = []
    yield d
    foo(d)

    exc_container: List[str] = []
    for pa in post_asserts:
        try:
            pa()
        except:
            exc_container.append(traceback.format_exc())

    if len(exc_container) > 0:
        raise AssertionError([str(exc) for exc in exc_container])

ftorre

Your tests are just regular functions, so you can call them manually, for example:

import pytest

def test_a():
    a = 1
    test_b()
    assert 1 == 1

def test_b():
    a = 2
    assert 2 == 1

However, I would not recommend your approach. Having tests depend on each other is not ideal. If I understand your problem correctly, instead of collecting tests, using this "mechanism" to set a variable, and then resuming all the tests, you could simply have a step that executes before all the tests. This step would set whatever needs to be set, and then the tests would run and their order would not matter.

To achieve that, you could use fixtures with autouse=True and scope="session".

For example:

import pytest

@pytest.fixture(scope="session")
def value():
    return []

# this fixture runs automatically, once at the start of the test session
@pytest.fixture(autouse=True, scope="session")
def first(value):
    value.append(1)
    value.append(2)
    return value

def test_a(value):
    assert value[0] == 1

def test_b(value):
    assert value[1] == 2
Itération 122442
  • Hello, I think there might have been a misunderstanding. The values are not directly generated; each value is randomly generated for each test and its parametrization, hence the final values are only known at runtime. Do you think we can yield from the tests, or use the tests as a kind of generator? – devabhishekpal Apr 24 '23 at 11:19
  • Sorry for the error @devabhishekpal. You should provide a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) with what you get so far and what you expect, which would make it much easier for us to understand what you want and how to help you. – Itération 122442 Apr 24 '23 at 11:26
  • This approach using fixtures looks good to me, and there should be a way to make it work using randomly generated values. If you require further parameterization, I would look into the approach described here: https://stackoverflow.com/questions/42676681/pytest-and-yield-based-tests – austinfromboston Apr 25 '23 at 18:40
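Following up on the comments above: even when each value is only known at runtime (for example, randomly generated per parametrized test), the module-scoped fixture approach from the first answer still applies, because each test simply appends whatever it generated. Below is a minimal, self-contained sketch; the in-memory _server_store and the trivial foo are stand-ins invented purely for illustration.

import random
from typing import Callable, Generator, List

import pytest

# Toy in-memory stand-in for the real server, so the sketch runs on its own.
_server_store: List[str] = []


def foo(values: List[str]) -> None:
    # In reality this would be the single, slow SendAPIrequest_to_store call.
    _server_store.extend(values)


@pytest.fixture(scope="module")
def post_asserts() -> List[Callable[[], None]]:
    return []


@pytest.fixture(scope="module")
def data(post_asserts: List[Callable[[], None]]) -> Generator[List[str], None, None]:
    d: List[str] = []
    yield d
    foo(d)  # runs once, after every test in the module has appended its data
    for pa in post_asserts:
        pa()


@pytest.mark.parametrize("run", range(3))
def test_random_value(run: int, data: List[str], post_asserts: List[Callable[[], None]]) -> None:
    value = f"value-{run}-{random.randint(0, 1000)}"  # only known at runtime
    data.append(value)

    def check() -> None:
        assert value in _server_store

    post_asserts.append(check)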