
In my scenario, I have one test that writes a file, and one (but potentially many more) test that wants to read that file. I cannot simply extract the file writing into a function/fixture, because it involves other fixtures that internally start another binary, and it is that binary that writes the file. So I have a fixture that checks whether the file is already there.

What I tried so far:

  • the flaky and pytest-rerunfailures plugins - not suitable, as they both rerun the test immediately on failure (when the file still isn't there), whereas I want to append it to the end of the test queue.
  • manually modifying the test queue, like this:

...

# re-queue this test at the end of the session's item list,
# then report it as xfail for now
request.session.items.append(request.node)
pytest.xfail("file not present yet")
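In full, the fixture looks roughly like this (the path and fixture name are placeholders, not my real code):

import os
import pytest

OUTPUT_FILE = "/tmp/output.dat"  # placeholder for the file the binary writes

@pytest.fixture
def saved_file(request):
    # if the writer test has not produced the file yet, defer this test
    if not os.path.exists(OUTPUT_FILE):
        request.session.items.append(request.node)
        pytest.xfail("file not present yet")
    return OUTPUT_FILE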

this kind of works, but only when I run on a single runner (without xdist, or with xdist disabled by passing the -n0 cli arg). In my test report I see something like this:

test_load_file_before_save xfail
test_save_file PASSED        
test_load_file PASSED        
test_load_file_before_save PASSED    

When run with xdist, the xfailed test does not get repeated. Does anyone know how to proceed? Is there some way to force xdist to refresh the test list?

murison
  • Tests should be isolated from one another. You might want to prepare this file explicitly for each test so that they stay independent. In other words, if you need this file to exist, you write some contents into it; if you need it to be absent, you delete it – Eir Nym Oct 13 '18 at 10:01
  • I am aware of that; however, I have my reasons for such a flow. Please focus on the actual question, which is how to alter the test queue at runtime when using xdist – murison Oct 13 '18 at 15:22
  • OK, another way to solve this is to specify a particular order of tests, to make sure that a given test meets all its requirements – Eir Nym Oct 13 '18 at 16:14
  • Yeah, but this also doesn't work with xdist. – murison Oct 13 '18 at 18:20
  • You can absolutely have fixtures require other fixtures. You can have the file-writing fixture call the binary fixture that writes the file. This could make your "file writing" test much simpler (since most of the work is moved to the fixture), perhaps only validating the file writing. The rest of your tests that read the file will also get it from the fixture, regardless of order (see the sketch after these comments). – Zim Mar 03 '20 at 17:47
  • To focus on the question: your "reasons for such flow" don't matter in the face of the logic required by parallel processing. You want to guarantee a particular test execution order, yet parallel processing explicitly avoids explicit ordering in favor of speed. If you need tests to happen in a predictable order, don't parallelize. If you need to parallelize, then ensure resources are available to all your tests at runtime. If you can't, or don't want to, make tests self-contained, you need to move resource creation out of tests and into fixtures. Otherwise you sign up for bad test execution. – Zim Mar 03 '20 at 17:55
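A minimal sketch of what Zim describes, with a hypothetical shell command standing in for the real binary: the file-producing fixture does the work once, and both the writer and reader tests simply request it, so ordering no longer matters.

import subprocess
import pytest

@pytest.fixture(scope="session")
def output_file(tmp_path_factory):
    # hypothetical stand-in for the real binary: any command that
    # writes the file the tests need
    path = tmp_path_factory.mktemp("data") / "output.txt"
    subprocess.run(["sh", "-c", f"echo payload > {path}"], check=True)
    return path

def test_save_file(output_file):
    # the "writing" test now only validates the result
    assert output_file.exists()

def test_load_file(output_file):
    # reader tests get the file from the same fixture, regardless of order
    assert output_file.read_text().strip() == "payload"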

2 Answers


You can use pytest's built-in cache (request.config.cache) to get the test run status and append that test to the queue in case of failure.

# cache keys are strings, and cache.get() takes a default value
if request.config.cache.get(request.node.nodeid, None):
    request.session.items.append(request.node)
    pytest.xfail("file not present yet")

You can also set custom values in the pytest cache, to be shared across different runs, with request.config.cache.set(key, value).
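Putting the two together, a rough sketch of such a fixture (the file path and cache key are arbitrary placeholders):

import os
import pytest

OUTPUT_FILE = "/tmp/output.dat"  # placeholder path

@pytest.fixture
def saved_file(request):
    cache = request.config.cache
    key = "file_tests/" + request.node.nodeid  # arbitrary cache key
    if not os.path.exists(OUTPUT_FILE):
        # remember that this test was deferred, then re-queue it
        cache.set(key, True)
        request.session.items.append(request.node)
        pytest.xfail("file not present yet")
    # the file is there now; clear the flag for the next run
    cache.set(key, False)
    return OUTPUT_FILE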

If the file you are writing is inside the test directory, you can use the --looponfail switch of pytest-xdist. It watches the directory and re-runs the failing tests until they all pass. From the documentation: distributed and subprocess testing: -f, --looponfail: run tests in subprocess, wait for modified files and re-run failing test set until all pass.
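For example:

pytest --looponfail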

A link that could be helpful: Pytest-cache

As a friendly suggestion, I would recommend making your tests independent of each other if you plan to run them in parallel.

SilentGuy

Install the package pytest-rerunfailures:

pip install pytest-rerunfailures

See the docs on how to use the rerun feature with pytest.

Rerun a specified number of times

To re-run all test failures, use the --reruns command line option with the maximum number of times you'd like the tests to run:

pytest --reruns 5

Failed fixture or setup_class will also be re-executed.

Add a delay between re-runs

To add a delay between re-runs, use the --reruns-delay command line option with the number of seconds that you would like to wait before the next test re-run is launched:

pytest --reruns 5 --reruns-delay 1
Rugwed
  • This will rerun the tests, but not at the end of the test run. Basically, it will rerun the failed test sequentially 5 times, waiting between each rerun. – Assi.NET May 17 '23 at 08:12