36

I'm creating test cases for web tests using Jenkins, Python, Selenium 2 (WebDriver) and the py.test framework.

So far I'm organizing my tests in the following structure:

each class is a Test Case and each test_ method is a Test Step.
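
For illustration, here is a stripped-down sketch of that layout (the class and method names are made up):

class TestUserRegistration:                 # Test Case
    def test_open_signup_page(self):        # Test Step 1
        pass

    def test_submit_registration(self):     # Test Step 2
        pass

    def test_receive_welcome_email(self):   # Test Step 3
        pass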

This setup works GREAT when everything is working fine. However, when one step crashes, the rest of the "Test Steps" go crazy. I'm able to contain the failure inside the class (Test Case) with the help of teardown_class(), but I'm looking into how to improve this.

What I need is to somehow skip (or xfail) the rest of the test_ methods within a class if one of them has failed, so that they are not run and marked as FAILED (since that would be a false positive).

Thanks!

UPDATE: I'm not looking for the answer "it's bad practice", since calling it that is very arguable (each Test Class is independent, and that should be enough).

UPDATE 2: Putting an "if" condition in each test method is not an option: it's a LOT of repeated work. What I'm looking for is (maybe) a way to use hooks on the class methods.

Alex Okrushko
  • Can't you just set the --maxfail flag to 1? That would make py.test end if one test fails. – Nacht Nov 13 '12 at 21:53
  • @Nacht the idea is to continue testing other test cases even if one of them has failed, but to stop any test steps in the failed test case – Alex Okrushko Nov 13 '12 at 23:41

9 Answers

31

I like the general "test-step" idea. I'd term it "incremental" testing, and it makes the most sense in functional testing scenarios IMHO.

Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions). Copy this into your conftest.py:

import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # remember the failing test item on its parent (the class)
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        # a sibling test already failed: xfail this one instead of running it
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

If you now have a "test_step.py" like this:

import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass
    def test_modification(self):
        assert 0
    def test_deletion(self):
        pass

then running it looks like this (using -rx to report on xfail reasons):

(1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
collected 3 items

test_step.py .Fx

=================================== FAILURES ===================================
______________________ TestUserHandling.test_modification ______________________

self = <test_step.TestUserHandling instance at 0x1e0d9e0>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:8: AssertionError
=========================== short test summary info ============================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================

I am using "xfail" here because skips are rather meant for wrong environments, missing dependencies or wrong interpreter versions.

Edit: Note that neither your example nor my example would directly work with distributed testing. For this, the pytest-xdist plugin needs to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode, which usually sends the test functions of a class to different slaves.

hpk42
  • Thanks Holger. I knew that I was probably going too deep into the py.test code with my solution. :) I'll definitely try out your suggestion. Also, I am creating the functional test cases for web UI automation (I've been saying that a lot lately, because I feel like I'm "stirring up the hornet's nest" with my "test-step" implementation) :) As for the "pytest-xdist" plugin - I haven't tried it yet, however I thought it distributes the modules. In any case, I can use **Jenkins** to distribute the tests among slaves. – Alex Okrushko Sep 25 '12 at 11:56
  • Also, since each *step* modifies the state of the application even further, one general class teardown is not really an option. That's why I'm trying to incrementally grow the class "teardown". (as in here http://stackoverflow.com/questions/12538808/pytest-2-3-adding-teardowns-within-the-class/12573949#12573949) I know you've already posted the reply there, however I don't think that it answers my question. :) I posted my own reply, can you comment on it? thanks – Alex Okrushko Sep 25 '12 at 12:00
  • Holger I just came across this post because I've found myself in a position of needing this! It would be awesome if this were built into a plugin or part of pytest itself :) – James Mills Dec 18 '13 at 04:25
  • @hpk42 How does the "ordering" work with this approach? Is it "undefined"? – James Mills Dec 18 '13 at 04:29
  • @JamesMills in single-process runs pytest does "source-order", i.e. tests are run in the order in which they appear in the source file. – hpk42 Dec 19 '13 at 05:36
  • 2
    @hpk42: This is good to know ;) Is this on the pytest docs? :) – James Mills Dec 20 '13 at 00:43
  • @JamesMills, yes, https://docs.pytest.org/en/latest/example/simple.html#incremental-testing-test-steps – Edvard Krol Jan 21 '19 at 09:47
12
gbonetti
5

The pytest -x option will stop the test run after the first failure: pytest -vs -x test_sample.py

Parth Naik
3

It's generally bad practice to do what you are doing. Each test should be as independent as possible from the others, whereas here you completely depend on the results of the other tests.

Anyway, reading the docs, it seems that a feature like the one you want is not implemented (probably because it wasn't considered useful).

A workaround could be to "fail" your tests by calling a custom method which sets some condition on the class, and to mark each test with the skipif decorator:

import unittest

import pytest

class MyTestCase(unittest.TestCase):
    skip_all = False  # class-level flag shared by all test methods

    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_A(self):
        ...
        if failed:  # pseudocode: 'failed' is whatever your test logic detects
            MyTestCase.skip_all = True

    @pytest.mark.skipif("MyTestCase.skip_all")
    def test_B(self):
        ...
        if failed:
            MyTestCase.skip_all = True

Or you could perform this check before running each test and call pytest.skip() if needed.

edit: Marking as xfail can be done in the same way, but using the corresponding function calls.

Probably, instead of rewriting the boilerplate code for each test, you could write a decorator (this would probably require your methods to return a "flag" stating whether they failed or not).
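
For example, a minimal sketch of such a decorator, assuming each step returns a truthy "failed" flag (the helper name skip_if_previous_step_failed is made up for illustration):

import functools

import pytest

def skip_if_previous_step_failed(test_method):
    """Skip this step if an earlier step in the class already failed,
    and remember the failure when the wrapped step reports one."""
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        if getattr(type(self), "skip_all", False):
            pytest.skip("a previous step in this test case failed")
        failed = test_method(self, *args, **kwargs)  # the step returns a truthy flag on failure
        if failed:
            type(self).skip_all = True
    return wrapper

Each test_ method would then be decorated with @skip_if_previous_step_failed instead of repeating the if block by hand.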

Anyway, I'd like to point out that, as you state, if one of these tests fails then the other failing tests in the same test case should be considered false positives... but you can do this "by hand": just check the output and spot the false positives. Even though this might be boring/error-prone.

Bakuriu
  • Thank you for your answer. Maybe I see it differently, however I've encountered a couple of situations where making tests fully independent a) doesn't make sense: for example, I test "user". The test cases are: 1) create user 2) modify user 3) delete user. In the "create user" test case I'd then have to do the "teardown" - delete the user - which is another test case to begin with. Creating a "Delete user" test case then doesn't make sense. However, uniting these two test cases into one is not right from the "test case" point of view: maybe creation succeeds, but deletion fails. What to do? – Alex Okrushko Sep 13 '12 at 18:21
  • b) could not be done because of a "time-consuming" precondition: say some test requires the PC to be in a certain "state". This "state" is controlled through the web site, and reaching it takes 2-3 hours. After that I run the rest of the tests in this PC state. In this case I cannot put the PC in and out of this state for each test case - it would take weeks for the suite to complete. – Alex Okrushko Sep 13 '12 at 18:25
  • 2
    Most of these situations can be avoided using mocks. You don't want to depend on a "computer condition". You should just create a mock and test how your code reacts in that situation. There are some cases where this is not possible... for example if you want to test how the program runs when RAM is almost full, etc. But I think this shouldn't be considered unit testing at all. These tests control some interaction between a global state and the program. Either you can completely control the global state, or you simply can't do _unit_ tests. – Bakuriu Sep 13 '12 at 18:29
  • Probably, that's another problem: it's not just for unit tests, it's for automation tests of the product through the UI. – Alex Okrushko Sep 13 '12 at 18:32
  • 7
    "It's generally bad practice to do what are you doing." is the worst possible response in the stackoverflow cannon of canned bad responses. As [hpk42](https://stackoverflow.com/users/137901/hpk42) notes in the [accepted answer](https://stackoverflow.com/a/12579625/2809027), intertest dependencies are a hard prerequisite for functional testing in numerous real-world scenarios – including ours, **scientific modelling.** Each sequential step of our model depends on the success of prior steps. Without intertest dependencies, our model is effectively untestable. Which is bad. – Cecil Curry May 27 '16 at 00:45
  • 1
    No, @CecilCurry, the worst possible response is the one which *doesn't* say "It's generally bad practice to do what you are doing". – jwg Sep 12 '16 at 13:26
3

You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test has failed. In your particular case, though, the incremental testing that gbonetti discussed seems more relevant.
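
For reference, a minimal sketch of how pytest-dependency is typically used (the test names are made up; see the plugin's documentation for the exact semantics):

import pytest

@pytest.mark.dependency()
def test_login():
    assert True  # replace with your logic

@pytest.mark.dependency(depends=["test_login"])
def test_modification():
    # automatically skipped if test_login failed or was skipped
    assert True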

azmeuk
1

Based on hpk42's answer, here's my slightly modified incremental mark that makes test cases xfail if the previous test failed (but not if it xfailed or was skipped). This code has to be added to conftest.py:

import pytest

# Capture the exception types raised by pytest.skip() and pytest.xfail()
# without relying on pytest internals.
try:
    pytest.skip()
except BaseException as e:
    Skipped = type(e)

try:
    pytest.xfail()
except BaseException as e:
    XFailed = type(e)

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # ignore skips and xfails: only real failures should block later steps
            if call.excinfo.type in {Skipped, XFailed}:
                return

            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    previousfailed = getattr(item.parent, "_previousfailed", None)
    if previousfailed is not None:
        pytest.xfail("previous test failed (%s)" % previousfailed.name)

And then a collection of test cases has to be marked with @pytest.mark.incremental:

import pytest

@pytest.mark.incremental
class TestWhatever:
    def test_a(self):  # this will pass
        pass

    def test_b(self):  # this will be skipped
        pytest.skip()

    def test_c(self):  # this will fail
        assert False

    def test_d(self):  # this will xfail because test_c failed
        pass

    def test_e(self):  # this will xfail because test_c failed
        pass
Aran-Fey
0

UPDATE: Please take a look at @hpk42's answer. His answer is less intrusive.

This is what I was actually looking for:

from _pytest.runner import runtestprotocol
import pytest
from _pytest.mark import MarkInfo

def check_call_report(item, nextitem):
    """
    If a test method fails, mark the rest of the test methods as 'skip'.
    Also, if any of the methods is marked as 'pytest.mark.blocker',
    interrupt further testing.
    """
    reports = runtestprotocol(item, nextitem=nextitem)
    for report in reports:
        if report.when == "call":
            if report.outcome == "failed":
                # note: this relies on pytest internals (_collected, _request)
                remaining = item.parent._collected[item.parent._collected.index(item):]
                for test_method in remaining:
                    test_method._request.applymarker(pytest.mark.skipif("True"))
                    if 'blocker' in test_method.keywords and isinstance(test_method.keywords.get('blocker'), MarkInfo):
                        item.session.shouldstop = "blocker issue has failed or was marked for skipping"
            break

def pytest_runtest_protocol(item, nextitem):
    # hook into the run protocol: log the start, run the item and inspect its report
    item.ihook.pytest_runtest_logstart(
        nodeid=item.nodeid, location=item.location,
    )
    check_call_report(item, nextitem)
    return True

Now adding this to conftest.py or as a plugin solves my problem.
It's also improved to STOP testing if a blocker test has failed (meaning that all further tests would be useless).
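
For illustration, a test marked as a blocker would look roughly like this (the class and test names are made up):

import pytest

class TestInitialSetup:
    @pytest.mark.blocker
    def test_create_admin_account(self):
        # if this step fails, the hook above sets session.shouldstop
        # and the whole run is interrupted
        assert True

    def test_configure_preferences(self):
        assert True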

Alex Okrushko
  • I can see the usefulness of a "test-step" organization like you propose and use. The implementation is slightly hacky, though :) See my example for an improvement. – hpk42 Sep 25 '12 at 09:00
0

Or, quite simply, instead of calling py.test from cmd (or tox or wherever), just call:

py.test --maxfail=1

see here for more switches: https://pytest.org/latest/usage.html
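
If you would rather not type the flag every time, the same option can also be set once in the project configuration via addopts (a minimal sketch, assuming your project uses a pytest.ini):

[pytest]
addopts = --maxfail=1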

theQuestionMan
0

To complement hpk42's answer, you can also use pytest-steps to perform incremental testing; this can help you in particular if you wish to share some kind of incremental state/intermediate results between the steps.

With this package you do not need to put all the steps in a class (you can, but it is not required); simply decorate your "test suite" function with @test_steps:

from pytest_steps import test_steps

def step_a():
    # perform this step ...
    print("step a")
    assert not False  # replace with your logic

def step_b():
    # perform this step
    print("step b")
    assert not False  # replace with your logic

@test_steps(step_a, step_b)
def test_suite_no_shared_results(test_step):
    # Execute the step
    test_step()

You can add a steps_data parameter to your test function if you wish to share a StepsDataHolder object between your steps.

import pytest
from pytest_steps import test_steps, StepsDataHolder

def step_a(steps_data):
    # perform this step ...
    print("step a")
    assert not False  # replace with your logic

    # intermediate results can be stored in steps_data
    steps_data.intermediate_a = 'some intermediate result created in step a'

def step_b(steps_data):
    # perform this step, leveraging the previous step's results
    print("step b")

    # you can leverage the results from previous steps... 
    # ... or pytest.skip if not relevant
    if len(steps_data.intermediate_a) < 5:
        pytest.skip("Step b should only be executed if the text is long enough")

    new_text = steps_data.intermediate_a + " ... augmented"  
    print(new_text)
    assert len(new_text) == 56

@test_steps(step_a, step_b)
def test_suite_with_shared_results(test_step, steps_data: StepsDataHolder):

    # Execute the step with access to the steps_data holder
    test_step(steps_data)

Finally, you can automatically skip or fail a step if another one has failed using @depends_on; check the documentation for details.
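
A rough sketch of what that looks like (see the pytest-steps documentation for the exact import and signature of depends_on, which are only assumed here):

from pytest_steps import test_steps, depends_on

def step_a():
    print("step a")
    assert not False  # replace with your logic

@depends_on(step_a)  # assumed signature: the prerequisite step(s) as arguments
def step_b():
    print("step b")
    assert not False  # replace with your logic

@test_steps(step_a, step_b)
def test_suite_with_dependency(test_step):
    # Execute the step
    test_step()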

(I'm the author of this package by the way ;) )

smarie