@pytest.mark.incremental
class Test_aws:

    def test_case1(self):
        # ... some setup code here ...
        result = someMethodTogetResult()
        assert result[0] == True
        orderID = result[1]

    def test_case2(self):
        # can only be performed once test_case1 has run successfully
        result = someMethodTogetResult()
        assert result == True

    def test_deleteOrder_R53HostZonePrivate(self):
        result = someMethodTogetResult()
        assert result[0] == True

The current behavior is: if test 1 passes then test 2 runs, and if test 2 passes then test 3 runs.
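
For reference, the incremental marker is not built into pytest; it is typically backed by the standard conftest.py hooks from the pytest documentation, roughly:

# conftest.py -- the usual hooks behind @pytest.mark.incremental,
# adapted from the pytest documentation
import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            # remember the failing test on its class
            item.parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            # xfail every later test in the class once one has failed
            pytest.xfail("previous test failed (%s)" % previousfailed.name)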

What I need is: test_case3 should run if test_case1 passed, regardless of whether test_case2 passed or failed; test_case2 should not change this behavior. Any thoughts here?

– Dinesh Saini
  • Perhaps it is too simple, but why not move `test_case2()` into a separate class? Tests are meant to be easily readable, and convoluted logic makes them harder to understand/reproduce. – Evgeny May 29 '18 at 12:30
  • Also, these two test classes can inherit from the same base class if you want to share the `someMethodTogetResult` method. – Evgeny May 29 '18 at 12:32
  • Test case 2 can't be executed until test case 1 is performed – Dinesh Saini May 29 '18 at 12:33
  • @EvgenyPogrebnyak it is some third-party API from AWS – Dinesh Saini May 29 '18 at 12:34
  • Makes a good case for unbundling the test cases - it is bad practice for tests not to be able to run in isolation - perhaps make a fixture that creates an independent input for test_case2 (see the sketch after these comments) – Evgeny May 29 '18 at 12:35
  • Here we have test cases that depend on the actual third-party API, not just mocks. Say test case 1 creates an EC2 instance on AWS, test case 2 stops that instance, and test case 3 deletes the instance. So even if test case 2 fails, we can delete the instance without stopping it. – Dinesh Saini May 29 '18 at 12:42
  • Now it is clearer what you are doing; I can see your intent of saving the time of creating a new EC2 instance for each test. It probably makes a good separate question, e.g. "Share the same setup for integration tests" - I'd love to see a better approach for it myself. – Evgeny May 29 '18 at 12:56
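
As suggested in the comments, a class-scoped fixture can share the expensive EC2 setup while still letting each test clean up reliably. A minimal sketch, assuming hypothetical create_ec2_instance/terminate_ec2_instance helpers in place of the real AWS calls:

import pytest

# hypothetical helpers standing in for the real third-party AWS API calls
def create_ec2_instance():
    ...

def terminate_ec2_instance(instance_id):
    ...

@pytest.fixture(scope="class")
def ec2_instance():
    # create the instance once for the whole test class ...
    instance_id = create_ec2_instance()
    yield instance_id
    # ... and always terminate it afterwards, even if a test in between failed
    terminate_ec2_instance(instance_id)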

3 Answers


I guess you are looking for pytest-dependency, which allows setting conditional run dependencies between tests. Example:

import random
import pytest


class TestAWS:

    @pytest.mark.dependency
    def test_instance_start(self):
        # randomly pass or fail to simulate a real AWS call
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_stop(self):
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_delete(self):
        assert random.choice((True, False))

test_instance_stop and test_instance_delete will run only if test_instance_start succeeds, and will be skipped otherwise. However, since test_instance_delete does not depend on test_instance_stop, the former will execute no matter what the result of the latter is. Run the example test class several times to verify the desired behaviour.

– hoefling

To complement hoefling's answer, another option is to use pytest-steps to perform incremental testing. This can help you in particular if you wish to share some kind of incremental state/intermediate results between the steps.

However, it does not implement advanced dependency mechanisms like pytest-dependency does, so use the package that better suits your goal.
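
For instance, intermediate results can be shared through the steps_data holder that pytest-steps injects; a minimal sketch (the instance id value is made up for illustration):

from pytest_steps import test_steps

def step_create(steps_data):
    # store an intermediate result for later steps
    steps_data.instance_id = "i-0123456789"  # hypothetical value

def step_delete(steps_data):
    # reuse the result produced by the previous step
    assert steps_data.instance_id == "i-0123456789"

@test_steps(step_create, step_delete)
def test_suite_with_shared_results(test_step, steps_data):
    # execute each step, passing the shared holder along
    test_step(steps_data)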

With pytest-steps, hoefling's example would be written as:

import random
from pytest_steps import test_steps, depends_on

def step_instance_start():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_stop():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_delete():
    assert random.choice((True, False))

@test_steps(step_instance_start, step_instance_stop, step_instance_delete)
def test_suite(test_step):
    # Execute the step
    test_step()

EDIT: there is a new 'generator' mode to make it even easier:

import random
from pytest_steps import test_steps, optional_step

@test_steps('step_instance_start', 'step_instance_stop', 'step_instance_delete')
def test_suite():
    # First step (Start)
    assert random.choice((True, False))
    yield

    # Second step (Stop)
    with optional_step('step_instance_stop') as stop_step:
        assert random.choice((True, False))
    yield stop_step

    # Third step (Delete)
    with optional_step('step_instance_delete') as delete_step:
        assert random.choice((True, False))
    yield delete_step

Check the documentation for details. (I'm the author of this package by the way ;) )

– smarie

You can use the pytest-ordering package to order your tests using pytest marks. The author of the package explains the usage here.

Example:

@pytest.mark.first
def test_first():
    pass

@pytest.mark.second
def test_2():
    pass

@pytest.mark.order5
def test_5():
    pass

– SilentGuy