
I need to organize my test cases because I have a large test suite. I can't seem to get a test in one Python class to be skipped when a test it depends on in another Python class fails.

Here is my basic setup:

import pytest

class TestWorkflow1:

    @staticmethod
    @pytest.mark.dependency()
    def test_create_something():
        pass  # do some stuff

class TestNegativeWorkflowClone1:

    @staticmethod
    @pytest.mark.dependency('TestWorkflow1::test_create_something')
    def test_try_to_clone_something():
        pass  # do some stuff

TestNegativeWorkflowClone1 runs before TestWorkflow1. I have tried what was suggested in an answer to this question: Dependencies between files with pytest-dependency?

import pytest
from pytest_dependency import DependencyManager

class TestWorkflow1:
    DependencyManager.ScopeCls['module'] = DependencyManager.ScopeCls['session']

    @staticmethod
    @pytest.mark.dependency()
    def test_create_something():
        pass  # do some stuff

import pytest
from pytest_dependency import DependencyManager

class TestNegativeWorkflowClone1:
    DependencyManager.ScopeCls['module'] = DependencyManager.ScopeCls['session']

    @staticmethod
    @pytest.mark.dependency('TestWorkflow1::test_create_something')
    def test_try_to_clone_something():
        pass  # do some stuff

That didn't work either. TestNegativeWorkflowClone1 still runs before TestWorkflow1.

I tried using the filename in the dependency decorator in TestNegativeWorkflowClone1:

class TestNegativeWorkflowClone1:
    DependencyManager.ScopeCls['module'] = DependencyManager.ScopeCls['session']

    @staticmethod
    @pytest.mark.dependency('TestWorkflow1.py::test_create_something')
    def test_try_to_clone_something():
        pass  # do some stuff

Still didn't work. TestNegativeWorkflowClone1 still runs first.

  • Tests should run independently of each other, in every order, alone and in any random group. If they don't, the application will not survive real-world use. Make sure every single test consists of the four crucial steps: setup (create all test givens), run (run the code you want to test), assertions (check if the result is as expected), tear down (remove all changes from the other steps). – Klaus D. Dec 01 '18 at 16:49
  • @KlausD. I understand the 'ideal' philosophy around making tests independent. The issue is that I have to automate end-to-end workflows. That's the requirement I have in front of me. So I have incremental tests for checkpointing different things along this E2E workflow. Keeping all tests isolated and independent of one another is great for unit tests. But I am not doing that. I'm doing complex integration testing based on full workflows through an application. – Selena Dec 01 '18 at 16:56
  • Your issue is declaring the dependency wrong, you're missing the `depends` keyword. However, don't expect the scope monkeypatching to be a solution in any manner - I clearly stated in the answer you referenced that it's not a solution, just a demonstration of what's hidden in the library at the moment. – hoefling Dec 01 '18 at 17:00
  • @hoefling Thank you. I know it had to be something stupid like that. So, this now results in a skip. If I trigger that negative test case to run, however, it does not force the test case it depends on to run. Is there any way to make that happen with the current state of features in pytest-dependency? – Selena Dec 01 '18 at 17:04
  • Try using the PR mentioned in the answer, however, I'm not sure whether it will work with static methods in test classes, but worth a try. – hoefling Dec 01 '18 at 17:10
  • @hoefling Thanks. I was hoping to avoid using unmerged code for this. I'm looking into pytest-ordering and using it to force that test case to run after the one it depends on. – Selena Dec 01 '18 at 17:12
  • Yes, unfortunately `pytest-dependency` is not that mature and doesn't support anything besides the simple use cases. AFAIK `pytest-ordering` doesn't offer test dependencies, it just runs them in a given order, so tests aren't skipped on failure. – hoefling Dec 01 '18 at 17:18
  • @hoefling ugh. pytest-ordering is even less mature than pytest-dependency. It only supports ordering via assigning a test case a run number. That's messy. I need to rethink my strategy and I'm starting to see the value of KlausD's statement about not making test cases depend on each other like that. Maybe duplicating the depended test as the setup part for a setup/teardown is the best approach here. I don't like duplicating code, but it is unfortunately both a legitimate standalone test case as well as the setup condition for the negative test. – Selena Dec 01 '18 at 17:23
  • @KlausD. I am now seeing that your suggestion is probably the best approach given the lack of fully mature features in test dependencies and ordering. I'll have the depended-on test method as a standalone test case and then also use the same action verified in the depended-on test as a setup method for my negative test. – Selena Dec 01 '18 at 17:27
  • It is also important to understand that there is no such thing as a negative test. But there are tests that check proper exception handling. – Klaus D. Dec 01 '18 at 19:27
  • @KlausD. True. I use the term as a useful label to help others understand which test cases have an expected successful outcome and which ones are supposed to result in an error condition or disallowed action. It's useful to help others who are in the weeds of this app see at a high level how well the functionality is covered overall. – Selena Dec 01 '18 at 20:02
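
Following up on the comments above: as hoefling points out, the marker in the question passes the dependency name positionally instead of through the depends keyword. A minimal sketch of the corrected marker, assuming both classes live in the same test module (pytest-dependency's default scope is the module), might look like this:

import pytest

class TestWorkflow1:

    @staticmethod
    @pytest.mark.dependency()
    def test_create_something():
        pass  # create the thing the other test depends on

class TestNegativeWorkflowClone1:

    @staticmethod
    @pytest.mark.dependency(depends=["TestWorkflow1::test_create_something"])
    def test_try_to_clone_something():
        pass  # attempt the clone and assert the expected failure

With this change the negative test is skipped when test_create_something fails or has not run, but, as discussed in the comments, it does not force the depended-on test to run first. If the classes live in different files, the dependency has to be resolved in a wider scope; at the time of the question that required the monkeypatching workaround referenced above, and later pytest-dependency releases added a scope argument to the marker for this.

For the setup-based approach that Klaus D. and Selena converge on, a rough sketch is to move the shared "create" action into a fixture, so the negative test gets its precondition from setup rather than from another test. The create_something helper and the created_thing fixture below are illustrative names, not part of the original code:

import pytest

def create_something():
    # Hypothetical helper performing the creation step of the workflow.
    return {"id": 1}

@pytest.fixture
def created_thing():
    # Runs the creation as setup for any test that requests it.
    return create_something()

class TestWorkflow1:
    def test_create_something(self, created_thing):
        # Standalone checkpoint: the creation itself succeeded.
        assert created_thing is not None

class TestNegativeWorkflowClone1:
    def test_try_to_clone_something(self, created_thing):
        # Negative case: the precondition comes from the fixture, so this
        # test no longer depends on another test having run first.
        ...  # attempt the clone and assert the expected error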
