
I'm looking to test a software product with Behave. To start with, I will attempt to install the software package, and then perform tests on it. Clearly, if the package doesn't install, then there's no point doing all the other tests (and I do have to test the packaging, as much as the software). Is there a way to either make all other tests dependent on the first ones, or else auto-fail all the others if one of the first tests fails?

I'm aware of the --stop option, but it doesn't fit here: if my package installs, I want all further tests to run, and if some of those fail, I want the rest to keep running. I also see the Behave devs don't want that functionality to be "programmable" from within the tests.

I'm also aware that tests shouldn't depend on each other. I'm slightly at a loss how to approach the problem without some amount of this though. I'm happy to make each test isolated from the others by setting up all the things I need to perform the test first, but it's a bit impractical to make the installation part of that plan.

Likewise, I could put all my tests into a single scenario and use the background to do the installation. This also is rather impractical(!).

Is there a way to put tests in a hierarchy, or some other map of dependencies? Or is there a completely different way to approach a problem such as this?

Ralph Bolton

1 Answer


For me, the easiest way to achieve this is with the before_scenario environment hook: check whether a certain flag is set, and if it is, skip the scenario, as explained here (How do I skip a test in the behave python BDD framework?).

For example, in the feature file:

Scenario: A
   Given Step 1
   When Step 2
   Then Step 3

Scenario: B
   Given Step 4
   When Step 5
   Then Step 6

Then in the step implementation and in environment.py:

# steps/package_steps.py
from behave import then

@then('Step 3')
def step_impl(context):
    if package_is_not_installed():  # your own check of the install result
        # Mutate the dict created in before_all; a plain attribute assignment
        # would be discarded when the scenario's context layer is popped.
        context.skip_state["skip_all_scenarios"] = True
        raise AssertionError("The package did not install")

# environment.py
def before_all(context):
    # Created at the outermost layer so it survives across scenarios.
    context.skip_state = {"skip_all_scenarios": False}

def before_scenario(context, scenario):
    if context.skip_state["skip_all_scenarios"]:
        scenario.skip("The package is not installed")