Short context:
In my company, we use a script to produce a series of PDF files. The script reads the list of PDFs it should build from a JSON file. I was tasked with creating an automated test so that we catch missing files when the build (and the script) run in Jenkins. I've created a test that loads the JSON file and uses that information to call a step which checks whether each file exists. My problem: I want to reproduce the behaviour I get with a scenario outline, which is not only to assert on each value, but also to print to the output which files were tested. In a nutshell: I want a Scenario Outline with dynamic examples (unfortunately the linked question DOES NOT provide an answer).
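For illustration, the real test boils down to something like this minimal sketch (the manifest layout, the "files" key and both helper names are invented for this example, not our actual schema):

```python
import json
import os

def load_manifest(path):
    # Parse the JSON file listing the PDFs the build should produce.
    with open(path) as fh:
        return json.load(fh)

def missing_files(pdf_paths):
    # Return every expected PDF that is absent on disk.
    return [p for p in pdf_paths if not os.path.isfile(p)]
```

The behave step then simply asserts that `missing_files(...)` is empty.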
Say that I have the following feature file:
Feature: Verify squared numbers

  Scenario Outline: Verify square for <number>
    Then the <number> squared is <result>

    Examples:
      | number | result |
      | 1      | 1      |
      | 2      | 4      |
      | 3      | 9      |
      | 4      | 16     |
And step file:
from behave import step

@step('the {number:d} squared is {result:d}')
def step_impl(context, number, result):
    assert number * number == result
I get
Feature: Verify squared numbers # x.feature:1

  Scenario Outline: Verify square for 1 -- @1.1  # x.feature:8
    Then the 1 squared is 1  # steps/x.py:10

  Scenario Outline: Verify square for 2 -- @1.2  # x.feature:9
    Then the 2 squared is 4  # steps/x.py:10

  Scenario Outline: Verify square for 3 -- @1.3  # x.feature:10
    Then the 3 squared is 9  # steps/x.py:10

  Scenario Outline: Verify square for 4 -- @1.4  # x.feature:11
    Then the 4 squared is 16  # steps/x.py:10

  Scenario: Verify something  # x.feature:13
    Given I use the data from "data.json"  # steps/x.py:5
    Then everything is allright  # steps/x.py:14

1 feature passed, 0 failed, 0 skipped
5 scenarios passed, 0 failed, 0 skipped
6 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.010s
This is pretty nice: I can see which values were tested and, more importantly, they are captured in the test reports I'm exporting.
Now I've changed my feature to this:
Feature: Verify squared numbers

  Scenario: Verify something
    Given I use the data from "data.json"
    Then everything is alright
And my step file:
from behave import step
import json

@step('I use the data from "{file}"')
def step_impl(context, file):
    with open(file) as json_file:
        context.json_data = json.load(json_file)

@step('the {number:d} squared is {result:d}')
def step_impl(context, number, result):
    assert number * number == result

@step('everything is alright')
def step_impl(context):
    for number, result in context.json_data.items():
        context.execute_steps(f'Then the {number} squared is {result}')
data.json:

{"1": 1, "2": 4, "3": 9}
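As a side note, iterating the parsed JSON yields string keys ("1", "2", …). That works here because the generated step text is re-parsed by the {number:d} pattern; the mechanics can be checked in plain Python without behave:

```python
import json

data = json.loads('{"1": 1, "2": 4, "3": 9}')
for number, result in data.items():
    # number is a str here; behave's {number:d} pattern would
    # convert it back to an int when the step text is re-matched.
    step_text = f'Then the {number} squared is {result}'
    assert step_text.startswith('Then the ')
    assert int(number) ** 2 == result
```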
And I get
Feature: Verify squared numbers # x.feature:1

  Scenario: Verify something  # x.feature:3
    Given I use the data from "data.json"  # steps/x.py:5
    Then everything is alright  # steps/x.py:14

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
2 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.006s
This counts as a single scenario with two steps, regardless of how many sub-steps were actually executed via execute_steps.
How can I get an output that's similar to the one I get with the scenario outline?
Many thanks!!