
Short context:

In my company, we use a script to produce a series of PDF files. The script reads the list of PDFs it should build from a JSON file. I was tasked with creating an automated test so we catch missing files when the build (and the script) run in Jenkins. I've written a test that loads the JSON file and, for each entry, calls a step that checks whether the file exists. My problem is that I want to reproduce the behaviour I get with a scenario outline, which is to not only assert on each value but also print to the output which files were tested. In a nutshell: I want a Scenario Outline with dynamic examples (unfortunately the linked question does NOT provide an answer).
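For context, the real check is roughly like this sketch (the step wordings and the JSON layout, a plain list under a "files" key, are invented for illustration; only the idea matches the real script):

import json
import os

from behave import step

@step('I load the PDF list from "{file}"')
def step_impl(context, file):
    with open(file) as json_file:
        context.pdf_list = json.load(json_file)['files']

@step('every PDF was built')
def step_impl(context):
    for pdf in context.pdf_list:
        # Fails on the first missing file and names it
        assert os.path.isfile(pdf), f'Missing file: {pdf}'

To keep things simple, the rest of this question uses squared numbers instead.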

Say that I have the following feature file:

Feature: Verify squared numbers

  Scenario Outline: Verify square for <number>
    Then the <number> squared is <result>

Examples:
  | number | result |
  |   1    |    1   | 
  |   2    |    4   |
  |   3    |    9   |
  |   4    |   16   |

And this step file:

from behave import step

@step('the {number:d} squared is {result:d}')
def step_impl(context, number, result):
    assert number*number == result

I get

Feature: Verify squared numbers # x.feature:1

  Scenario Outline: Verify square for 1 -- @1.1   # x.feature:8
    Then the 1 squared is 1                       # steps/x.py:10

  Scenario Outline: Verify square for 2 -- @1.2   # x.feature:9
    Then the 2 squared is 4                       # steps/x.py:10

  Scenario Outline: Verify square for 3 -- @1.3   # x.feature:10
    Then the 3 squared is 9                       # steps/x.py:10

  Scenario Outline: Verify square for 4 -- @1.4   # x.feature:11
    Then the 4 squared is 16                      # steps/x.py:10

1 feature passed, 0 failed, 0 skipped
4 scenarios passed, 0 failed, 0 skipped
4 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.010s

This is pretty nice: I can see which values were tested (and, more importantly, they are captured in the test reports I'm exporting).

Now I've changed my feature to this:

Feature: Verify squared numbers

  Scenario: Verify something 
    Given I use the data from "data.json"
     Then everything is alright

And my step file:

from behave import step
import json

@step('I use the data from "{file}"')
def step_impl(context, file):
    with open(file) as json_file:
        context.json_data = json.load(json_file)

@step('the {number:d} squared is {result:d}')
def step_impl(context, number, result):
    assert number*number == result

@step('everything is alright')
def step_impl(context):
    for number, result in context.json_data.items():
        # Each sub-step runs, but behave does not list it as a separate step
        context.execute_steps(f'Then the {number} squared is {result}')

data.json:

{"1": 1, "2": 4, "3": 9}

And I get

Feature: Verify squared numbers # x.feature:1

  Scenario: Verify something              # x.feature:3
    Given I use the data from "data.json" # steps/x.py:5
    Then everything is alright            # steps/x.py:14

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
2 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.006s

Which reports only one scenario and two steps, regardless of how many checks were executed inside the last step.

How can I get an output that's similar to the one I get with the scenario outline?

Many thanks!!

Leonardo

1 Answer


I think this is not supported currently. One workaround is to use a Jinja template to auto-generate a scenario outline before the run (a sketch of that follows below). However, I do something a bit different to get all of the results instead of failing at the first assertion: collect all incorrect values and then assert on them at once:

@step('everything is alright')
def step_impl(context):
    results = []
    for number, result in context.json_data.items():
        expected = int(number) * int(number)
        if expected != result:
            results.append({"expected": expected, "actual": result})

    # A single assertion at the end reports every wrong value at once
    assert not results, f'Wrongly calculated values: {results}'

The results will look like this:

    Then everything is alright            # features/steps/all_steps.py:17 0.000s
      Assertion Failed: Wrongly calculated values: [{'expected': 9, 'actual': 8}, {'expected': 16, 'actual': 11}]

I know that the printout is not what you would like to see, but it is at least a solution that you can extend or change.
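As for the Jinja idea mentioned above, a minimal sketch could look like this (the template text, the output path features/squares.feature, and running it as a pre-build step are all assumptions for illustration). It renders a regular scenario outline with one Examples row per entry in data.json:

import json

from jinja2 import Template

TEMPLATE = Template("""\
Feature: Verify squared numbers

  Scenario Outline: Verify square for <number>
    Then the <number> squared is <result>

Examples:
  | number | result |
{% for number, result in data.items() %}  | {{ number }} | {{ result }} |
{% endfor %}""")

# Render the feature file before behave runs, e.g. as a Jenkins pre-step
with open('data.json') as json_file:
    data = json.load(json_file)

with open('features/squares.feature', 'w') as feature_file:
    feature_file.write(TEMPLATE.render(data=data))

Because behave then sees ordinary example rows, every value is reported as its own scenario, exactly like in your first output.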

If you just want to see in the console what's going on, maybe simple logging is enough:

logging.info(f'Then the {number} squared is {result}')
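For completeness, here is a sketch of how that line could sit in the step, with logging imported at the top of the step file (it just logs each pair before checking it):

import logging

@step('everything is alright')
def step_impl(context):
    for number, result in context.json_data.items():
        # Log the check so it shows up in the console output
        logging.info(f'Then the {number} squared is {result}')
        assert int(number) * int(number) == result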

Then you need to run behave with the --no-logcapture parameter. Perhaps you already tried it. The output will look like this:

  Scenario: Verify something              # features/square_numbers.feature:3
    Given I use the data from "data.json" # features/steps/all_steps.py:6 0.000s
    Then everything is alright            # features/steps/all_steps.py:17
INFO:root:Then the 1 squared is 1
INFO:root:Then the 2 squared is 4
INFO:root:Then the 3 squared is 8
    Then everything is alright     
automationleg
  • I actually found a way to do this, posted my answer here https://stackoverflow.com/a/66976884/6271889 – Leonardo Apr 15 '21 at 21:26