The Story:
I'm currently in the process of unit-testing a function using hypothesis
and a custom generation strategy, trying to find a specific input that "breaks" my current solution. Here is what my test looks like:
```python
from hypothesis import given
from solution import answer

# skipping mystrategy definition - not relevant

@given(mystrategy)
def test(l):
    assert answer(l) in {0, 1, 2}
```
Basically, I'm looking for possible inputs for which the answer()
function does not return 0, 1, or 2.
Here is what my current workflow looks like:

- run the test
- hypothesis finds an input that produces an AssertionError:

```shell
$ pytest test.py
=========================================== test session starts ============================================
...
------------------------------------------------ Hypothesis ------------------------------------------------
Falsifying example: test(l=[[0], [1]])
```

- debug the function with this particular input, trying to understand whether this input/output pair is a legitimate one and the function worked correctly
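To make the workflow above concrete without pulling in hypothesis itself, here is a minimal stdlib-only sketch of what the search loop does conceptually: generate random inputs and stop at the first one that violates the property. The answer() implementation here is a hypothetical stand-in (the real one lives in solution.py), and the length-modulo rule is invented purely so that falsifying inputs exist.

```python
import random

def answer(l):
    # Hypothetical stand-in for solution.answer():
    # returns len(l) % 4, so any list whose length is 3 (mod 4)
    # is a falsifying input for the property below.
    return len(l) % 4

def find_falsifying_example(trials=1000, seed=0):
    """Mimic the search loop conceptually: try random inputs until
    the property `answer(l) in {0, 1, 2}` fails, then report the
    first counterexample found (as hypothesis reports one)."""
    rng = random.Random(seed)
    for _ in range(trials):
        l = [rng.randint(0, 9) for _ in range(rng.randint(0, 7))]
        if answer(l) not in {0, 1, 2}:
            return l  # first falsifying example
    return None  # property survived all trials
```

Like hypothesis's default behavior, this stops at the very first counterexample, which is exactly the limitation the question below is about.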
The Question:
How can I skip this generated falsifying example ([[0], [1]]
in this case) and ask hypothesis
to generate a different one?
The question can also be interpreted as: can I ask hypothesis
not to terminate when a falsifying example is found, and to generate more falsifying examples instead?
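One way to think about the desired behavior is a search loop that skips inputs already triaged by hand and keeps collecting new counterexamples instead of stopping at the first. The sketch below is stdlib-only and uses the same hypothetical answer() stand-in as above (length modulo 4); the `known` parameter plays the role that something like `hypothesis.assume(l != known_example)` would play inside the real test.

```python
import random

def answer(l):
    # Hypothetical stand-in for solution.answer()
    return len(l) % 4

def collect_falsifying_examples(known=(), trials=2000, limit=5, seed=1):
    """Keep searching past the first failure: skip inputs listed in
    `known` (already debugged by hand) and collect up to `limit`
    distinct new counterexamples to `answer(l) in {0, 1, 2}`."""
    rng = random.Random(seed)
    skip = {tuple(k) for k in known}  # inputs to ignore
    found = []
    for _ in range(trials):
        l = [rng.randint(0, 9) for _ in range(rng.randint(0, 7))]
        key = tuple(l)
        if key in skip:
            continue  # already triaged this input
        if answer(l) not in {0, 1, 2}:
            found.append(l)
            skip.add(key)  # report each distinct counterexample once
            if len(found) >= limit:
                break
    return found

# Skip the example already debugged and gather fresh ones:
examples = collect_falsifying_examples(known=[[0, 1, 2]])
```

With hypothesis itself, a quick manual workaround in this spirit is to add `assume(l != [[0], [1]])` at the top of the test body so the known example is discarded and the search continues elsewhere; whether there is a built-in "keep going after a failure" mode is exactly what the question asks.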