
I use pytest 3.0 (under Python 3.4) for integration testing of a legacy application. I have written fixtures for log files and ZeroMQ, so that on any test-case failure the fixtures print the contents of all log files and all ZeroMQ communication.

I am using the pattern described in the pytest documentation: defining a pytest_runtest_makereport() hook in conftest.py. It works fine and prints the contents of the log files and the ZeroMQ communication whenever a test case fails.
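For reference, a minimal sketch of that conftest.py hook, close to the example in the pytest docs (my actual hook may differ slightly):

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # run all other hooks and get the resulting report object
    outcome = yield
    rep = outcome.get_result()
    # attach the report to the test item for each phase: "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)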

However, for some failures of the application I'm testing, the only symptom is an error message in one of the log files. So after each test case I need to scan the resulting log files for the word "error".
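The scan itself is nothing special; conceptually it is a small helper along these lines (logfile_has_error and the plain substring search are just illustrative, not part of any pytest API):

def logfile_has_error(path):
    """Return True if any line of the log file contains the word 'error'."""
    with open(path) as f:
        return any("error" in line.lower() for line in f)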

I understand from "Run code before and after each test in py.test?" that it is somewhat controversial to fail a test during teardown, but I do manage to do it using pytest.fail(). pytest then reports the outcome as ERROR.

The problem is that when I fail the test from the teardown, my additional reports (log files and ZeroMQ) are not shown. Any ideas on how to solve this? Is there a better way than failing the test during teardown? I would like to avoid calling some function to check the log files in each and every test case.

My fixture looks something like:

import pytest

@pytest.fixture()
def experimental_fix(request):
    print("**** SETUP ****")
    yield
    print("**** TEARDOWN ****")

    # rep_setup and rep_call are attached to the item by the
    # pytest_runtest_makereport hook in conftest.py
    setup_report = request.node.rep_setup
    if setup_report.outcome == "passed":
        # Actually, read logfile contents here
        call_report = request.node.rep_call
        call_report.sections.append(('Logfiles', 'LOGFILE CONTENTS GOES HERE'))

    if True: # Actually, do checking of logfile contents
        pytest.fail("Illegal word found in logfile. See printout.")

        # I have also tried these:
        # raise ValueError("Illegal word found in logfile. See printout.")
        # assert 0, "Illegal word found in logfile. See printout."

A dummy test case:

def test_fail_and_add_report(experimental_fix):
    print("AAA")
    # 1/0  # uncomment to make the test body itself fail
  • If you know you shouldn't do it, why are you hacking it in anyway? If you need to verify something as part of the test, **do it as part of the test**. *"I would like to avoid calling some function to check the logfiles in each and every test case"* - why? If that's part of the test, call the function. – jonrsharpe Sep 27 '16 at 12:35
  • Now you know why it's "controversial". – Stop harming Monica Sep 27 '16 at 12:57
  • This is not an answer to your question, but the proper way of factoring out repetitive assertions is using decorators. If you feel too lazy to decorate every test you can do it programmatically. It's explained in an answer to the same question you linked: http://stackoverflow.com/a/22636117/2142055 – Stop harming Monica Sep 27 '16 at 13:12

0 Answers