
I'd like to add metadata to individual tests in a TestCase that I've written to use Python's unittest framework. The metadata (a string, really) needs to be carried through the testing process and output to an XML file.

Other than remaining with the test, the data isn't going to be used by unittest or by my test code. (I've got a program that will run afterwards, open the XML file, and go looking for the metadata/string.)

I've previously used NUnit, which allows one to use a C# attribute to do this. Specifically, you can put this above a class:

[Property("SmartArrayAOD", -3)]

and then later find that in the XML output.

Is it possible to attach metadata to a test in Python's unittest?


1 Answer


Simple way for just dumping XML

If all you want to do is write stuff to an XML file after every unit test, just add a tearDown method to your test class (e.g. if you have class MyTest(unittest.TestCase):, give it a def tearDown(self):).

import unittest


class MyTest(unittest.TestCase):
    def tearDown(self):
        dump_xml_however_you_do()  # placeholder: write this test's metadata to XML here

    def test_whatever(self):
        pass
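
In case it helps, here is one possible shape for that placeholder, a minimal sketch using only the standard library's xml.etree.ElementTree. The file name, element names, and the SmartArrayAOD property are assumptions for illustration, not anything defined above:

import os
import xml.etree.ElementTree as ET

REPORT_PATH = 'test_metadata.xml'  # hypothetical output file


def dump_xml_however_you_do(test_id, prop_name, prop_value):
    """Append one <test> element carrying a metadata property to the report."""
    if os.path.exists(REPORT_PATH):
        tree = ET.parse(REPORT_PATH)
        root = tree.getroot()
    else:
        root = ET.Element('tests')
        tree = ET.ElementTree(root)
    test_el = ET.SubElement(root, 'test', name=test_id)
    ET.SubElement(test_el, 'property', name=prop_name, value=str(prop_value))
    tree.write(REPORT_PATH)

Inside tearDown you would then call something like dump_xml_however_you_do(self.id(), 'SmartArrayAOD', -3), and the program that runs afterwards can look up the <property> elements by name.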

General method

If you want a general way to collect and track metadata from all your tests and return it at the end, try creating an astropy table in your test class's __init__() and adding rows to it during tearDown(), then extracting references to the initialized instances of your test class from unittest, like this:

Step 1: set up a re-usable subclass of unittest.TestCase so we don't have to duplicate the table handling

(put all the example code in the same file or copy the imports)

"""
Demonstration of adding and retrieving meta data from python unittest tests
"""

import sys
import warnings
import unittest
import copy
import time
import astropy
import astropy.table
if sys.version_info < (3, 0):
    from StringIO import StringIO
else:
    from io import StringIO


class DemoTest(unittest.TestCase):
    """
    Demonstrates setup of an astropy table in __init__, adding data to the table in tearDown
    """

    def __init__(self, *args, **kwargs):
        super(DemoTest, self).__init__(*args, **kwargs)

        # Storing results in a list made it convenient to aggregate them later
        self.results_tables = [astropy.table.Table(
            names=('Name', 'Result', 'Time', 'Notes'),
            dtype=('S50', 'S30', 'f8', 'S50'),
        )]
        self.results_tables[0]['Time'].unit = 'ms'
        self.results_tables[0]['Time'].format = '0.3e'

        self.test_timing_t0 = 0
        self.test_timing_t1 = 0

    def setUp(self):
        self.test_timing_t0 = time.time()

    def tearDown(self):
        test_name = '.'.join(self.id().split('.')[-2:])
        self.test_timing_t1 = time.time()
        dt = self.test_timing_t1 - self.test_timing_t0

        # Check for errors/failures in order to get state & description.  https://stackoverflow.com/a/39606065/6605826
        if hasattr(self, '_outcome'):  # Python 3.4+
            result = self.defaultTestResult()  # these 2 methods have no side effects
            self._feedErrorsToResult(result, self._outcome.errors)
            problem = result.errors or result.failures
            state = not problem
            if result.errors:
                exc_note = result.errors[0][1].split('\n')[-2]
            elif result.failures:
                exc_note = result.failures[0][1].split('\n')[-2]
            else:
                exc_note = ''
        else:  # Python < 3.4
            # result = getattr(self, '_outcomeForDoCleanups', self._resultForDoCleanups)  # DOESN'T WORK RELIABLY
            # The sys.exc_info() approach below is probably only good for Python 2.7,
            # meaning Python 3.0 - 3.3 are effectively unsupported here.
            exc_type, exc_value, exc_traceback = sys.exc_info()
            state = exc_type is None
            exc_note = '' if exc_value is None else '{}: {}'.format(exc_type.__name__, exc_value)

        # Add a row to the results table
        self.results_tables[0].add_row()
        self.results_tables[0][-1]['Time'] = dt*1000  # Convert to ms
        self.results_tables[0][-1]['Result'] = 'pass' if state else 'FAIL'
        with warnings.catch_warnings():
            warnings.filterwarnings('ignore', category=astropy.table.StringTruncateWarning)
            self.results_tables[0][-1]['Name'] = test_name
            self.results_tables[0][-1]['Notes'] = exc_note

Step 2: set up a test manager that extracts metadata

def manage_tests(tests):
    """
    Function for running tests and extracting meta data
    :param tests: list of classes sub-classed from DemoTest

    :return: (TextTestResult, Table, string)
        result returned by unittest
        astropy table
        string: formatted version of the table

    """
    table_sorting_columns = ['Result', 'Time']

    # Build test suite
    suite_list = []
    for test in tests:
        suite_list.append(unittest.TestLoader().loadTestsFromTestCase(test))
    combo_suite = unittest.TestSuite(suite_list)

    # Run tests
    results = [unittest.TextTestRunner(verbosity=1, stream=StringIO(), failfast=False).run(combo_suite)]

    # Catch test classes
    suite_tests = []
    for suite in suite_list:
        suite_tests += suite._tests

    # Collect results tables
    results_tables = []
    for suite_test in suite_tests:
        if getattr(suite_test, 'results_tables', [None])[0] is not None:
            results_tables += copy.copy(suite_test.results_tables)

    # Process tables, if any
    if len(results_tables):
        a = []
        while (len(a) == 0) and len(results_tables):
            a = results_tables.pop(0)  # Skip empty tables, if any
        results_table = a
        for rt in results_tables:
            if len(rt):
                with warnings.catch_warnings():
                    warnings.filterwarnings('ignore', category=DeprecationWarning)
                    results_table = astropy.table.join(results_table, rt, join_type='outer')
        try:
            results_table = results_table.group_by(table_sorting_columns)
        except Exception:
            print('Error sorting test results table. Columns may not be in the preferred order.')
        column_names = list(results_table.columns.keys())
        alignments = ['<' if cn == 'Notes' else '>' for cn in column_names]
        if len(results_table):
            rtf = '\n'.join(results_table.pformat(align=alignments, max_width=-1))
            exp_res = sum([result.testsRun - len(result.skipped) for result in results])
            if len(results_table) != exp_res:
                print('ERROR forming results table. Expected {} results, but table length is {}.'.format(
                    exp_res, len(results_table),
                ))
        else:
            rtf = None

    else:
        results_table = rtf = None

    return results, results_table, rtf

Step 3: Example usage

class FunTest1(DemoTest):
    @staticmethod
    def test_pass_1():
        pass

    @staticmethod
    def test_fail_1():
        assert False, 'Meant to fail for demo 1'


class FunTest2(DemoTest):
    @staticmethod
    def test_pass_2():
        pass

    @staticmethod
    def test_fail_2():
        assert False, 'Meant to fail for demo 2'


res, tab, form = manage_tests([FunTest1, FunTest2])
print(form)
print('')
for r in res:
    print(r)
    for error in r.errors:
        print(error[0])
        print(error[1])

Sample results:

$ python unittest_metadata.py 
        Name         Result    Time                    Notes                  
                                ms                                            
-------------------- ------ --------- ----------------------------------------
FunTest2.test_fail_2   FAIL 5.412e-02 AssertionError: Meant to fail for demo 2
FunTest1.test_fail_1   FAIL 1.118e-01 AssertionError: Meant to fail for demo 1
FunTest2.test_pass_2   pass 6.199e-03                                         
FunTest1.test_pass_1   pass 6.914e-03                                         

<unittest.runner.TextTestResult run=4 errors=0 failures=2>

This should work with Python 2.7 or 3.7. You can add whatever columns you want to the table, and you can add parameters and other metadata to it in setUp, tearDown, or even during the tests themselves.
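
For example, here is a minimal sketch of recording a metadata string during a test so that it lands in the Notes column. It assumes the DemoTest class above; the FunTest3 class and the extra_note attribute are made up for illustration:

class FunTest3(DemoTest):
    def tearDown(self):
        super(FunTest3, self).tearDown()  # adds this test's row to the table
        # Copy any metadata the test stashed on the instance into the Notes column.
        # (This overwrites the exception note; combine the two strings if you need both.)
        self.results_tables[0][-1]['Notes'] = getattr(self, 'extra_note', '')

    def test_with_metadata(self):
        self.extra_note = 'SmartArrayAOD=-3'  # carried through to the results table

FunTest3 would then be passed to manage_tests() alongside the other test classes.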

Warnings:

This solution accesses a protected attribute _tests of unittest.suite.TestSuite, which can have unexpected results. This specific implementation works as expected for me in Python 2.7 and Python 3.7, but slight variations on how the suite is built and interrogated can easily lead to strange behavior. I couldn't figure out a different way to extract references to the instances of my classes that unittest uses, though.
