8

Short Question
Is it possible to re-attempt a unittest upon failure or error, either N times or based on a predefined condition (like a user's prompt)?

Background
To avoid retyping an entire page of system information, please see my SO questions on passing data to unittests and on automatic test discovery for more details on my physical setup.

Regarding the question at hand, I know I can do it by rewriting my test cases to loop until they get the required results (see the pseudo code below) and then assert based on the final result. However, I'd rather not go and rewrite hundreds of test cases.

I know there will be someone who points out that if a unittest fails, it should just fail and be done. I agree with this 100% if human error could be removed. This is a physical system that I am connected to, and there are many times when the leads from the digital multimeter are not connected well, so a test can fail because of a loose connection.

Pseudo Workaround

class Suite_VoltageRegulator(unittest.TestCase):
    def test_voltage_5v_regulator(self):
        keep_running = 'y'
        test_status = False
        results = None

        print '\n'
        # Display user test configuration
        msg = \
            '1) Connect Front DMM GND(black) to the TP_COM\n' +\
            '2) Connect Front DMM POS(red) to the TP-A\n' +\
            '3) Ensure the DMM terminal button indicates FRONT'

        continue_test = prompt.Prompt_Ok_Cancel('User Action Required!', msg)

        if not continue_test:
            self.fail('User Canceled Test!')

        while keep_running == 'y':
            error_detected = False
            try:
                # Run the test
                results = measure_voltage_from_system()

                # Analyze the results
                test_status = pf.value_within_range(results, REGULATOR_5V_LOW, REGULATOR_5V_HIGH)

            except Exception:
                error_detected = True

            # Retest failed cards
            if error_detected:
                keep_running = raw_input('Test ERROR: Retry Test? (y/n): ')
            elif not test_status:
                keep_running = raw_input('Test FAILED: Retry Test? (y/n): ')
            else:
                keep_running = 'n'

        # Inform the user of the test results
        self.assertTrue(test_status, 'FAIL: 5V Regulator (' + str(results) + ') Out of Range!')
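
What I would ideally like is something I can apply to the existing test methods without rewriting their bodies, for example a retry decorator roughly like the sketch below. This is only an illustration of the idea, not working code I have; the decorator name and the raw_input prompt are placeholders for whatever retry policy ends up being used:

import functools

def retry_on_failure(max_attempts=3):
    """Illustrative sketch only: re-run a test method up to max_attempts
    times, asking the user after each failure whether to retry."""
    def decorator(test_method):
        @functools.wraps(test_method)
        def wrapper(self, *args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return test_method(self, *args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    answer = raw_input('Attempt %d failed. Retry? (y/n): ' % attempt)
                    if answer.strip().lower() != 'y':
                        raise
        return wrapper
    return decorator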

EDIT 8/22/11 3:30 PM CST
I do know that I am violating the definition of a unit test in this use case. These issues / comments are also addressed in a few of my other SO questions. One of our design goals was to leverage an existing framework to avoid having to "reinvent the wheel". The fact that we chose Python's unittest was not based on its definition, but on its flexibility and robustness in executing and displaying a series of tests.

Going into this project, I knew there would be some things that would require workarounds because this module was not intended for this use. At this point in time, I still believe that these workarounds have been easier / cheaper than writing my own test runner.

EDIT 8/22/11 5:22 PM CST
I am not dead set on using unittest in this manner for future projects; however, I am fairly set on using an existing framework to avoid duplicating someone else's efforts. A comment below is an example of this: pycopia QA appears to be a good fit for this project. The only drawback for my current project is that I have already written hundreds of unittest test cases; rewriting them would be a very large undertaking (noting that it would also be a non-funded effort).

EDIT 8/24/11 11:00 AM CST
It may make sense for future projects to switch to a framework more tailored to this type of testing. However, I still have projects running with unittest, so a solution using only unittest (or nose + a 3rd-party add-on) is still needed.

Adam Lewis
  • I think we are a little bit violating the definition of a unit test here - a unit test should be independent and always give the same result when run. Since it is connected to a physical system, this is not the case. I think instead of trying to use a unit test framework, just write a normal Python script which stresses the physical interface, and forget trying to force it into the limitations of the unittest.TestCase framework. – Mikko Ohtamaa Aug 22 '11 at 20:11
  • Also I know problems like this exist in the performance testing in web world and they have specific tools, e.g. JMeter, which produce statistical results over several serial or concurrent test requests. However, I don't know if anything similar exists in the embedded world. – Mikko Ohtamaa Aug 22 '11 at 20:12
  • Ah hah. You chose to go against the current then :) (not talking about electricity here, I hope....) – Mikko Ohtamaa Aug 22 '11 at 21:32
  • @Mikko: Well the tests I am trying to re-run here are to prevent things like that from happening :) (talking about electricity here) – Adam Lewis Aug 22 '11 at 21:37
  • I think we are clearly talking about using python's unittest framework as a manual systemtesting setup. This is clearly NOT a unit test, but it is either systems or integration testing. – Arafangion Aug 22 '11 at 21:39
  • @Arafangion: Exactly. Is there something I could add to the question to help avoid confusion? – Adam Lewis Aug 22 '11 at 21:44

7 Answers

5

Four years after the original question - I hope someone still cares :) Here's my solution for doing this on top of unittest. It's kind of ugly and relies on the implementation of the TestCase base class, but it works.

class MyTest(unittest.TestCase):
    ###
    ### Insert test methods here
    ###

    # Wrapping each test method so that a retry would take place.  
    def run(self, result=None):
        self.origTestMethodName = self._testMethodName
        self._testMethodName = "_testRetryWrapper"
        super(MyTest, self).run(result)
        self._testMethodName = self.origTestMethodName

    def _testRetryWrapper(self):
        testMethod = getattr(self, self.origTestMethodName)
        retryAttemptsLeft = settings.testRetryCount

        while True:
            try:
                testMethod()
                break
            except:
                if retryAttemptsLeft == 0:
                    raise
                else:
                    retryAttemptsLeft = retryAttemptsLeft - 1
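
For reference, a minimal usage sketch (the `settings` object holding `testRetryCount` is whatever configuration you already have; here it is faked with a simple stand-in, and the flaky measurement is simulated):

import random
import unittest

class settings(object):          # stand-in for the real configuration object
    testRetryCount = 3

class MyFlakyTest(MyTest):
    def test_measurement(self):
        # Simulated flaky measurement; thanks to the run() override above it is
        # re-attempted up to settings.testRetryCount extra times before failing.
        self.assertGreater(random.uniform(4.0, 5.5), 4.5)

if __name__ == '__main__':
    unittest.main()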
Shlomi Király
  • Doesn't this break e.g. `@unittest.skip`, because the `__unittest_skip__` attribute on the real test function won't be found by `run`? – Carl Meyer Sep 01 '16 at 18:51
4

The Python unittest module is intended for writing Python unit tests. ;-) It's not so well suited to other kinds of testing. The nose package is also a unit test framework.

I have written several testing frameworks in Python that are designed to test systems. The systems can be distributed, and automated with various interfaces. Two are open-source.

The Pycopia project is a collection of Python modules that runs on Linux. It is provided as a collection of namespace sub-packages, one of which is the QA package, a testing framework.

A subset-fork of this is named powerdroid, and it is intended to control instrumentation for taking physical measurements (such as voltage, current, etc.) via RS-232, IEEE-488, etc. It provides an alternative Python interface to the linux-gpib project.

So you may start with these, rather than "reinvent the wheel", if you want. You may not have to throw away existing tests either: since the framework can invoke any subprocess, you can start your existing tests from it. This also runs on Linux.
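
As a rough illustration of the subprocess idea (plain Python, not Pycopia-specific code; the test module name is made up):

import subprocess
import sys

# Run an existing unittest module in a child process and use its exit status
# as the pass/fail verdict. 'tests.test_voltage_regulator' is a made-up name.
returncode = subprocess.call([sys.executable, '-m', 'unittest', 'tests.test_voltage_regulator'])
print('existing suite passed: %s' % (returncode == 0))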

Keith
3

I have improved Shlomi Király's answer slightly so that it doesn't interfere with the unittest framework and skipping test cases still works:

class MyTest(unittest.TestCase):

    # Enable retries if specified in the configuration file by attribute testRetryCount
    def run(self, result=None):
        self.origTestMethodName = self._testMethodName
        retryAttemptsLeft = configuration.testRetryCount

        failuresBefore = len(result.failures)  # how many tests were marked as failed before starting
        errorsBefore = len(result.errors)      # how many tests were marked as errored before starting

        super(MyTest, self).run(result)
        if failuresBefore < len(result.failures):  # the last test failed
            while True:
                if retryAttemptsLeft == 0:
                    self.logger.error("Test failed after " + str(configuration.testRetryCount + 1) + " attempts")
                    break
                else:
                    result.failures.pop(-1)  # remove the last failure result
                    self.logger.error("Test failed - retryAttemptsLeft: " + str(retryAttemptsLeft))
                    retryAttemptsLeft = retryAttemptsLeft - 1

                    super(MyTest, self).run(result)
                    if len(result.failures) == failuresBefore:  # the retry passed, stop retrying
                        break

        elif errorsBefore < len(result.errors):  # the last test raised an error
            while True:
                if retryAttemptsLeft == 0:
                    self.logger.error("Test error after " + str(configuration.testRetryCount + 1) + " attempts")
                    break
                else:
                    result.errors.pop(-1)  # remove the last error result
                    self.logger.error("Test error - retryAttemptsLeft: " + str(retryAttemptsLeft))
                    retryAttemptsLeft = retryAttemptsLeft - 1

                    super(MyTest, self).run(result)
                    if len(result.errors) == errorsBefore:  # the retry passed, stop retrying
                        break
1

Here is a bit dirty, yet very simple, solution. Use unittest to run the test along with the other tests and to detect a potential crash. But instead of using the built-in assertions, write your own checks with if and raise Exception. Then retry with tenacity. For example:

import unittest

from tenacity import retry


class MyTestCase(unittest.TestCase):
    @retry
    def test_with_retry(self):
        # a and b stand for whatever values you are comparing
        if not a == b:
            raise Exception("The test has failed")
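
Note that a bare @retry retries indefinitely. If you want a bounded number of attempts, tenacity's stop/wait helpers can be used; a sketch (a and b again stand for whatever is being compared):

from tenacity import retry, stop_after_attempt, wait_fixed


class MyBoundedTestCase(unittest.TestCase):
    # Give up after 3 attempts, waiting 1 second between them, and re-raise
    # the original exception instead of tenacity's RetryError.
    @retry(stop=stop_after_attempt(3), wait=wait_fixed(1), reraise=True)
    def test_with_bounded_retry(self):
        if not a == b:
            raise Exception("The test has failed")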
Nikolay Shindarov
1

I think that you should write your own specialised framework - you might as well model it on Python's unittest, but this is clearly not a unit test. You will need to make some changes, e.g. to allow each test to be individually skipped, retried, or re-verified - and then, likely, an option at the end to revisit and retest failed tests, with statistics showing how many tests passed the first time, how long they took, which test took the longest, how many retries they needed, and so on.

Arafangion
  • This is good advice. In fact, it has already been done by *moi*, [check it out](http://code.google.com/p/pycopia/). It's the QA subpackage. It's free, open-source. – Keith Aug 22 '11 at 21:47
  • Can you explain a way to base it off the unittest framework without having to rewrite a *LARGE* majority of it? Something that was not mentioned above is that by using the unittest module I can leverage 3rd-party modules such as nose. – Adam Lewis Aug 22 '11 at 21:49
  • @Keith: This is exactly the kind of thing I have been looking for. The fact that you said it has already been done drives home the point of me not wanting to roll out my own specialized framework. If you wouldn't mind, can you add this as an answer? – Adam Lewis Aug 22 '11 at 21:55
0

Turns out there is a very simple (yet private) wrapper for calling the test method, and it's quite easy to override yourself and easy to understand.

But if you don't mind adding a decorator to all test methods, I would rather go with a decorator as shown in the other answer, because it doesn't involve overriding private methods.

Better than retrying tests until they work would, of course, be a really good tearDown method that makes sure tests don't influence each other, proper tests that are deterministic, and code that doesn't suffer from a million race conditions.

class Test(unittest.TestCase):
    def _callTestMethod(self, method):
        attempts = 0
        max_attempts = 2
        while True:
            attempts += 1
            try:
                method()
                break
            except Exception as e:
                if attempts == max_attempts:
                    raise e

            # try again
            self.tearDown()
            self.setUp()

    def test_foo(self):
        # your test goes here; it will be repeated automatically if it fails
        pass
sezanzeb
0

It is useful to distinguish between two cases:

  1. The test is flaky. There is a way to write a robust test, but we did not do it (yet); we may not exactly understand why the test is flaky, and therefore we retry (the entire test).
  2. We test an action that needs to be awaited and there is no way to be reliably informed when it is completed; therefore we retry (a single check inside the test, see the sketch at the end of this answer).

For the first case, I am using

import functools
import logging


def flakey(issue: str, repeats: int = 3):
    """Decorator that marks a test as flakey (flaky).
    If applied, executes the test up to `repeats` times, marking it as failed only if it fails every time.
    Note that use of this decorator is generally discouraged---tests should pass reliably when their assertions are upheld.
    Example usage:
        @flakey(issue='#123')
        def test_feature_in_fragile_manner(self):
            self.assertTrue(...)
    """
    del issue  # unused

    def decorator(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            # Python 3 clears the exception upon leaving the `except` block
            # https://docs.python.org/3/reference/compound_stmts.html#the-try-statement
            preserved_exc = None
            for _ in range(repeats):
                try:
                    return f(*args, **kwargs)
                except AssertionError as exc:
                    preserved_exc = exc
                    logging.info("Test marked as flaky has failed: %s", exc)
            raise AssertionError("Flaky test has failed too many times") from preserved_exc

        return inner

    return decorator

The main features of this solution are:

  1. a comment that discourages programmers from using this (borrowed from https://bazel.build/reference/be/common-definitions#test.flaky)
  2. a mandatory parameter for the issue ID (the task to fix the flaky test)
  3. no unnecessary sleeps, exponential backoff, or jitter
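
For the second case (retrying a single check rather than the whole test), a small polling helper along these lines works; the helper name, the timeout values, and the simulated action below are only illustrative:

import time
import unittest


def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` (a zero-argument callable) until it returns a truthy
    value or `timeout` seconds elapse; return the last observed value."""
    deadline = time.monotonic() + timeout
    while True:
        value = condition()
        if value or time.monotonic() >= deadline:
            return value
        time.sleep(interval)


class AwaitedActionTest(unittest.TestCase):
    def test_action_eventually_completes(self):
        started = time.monotonic()
        # Stand-in for "the awaited action has completed": true after ~0.5 s.
        action_is_complete = lambda: time.monotonic() - started > 0.5
        # Retry only this single check, not the whole test.
        self.assertTrue(wait_until(action_is_complete, timeout=2.0))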
user7610