
I would like to test the features of an embedded device. To simplify, I can say it is a humanoid robot remotely controlled from a PC through a C/C++ API.

I am very interested in using nosetests because of its non-boilerplate approach. However, my case is a bit more complicated: the current test suite is a C# program and takes about 24 hours to complete. By switching to Python, I might save a lot of time when developing new tests. But before doing this, I am looking for some answers.

The first problem with the old test suite is that all tests execute in a predefined order, and if any error occurs, the whole run stops. I would like to build independent test suites that do not depend on each other's results. For example, the test of the robot's arm has no relation to the one for its legs. However, the walk test needs both to be successful.

At night, all test suites are executed. If one fails, the next is executed, and so on. The advantage is that on Monday morning, when you come back to work, you have more useful results than if the whole run had failed on Friday night, 10 minutes after you left.

So I am looking for a test framework that allows:

  • Splitting the tests into test suites.
  • Running each test suite even if a previous one failed.
  • Declaring dependencies between tests.

I looked at Proboscis, which allows dependency fixtures, but the project looks dead.

I am wondering how much work it would take to customize nose to get these features. Perhaps it is also worth trying another test framework. I don't know, and I need some clues...

So, in order to keep things as simple as possible, here's how I see my tests:

#!/usr/bin/python

def testArms():
    ...
    pass

def testLegs():
    ...
    pass

# `depend` is the decorator I wish existed: skip testWalk unless both
# testArms and testLegs passed.
@depend(testArms, testLegs)
def testWalk():
    ...
    pass

test_suite1 = [testLegs, testArms, testWalk]

...
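For illustration, a minimal sketch of what depend might look like if I rolled it myself (the module-level failure tracking is my own assumption, and every test, including the dependencies, would need to be wrapped so that failures get recorded):

import functools
import unittest

_failed = set()  # names of wrapped tests that have failed so far

def depend(*deps):
    """Skip the decorated test if any dependency failed; record its own failures."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            unmet = [d.__name__ for d in deps if d.__name__ in _failed]
            if unmet:
                raise unittest.SkipTest('unmet dependencies: %s' % ', '.join(unmet))
            try:
                return func(*args, **kwargs)
            except Exception:
                _failed.add(func.__name__)
                raise
        return wrapper
    return decorator

testArms and testLegs would then be decorated with a bare @depend() so that their failures are recorded before testWalk runs.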
nowox

3 Answers


It has been a long time since this question was asked.

Embedded systems present special characteristics for implementing acceptance test automation (one of the most important is that, most likely, the "Device Under Test" is not the same device as the one executing the test cases; hence some kind of interaction interface is required). This is not exactly the case when doing test automation of a web page or a PC application, or even when running unit tests of embedded software (which can also be executed outside the device). Based on this assumption, I think a framework developed for unit testing is not the best tool for building an embedded system test bench for acceptance tests.

At the moment, we are facing a similar situation, trying to choose a development environment to implement test automation for an embedded device. We are looking into:

  • Robot Framework, a generic acceptance test automation framework based on the keyword-driven testing approach (see the sketch after this list).

  • FitNesse (http://www.fitnesse.org)

  • Pycopia
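To illustrate the keyword-driven approach: a keyword library for Robot Framework can be plain Python, and a .robot file would import it with "Library    RobotLibrary.py". A minimal sketch (the robot-control calls are hypothetical placeholders for the device's C/C++ API):

# RobotLibrary.py - minimal Robot Framework keyword library (sketch).
class RobotLibrary:
    """Each public method becomes a keyword usable from .robot test files."""

    def move_arm(self, angle):
        angle = float(angle)  # keyword arguments may arrive as strings
        if not -90.0 <= angle <= 90.0:
            raise AssertionError('arm angle %s out of range' % angle)
        # here you would call into the device's C/C++ API, e.g. via ctypes

    def legs_should_be_ready(self):
        ready = True  # placeholder for a real status query on the device
        if not ready:
            raise AssertionError('legs are not ready')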

There are also other tools that don't use Python, for example the ones described in this thread (MxVDev).

Marcos

I think Robot Framework is the right tool for you. You can split your tests into test suites, and if one test fails, the next will still run.
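A minimal sketch of kicking off such a run from Python (assuming your suites live under a tests/ directory; robot.run is Robot Framework's documented programmatic entry point):

# run_suites.py - run every suite under tests/; a failing test does not
# stop the remaining tests or suites from executing.
from robot import run

run('tests', outputdir='results')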

kame

Again, it has been a long time since this was asked, but I figured I could contribute.

We're currently building a complete test solution aimed squarely at testing embedded devices for verification and validation purposes. Our flagship implementation is based on Google's OpenHTF: https://github.com/google/openhtf

Here's the hello world example:

import openhtf as htf
from openhtf.output.callbacks import json_factory
from openhtf.plugs import user_input

@htf.measures(htf.Measurement('hello_world_measurement'))
def hello_world(test):
  """A hello world test phase."""
  test.logger.info('Hello World!')
  test.measurements.hello_world_measurement = 'Hello Again!'


if __name__ == '__main__':
  test = htf.Test(hello_world)
  test.add_output_callbacks(
      json_factory.OutputToJSON('./{dut_id}.hello_world.json', indent=2))

  test.execute(test_start=user_input.prompt_for_test_start())

You can extend OpenHTF with different modules:

  • "Plugs", which are interfaces to external equipment or to the device under test, e.g. a COM port plug (see the sketch after this list).
  • "Callbacks", which are custom export interfaces.

OpenHTF comes with its own GUI and definitely gives a huge head start for developing either production test benches or design validation/verification testing, as is the case for this question.

I'd be glad to help anyone in need of guidance.