
Is there a way to create a custom unit test result in TestNG/JUnit (or any other Java testing framework)? I understand that unit tests can either pass or fail (or be ignored), but I would really like to have a third option.

The company I'm working with right now has adopted a testing style of cleverly comparing screenshots of their application, so a test can pass, fail, or "diff" when the screenshots do not match within a predetermined tolerance. In addition, they have their own in-house test "framework" and runners. This was all done long before I joined.

What I would like to do is migrate the test framework to one of the standard ones, but this process needs to be very gradual.

The approach I was thinking about was to create a special exception (e.g. DiffToleranceExceededException), fail the test, and then customize the test result in the report.
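This could be sketched roughly as follows. The class and method names here (DiffToleranceExceededException, ResultClassifier, classify) are illustrative, not part of any framework; the idea is a marker exception that a custom report generator can later recognize and map to a third status:

```java
// Sketch: a marker exception for the "diff" outcome, plus a helper that a
// custom report could use to map a captured failure to one of three statuses.
// All names here are hypothetical.
public class ResultClassifier {

    // Unchecked, so test code can throw it without declaring it.
    public static class DiffToleranceExceededException extends RuntimeException {
        public DiffToleranceExceededException(String message) {
            super(message);
        }
    }

    // Maps the throwable captured by the test runner to a report status:
    // no throwable -> PASS, the marker exception -> DIFF, anything else -> FAIL.
    public static String classify(Throwable failure) {
        if (failure == null) {
            return "PASS";
        }
        if (failure instanceof DiffToleranceExceededException) {
            return "DIFF";
        }
        return "FAIL";
    }

    public static void main(String[] args) {
        System.out.println(classify(null));
        System.out.println(classify(new DiffToleranceExceededException("tolerance exceeded")));
        System.out.println(classify(new RuntimeException("boom")));
    }
}
```

In TestNG, for instance, a listener could apply this classification in its failure callback by inspecting the result's throwable; the standard runners would still show the test as failed, which is why the reporting side also has to be customized.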

GKalnytskyi
  • You can do so, but you'll need to write your own reporting tools. None of the tools that support JUnit and TestNG (IDE integration, continuous integration tools etc.) can be easily updated to recognize the new test status. – yole Jan 18 '16 at 09:27
  • If it does match within the tolerance, it passes. If it does not match within the tolerance, it fails. Why do you need a third option? – Manu Jan 18 '16 at 09:30
  • @Manu, a test can also fail due to an exception. For Selenium, that might be a TimeOutException or an ElementNotFoundException, which fails the test in a different way. – GKalnytskyi Jan 18 '16 at 10:10

2 Answers


You could follow this approach to customize your test reports: add a new column to the test report and generate your own report (with screenshots, for example).
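As a minimal sketch of what such an extra column might look like, the snippet below renders one HTML report row carrying the custom status and a screenshot link. The ReportRow name and the column layout are assumptions for illustration, not part of TestNG or JUnit:

```java
// Hypothetical report row for a custom HTML report: test name, custom
// status (PASS / FAIL / DIFF), and a link to the captured screenshot.
public class ReportRow {

    public static String render(String testName, String status, String screenshotPath) {
        return "<tr>"
             + "<td>" + testName + "</td>"
             + "<td>" + status + "</td>"
             + "<td><a href=\"" + screenshotPath + "\">screenshot</a></td>"
             + "</tr>";
    }

    public static void main(String[] args) {
        System.out.println(render("loginPageLooksRight", "DIFF", "shots/login.png"));
    }
}
```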

Vokail

Maybe this is already what you meant by

The approach I was thinking about was to create a special exception (e.g. DiffToleranceExceededException), fail the test, and then customize the test result in the report.

but just in case: you can certainly pass a predefined message string to the assertions. In your case, if the screenshots are identical, the tests pass. If they are too different, the tests just fail. If they differ but are within tolerance, you make them fail with a message like "DIFFERENT BUT WITHIN-TOLERANCE" or whatever; these failures are then easily distinguishable. Alternatively, you could invert the logic: add a message to the failures that are not within the tolerance, to make those visually prominent.
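A small self-contained sketch of this idea, using plain AssertionError rather than any particular framework's assertion methods; the method name, the pixel-counting comparison, and the tolerance-as-fraction-of-differing-pixels convention are all assumptions for illustration:

```java
import java.awt.image.BufferedImage;

// Sketch: compare two screenshots pixel by pixel and fail with a
// distinguishable message when the difference is within tolerance.
// Assumes both images have the same dimensions.
public class ScreenshotAssert {

    public static void assertScreenshotsMatch(BufferedImage expected,
                                              BufferedImage actual,
                                              double tolerance) {
        int total = expected.getWidth() * expected.getHeight();
        int differing = 0;
        for (int x = 0; x < expected.getWidth(); x++) {
            for (int y = 0; y < expected.getHeight(); y++) {
                if (expected.getRGB(x, y) != actual.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        double ratio = (double) differing / total;
        if (ratio == 0.0) {
            return; // identical: pass
        }
        if (ratio <= tolerance) {
            // Distinguishable failure: a greppable prefix in the report.
            throw new AssertionError(
                "DIFFERENT BUT WITHIN-TOLERANCE: " + ratio + " of pixels differ");
        }
        throw new AssertionError(
            "Screenshots differ beyond tolerance: " + ratio + " of pixels differ");
    }
}
```

With this, a report (or a human scanning CI logs) can separate the "within tolerance" failures from real ones purely by the message prefix, without any changes to the test runner.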

Dirk Herrmann