4

We run automated NUnit tests on our C# projects using Hudson/Jenkins on several virtual machines that run mostly unattended on a server. The tests involve starting several processes that exchange data; one of them is NUnit itself, the others are created by the unit tests.

Sometimes, one of the developers checks in something that triggers an assertion (Debug.Assert()). This then pops up a message box asking the user what to do. Usually these assertions fire in one of the "external" processes created by the unit tests. The message box blocks that process while the other processes give up because they can't communicate with it. And, due to the nature of the system, all subsequent tests fail as well, as long as that one process sits blocked, waiting for someone to click away the message box.

I've been told that you can change the settings for a .NET program so that an assertion won't pop up a message box. Ideally, the process would just write something to stdout or stderr, for Jenkins to record.

So what do I have to do to turn off those interactive assertion dialogs?

sbi

2 Answers

6

You need to implement a System.Diagnostics.TraceListener that does not pop up a dialog on Fail (i.e. one that reports the error to the unit test framework instead) and register it in place of the default listener using Listeners.Clear/Add:

public class MyListenerThatDoesNotShowDialogOnFail : System.Diagnostics.TraceListener
{
    public override void Fail(string message, string detailMessage)
    {
        // do something unit-test friendly here instead of showing a dialog,
        // e.g. report the failure via Assert.Fail
    }

    // TraceListener is abstract; Write and WriteLine must also be overridden
    public override void Write(string message) { }
    public override void WriteLine(string message) { }
}

System.Diagnostics.Debug.Listeners.Clear();
System.Diagnostics.Debug.Listeners.Add(new MyListenerThatDoesNotShowDialogOnFail());

This code should go in your unit test setup. That way a regular debug build will still show assert dialogs, but while the unit tests run, an assertion failure does something sensible for the test (like calling Assert.Fail). Note that you should consider restoring the original listeners in the test's teardown method.
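
A minimal sketch of that setup/teardown wiring, assuming NUnit on the .NET Framework (where Debug.Listeners is available); the fixture name is made up, and the listener class is the one defined above:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class MyTests
{
    private TraceListener[] _originalListeners;

    [SetUp]
    public void ReplaceAssertDialog()
    {
        // remember the current listeners so they can be restored later
        _originalListeners = new TraceListener[Debug.Listeners.Count];
        Debug.Listeners.CopyTo(_originalListeners, 0);

        Debug.Listeners.Clear();
        Debug.Listeners.Add(new MyListenerThatDoesNotShowDialogOnFail());
    }

    [TearDown]
    public void RestoreListeners()
    {
        Debug.Listeners.Clear();
        Debug.Listeners.AddRange(_originalListeners);
    }
}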

Alexei Levenkov
-2

Do not test the Debug version of the library. You want to know what fails when it runs on the customer's machine, and that will be the Release version. This automatically solves your problem with asserts.

Hans Passant
  • We test both. Testing the Debug version is done for getting extensive diagnostics if something blows up. Testing the Release version is done for testing under real conditions. – sbi Mar 31 '11 at 18:18
  • I suppose it is pointless for me to point out that it is pointless. These 'extensive diagnostics' belong in the unit test. – Hans Passant Mar 31 '11 at 18:29
  • Erm, I am talking about the unit tests. What are you talking about? – sbi Mar 31 '11 at 18:35
  • Erm, you do different tests in the Debug vs Release version of the unit test? Why would you do that? – Hans Passant Mar 31 '11 at 18:41
  • No, we do exactly the same tests. Only in Debug the diagnostics are much more comprehensive. (This is a distributed, concurrent system. Usually, you can't debug this, you have to read log files.) And, assertions are triggered only in Debug mode. That's fine (that's what they are there for), but they shouldn't wreck the rest of the tests. – sbi Mar 31 '11 at 18:45
  • 1
    Well, you have an unusual approach. But you're stuck with Assert being active as long as DEBUG is defined. The only practical option you have is to remove the default trace listener so there's nobody to complain. The docs for DefaultTraceListener show you how to do it with a .config file. You can also do it in code (see the sketch after these comments). – Hans Passant Mar 31 '11 at 18:56
  • Are you deliberately trying to misread me? Why can't you _first read my question_ (which speaks about unit tests), and then answer? Why aren't you getting that we _want_ these assertions to trigger? That we are fine with those tests blowing up? That our only problem is that it makes all other tests fail, too? What is so unusual about running unit test for several configurations? Why haven't you answered to _any_ of my arguments towards your critique, but instead keep coming up with other critique, again without making the effort to first understand the problem, and again failing the mark? – sbi Mar 31 '11 at 21:16
  • If you don't take the time to carefully read the question, ask your questions first in case something's unclear, and only answer after you understood the problem and know the answer, why are you answering at all? It's not as if you needed the rep. Or is _this_ the way to get so much rep? Posting a cocky "you're doing it all wrong" answer that's bound to get a few upvotes and is much easier than actually trying to deal with the problem? – sbi Mar 31 '11 at 21:22
  • Hmm, just trying to be helpful. A clumsy attempt to help doesn't quite equate to telling you that your hair looks funny, no need to take offense. Clearly I have no clue what you really need. Good luck with it. – Hans Passant Mar 31 '11 at 21:24
  • @Hans: What I have now is your remark that our approach is "unusual", but no idea why, and I can't help noticing that you again avoided answering my question regarding this. – sbi Mar 31 '11 at 21:42
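
For reference, the in-code route Hans mentions works by disabling the assertion UI on the DefaultTraceListener rather than removing it, so assert messages still reach the trace output; the .config equivalent is setting assertuienabled="false" on the <assert> element under <system.diagnostics>. A minimal sketch for the .NET Framework (the class and method names here are made up for illustration):

using System.Diagnostics;

public static class AssertDialogSwitch
{
    public static void DisableAssertUi()
    {
        // Turn off only the message box; assert messages still go to the trace output.
        foreach (TraceListener listener in Debug.Listeners)
        {
            DefaultTraceListener defaultListener = listener as DefaultTraceListener;
            if (defaultListener != null)
                defaultListener.AssertUiEnabled = false;
        }
    }
}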