4

This is the kind of question that treads the gray area between StackOverflow and SuperUser. As I suspect the answer is likely to involve code-related solutions, such as creative use of Stopwatch, I'll ask it here.

I am using Visual Studio 2013 to perform unit testing against controllers that depend on an entity framework data model. The Test Explorer nicely presents my test results as well as their elapsed time in a short list.

I discovered, though, that my Controller unit tests were taking considerably longer than I expected. I began to suspect this was due to the initialization code that I use to create a mocked (using Moq, but that doesn't matter) entity model.

Sure enough, it was a small matter to show that initialization routines are included in the elapsed time.

using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InitializeTest
{
    [TestInitialize]
    public void Initialize()
    {
        Thread.Sleep(10000);
    }

    [TestMethod]
    public void TestInitializeRuntime()
    {
        Assert.Inconclusive();
    }
}

This produced the following output in the test explorer:

Screenshot of Test Explorer

This renders the elapsed time of tests backed by my mocked entity model fairly useless, as initialization code generally consumes more than 95% of a test's elapsed time. Every method looks slow, when in fact it is not.

Is there either an alternative configuration or some creative use of code (such as Stopwatch, as mentioned earlier) that will allow me to report the elapsed time of the test methods only, excluding time spent initializing or cleaning up the tests?

kbrimington
  • 25,142
  • 5
  • 62
  • 74
  • I'm not answering the question but it may be worth you looking at ANTS Performance Profiler from RedGate: http://documentation.red-gate.com/display/APP9/Profiling+tests+in+MSTest – Derek Tomes Feb 05 '15 at 01:00
  • possible duplicate of [How can I precisely time a test run in visual studio 2010](http://stackoverflow.com/questions/7815534/how-can-i-precisely-time-a-test-run-in-visual-studio-2010) – David Feb 05 '15 at 01:10
  • @Aravol, I disagree. That question deals explicitly with an older edition of Visual Studio. This question deals exclusively with the current major version. Both the IDE and the testing framework have evolved since then, so the answers may be out of date and incorrect. – kbrimington Feb 05 '15 at 01:12

4 Answers

3

I discovered today a way to handle expensive initialization in MSTest without causing the tests to report as slow. I'm posting this answer for consideration without accepting it, because it does carry a modest code smell.

MSTest creates a new instance of the test class each time it runs a test, so code in an instance constructor runs once per test. This behaves like a [TestInitialize] method with one exception: MSTest starts timing the unit test after creating the instance of the test class and before executing the [TestInitialize] routine.

As a result of this MSTest-specific behavior, initialization code that should be excluded from the automatically generated timing statistics can be placed in the constructor.

To demonstrate what I mean, consider the following test and generated output.

Test:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConstructorTest
{
    public ConstructorTest()
    {
        System.Threading.Thread.Sleep(10000);
    }

    [TestMethod]
    public void Index()
    {
    }

    [TestMethod]
    public void About()
    {
    }
}

Output:

Screenshot of results

My Thoughts:

The code above certainly produces the effect I was looking for; however, while it appears safe to use either a constructor or a [TestInitialize] method to handle initialization, I must assume that the latter exists in this framework for a good reason.

A case might be made that reports that include initialization time are useful, such as when estimating how much real time a large suite of tests should be expected to consume.

Rich Turner's point that time-sensitive operations deserve stopwatches with assertions is also worth recognizing (and has my vote). On the other hand, I see the automatically generated timing reports in Visual Studio as a useful tool for identifying tests that are getting out of hand without having to author timing boilerplate in every test.

In all, I am pleased to have found a solution and appreciate the alternatives discussed here as well.

Cheers!

kbrimington
  • 25,142
  • 5
  • 62
  • 74
  • Hi, kbrimington. I just asked a very similar question. Did you manage to find any other solution for this problem? For some reason, your solution doesn't work for me. In my case the time spent in ConstructorTest (following your code) is shown in the results. Do you have any idea why? – zdebyman Apr 06 '16 at 22:31
  • @zdebyman, thank you for asking. This was the only solution I found at the time. I wonder why you have different behavior. At the time that I wrote this, I was using VS 2012 and VS2013. I just ran it in VS 2015 to see if anything had changed, and it works as described in the post. If you will please link your similar question, I'd be happy to take a look at it. – kbrimington Apr 07 '16 at 22:55
  • my question is this one: http://stackoverflow.com/questions/36463238. It is weird that I can't get the same behavior in either VS (13/15). Basically I have this code: [TestClass] public class ConstructorTest { public ConstructorTest() { System.Threading.Thread.Sleep(10000); } [TestMethod] public void Index() { } [TestMethod] public void About() { } } Can you see anything wrong? – zdebyman Apr 08 '16 at 18:15
2

Your tests should aim to answer questions, such as:

1. Does my code behave as expected?
2. Does my code perform as expected?

However, rather than relying on the testing framework's inner workings to time your code (a task to which it is particularly unsuited), consider instead writing tests that test the performance of specific code and/or routines.

You could, for example, write a test method which starts a stopwatch, performs some work, stops the stopwatch and measures how long the operation took. You should then be able to assert that the test didn't exceed an expected maximum duration and, if it did, you'd see it as a failed test.

This way you're not measuring the unpredictable performance of your testing infrastructure, you're actually testing the performance of your code.
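
As a rough sketch of that idea (the ProductsController, its Search method, and the 200 ms budget below are placeholders for whatever operation you actually care about), such a test might look like this:

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SearchPerformanceTests
{
    [TestMethod]
    public void Search_CompletesWithinBudget()
    {
        // The 200 ms budget is illustrative; pick a limit that matches your expectations.
        const long budgetMilliseconds = 200;

        // ProductsController and Search are stand-ins for your own code under test.
        var controller = new ProductsController(/* mocked dependencies */);

        // Time only the operation under test, not the setup above.
        var stopwatch = Stopwatch.StartNew();
        controller.Search("widgets");
        stopwatch.Stop();

        // Fail the test if the operation blew its time budget.
        Assert.IsTrue(
            stopwatch.ElapsedMilliseconds <= budgetMilliseconds,
            "Search took " + stopwatch.ElapsedMilliseconds + " ms; expected at most " + budgetMilliseconds + " ms.");
    }
}

Because the stopwatch starts after the arrange step, the mock setup cost never contaminates the measurement, regardless of how the test runner reports overall elapsed time.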

Also, as Aravol suggested, you could front-load the cost of your test setup by populating your Moq mocks in a static constructor, since static constructors run before any instances are created or any instance methods are executed.

Rich Turner
  • 10,800
  • 1
  • 51
  • 68
0

The problem you run into is that the unit tests don't let you alter the reported time or emit extra data - they just execute and finish.

One way you can do this is to bend unit-testing standards and use a static reference and a static constructor to prepare your backing data. While not technically guaranteed, VS 2013 does execute all unit tests in the same AppDomain (though via separate instances of the given TestClass).
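
A rough sketch of that idea follows; all of the type names (MyDbContext, TestDataBuilder, ProductsController) are made-up placeholders for your own backing data and code under test:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ProductsControllerTests
{
    // Built once by the static constructor and shared by every test in this class.
    // MyDbContext and TestDataBuilder are hypothetical names used for illustration.
    private static readonly MyDbContext SharedContext;

    static ProductsControllerTests()
    {
        // Expensive, one-time setup (e.g. wiring up a mocked EF model with Moq)
        // runs before any test instance is created or timed.
        SharedContext = TestDataBuilder.CreateMockedContext();
    }

    [TestMethod]
    public void Index_ReturnsResult()
    {
        var controller = new ProductsController(SharedContext);
        Assert.IsNotNull(controller.Index());
    }
}

The trade-off, as noted below, is that the shared static state weakens the isolation between tests, so it is only appropriate for data the tests treat as read-only.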

Rich Turner
  • 10,800
  • 1
  • 51
  • 68
David
  • 10,458
  • 1
  • 28
  • 40
0

I don't believe that there are many instances in which a lengthy test init is actually required. It would be interesting to know what kind of operations were being included in the poster's example.

A bit of creativity is required to separate a TestInit's functions into ClassInit (someone earlier suggested use of a constructor... kind of the same thing, but errors in that block of code will report quite differently). For example, if every test needs a List<> of strings that's read from a file, you split it this way (see the sketch after the list):

1) ClassInit - read the file and capture the strings into an array (the slow part)
2) TestInit - copy the array's elements into a List<> accessible by each test (the fast part)
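
A minimal sketch of that split, assuming the strings come from a file named TestData.txt (the file name and the test body are purely illustrative):

using System.Collections.Generic;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringListTests
{
    // Loaded once per class: the slow part.
    private static string[] _cachedLines;

    // Rebuilt before every test from the cached array: the fast part.
    private List<string> _lines;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // TestData.txt is a placeholder for whatever expensive resource you load.
        _cachedLines = File.ReadAllLines("TestData.txt");
    }

    [TestInitialize]
    public void TestInit()
    {
        // Each test gets its own fresh copy, preserving isolation between tests.
        _lines = new List<string>(_cachedLines);
    }

    [TestMethod]
    public void ListContainsData()
    {
        Assert.IsTrue(_lines.Count > 0);
    }
}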

I'm against using statics to solve a test performance problem; it ruins the tests' isolation from one another.

I'm also against tests using things like Stopwatch to assert on their own performance... running tests generates a report, so the watchers of that report should identify tests that run too long. Also, if we want automated tests to exercise the performance of something, that's not a unit test, that's a performance test, and it can (should?) be something entirely different.

Craig Brunetti
  • 555
  • 5
  • 6