
Using MSTest I want to run a test like this ...

var range = Enumerable.Range(0, 9);
foreach(var i in range)
{
   Test(i);
}

... one theory I had was to create a new Test attribute like this ...

[TestClass]
public class CubeTests
{

    [TestMethod]
    [TestRange(0, 9)]
    public void Test(int i)
    {
        // Test stuff
    }
}

...

The key here is that I have some quite memory intensive code that I would like MSTest to clean up between tests for me.

For something so simple I really don't want to be relying on files and using Datasource and Deployment items.

Can this be done, if so, is anyone prepared to offer up an idea of how?

War
  • It's not ... I only want a single param, which would be i in the following loop declaration: for(int i = 0; i < max; i++) { ... }. Passing a single parameter is half of the question; calling the method many times based on a range is the other half – War Mar 27 '16 at 20:01

3 Answers


Implement the test in a separate method and call that method from the Test method.

I think you can do this in NUnit, but I am pretty sure you can't do it in MSTest.
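For comparison, NUnit 3 can express this directly with its Range parameter attribute, and its SetUp/TearDown methods run around each generated case. A minimal sketch (NUnit, not MSTest):

```csharp
using NUnit.Framework;

[TestFixture]
public class CubeTests
{
    [SetUp]
    public void Setup()
    {
        // Runs before each generated case, so memory-heavy
        // state is created fresh every time.
    }

    [TearDown]
    public void Cleanup()
    {
        // Runs after each generated case.
    }

    // NUnit expands this into one test case per value, 0 through 9 inclusive,
    // and reports each case's pass/fail result individually.
    [Test]
    public void Test([Range(0, 9)] int i)
    {
        // Test stuff
        Assert.That(i, Is.InRange(0, 9));
    }
}
```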

If you want to do cleanup, you can call the GC after every call, or create a TestCleanUpImpl method (in the snippet I call GC.Collect() to show how to force garbage collection).

Would suggest something like the following:

public void TestSetup()
{
    //Setup tests
}    

public void TestCleanUpImpl()
{
   //unassign variables
   //dispose disposable object
   GC.Collect();
}

public void TestImpl(int i) 
{
    // Test stuff
    // Do assert statements here 
}

[TestMethod]
public void Test()
{
    int fromNum = 0;
    int untilNum = 9;

    for (int i = fromNum; i <= untilNum; i++)
    {
        TestSetup();
        TestImpl(i);
        TestCleanUpImpl();
    }
}

If you have complicated setup and cleanup, you could implement a class that handles creating and disposing: do the setup in the constructor and the cleanup in the Dispose method.

I wouldn't use this as my first choice; I prefer to keep my tests as simple as possible. Even if my tests do violate DRY, it makes them much easier to follow, which means less debugging — a good trade-off in my opinion.

public class TestImplObj : IDisposable
{
    public TestImplObj()
    {
         //Setup test
    }

    public void TestImpl(int i)
    {
        //Do the actual test
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Do the clean up here
        }
    }
}
konkked
  • This would break after the first test, as any setup steps would need to be repeated before each test; this also suffers from the problem I talked about in my comment to Pedro's answer. – War Mar 27 '16 at 09:43
  • Added a cleanup method; if it is something more complicated, please let me know what type of resources you are dealing with and I could possibly provide a solution – konkked Mar 27 '16 at 18:15
  • My scenario is a little too complex to explain here. This may work but isn't quite what I wanted; for example, consider the test output: this would show that 1 test either passed or failed, but I need to know which scenario passed or failed out of the range of scenarios I have. This is essentially like saying "don't write lots of unit tests, just write 1" ... for obvious reasons you can see my hesitation. That said, for others this may well work. I probably need to ask a more targeted question, but it would likely get closed as "too specific" or something – War Mar 27 '16 at 20:14

You don't have to resort to built-in test runner magic. Simply add your range as a property of your test class:

private static IEnumerable<int> TestRange
{
    get
    {
        int i = 0;
        while(i < 10)
            yield return i++;
    }
}

Now, in your test method, you can do the loop as usual, using your uniquely defined test range:

[TestMethod]
public void DoStuff_RangeIsValid_NoExceptions()
{
    // Act
    foreach (var i in TestRange)
    {
        // do the unit test here
    }
}
Pedro G. Dias
  • This doesn't cause the test framework to run cleanup and setup from scratch between tests, though. I could do these things manually, I suppose, but that sort of defeats the purpose of having the framework there in the first place. If this is truly the best solution to the problem, I'll just build a console app and have that spit out a log. – War Mar 27 '16 at 09:42
  • I wouldn't overcomplicate things. You can skip the yield return if you want a clean IEnumerable between tests. Often, in larger projects, I will create a set of approved test values in a separate static class and have a one-stop shop for maintaining what is valid test data. It is cleaner than trying to AOP your way in. – Pedro G. Dias Mar 27 '16 at 16:21
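The separate static class of approved test values mentioned in the comment above could be sketched like this (the class and member names here are hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical one-stop shop for approved test data,
// shared across test classes in the project.
public static class ApprovedTestData
{
    // 0 through 9 inclusive; equivalent to the TestRange property above.
    public static IEnumerable<int> CubeRange => Enumerable.Range(0, 10);
}
```

Any test class can then iterate ApprovedTestData.CubeRange, so the valid range is maintained in one place.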

Maybe this is what you're looking for. Some years ago, Microsoft made available an extension for Visual Studio called Pex.

Pex generates unit tests from a single parameterized test: it finds interesting input-output values for your methods, which you can save as a small test suite with high code coverage.

You can use assumptions and preconditions on the parameters of your test, which give you better control over the test generation.

Pex is no longer available (it was a research project), but IntelliTest is now available instead, and it still uses the same static analysis engine.

IntelliTest generates a parameterized test that is modifiable, and general/global assertions can be added there. It also generates the minimum number of inputs that maximize code coverage and stores the inputs as individual unit tests, each one calling the parameterized test with a crafted input.

[PexMethod]
public void Test(int i)
{
    PexAssume.IsTrue(i >= 0);
    PexAssume.IsTrue(i < 10);
    // Test stuff
}
ale
  • This is exactly what I need, but "no longer available" is worrying. I wondered where Pex went. The only thing that worries me is that I have rather a lot of test cases, so IntelliTest will need to run as a separate test run rather than on the fly ... Well spotted!! Time for some digging; if I didn't have 134 million scenarios this would be bang on! – War Mar 28 '16 at 12:15