
My application has a business layer that talks to a database heavily. This layer is implemented as a set of ADO.NET calls wrapped with Dapper. To initialise the database schema, I use Entity Framework 6 Code First, which has the nice side effect of letting me use my C# POCOs as both DDL and DAOs.

The raw SQL I use through Dapper is a mixture of plain old SQL, some user-defined functions, and SQL Server-specific features, e.g. built-in functions and full-text indexes.

In my Jenkins CI build, I have an integration test suite that tests this layer using NUnit. To keep tests isolated from each other, the database initialisation has to happen in the [SetUp] method.

I went through several options to implement the integration test:

  1. Use an in-memory database. Pros: test setup is fast, and running the test logic is fast. Cons: this is a non-option, because the SQL commands I use are specific to SQL Server.
  2. Use SQLite or SQL Server Compact. Pros: the syntax is similar to SQL Server's, and setting up and tearing down is just a matter of deleting files. Cons: also a non-option the moment I start using user-defined functions and full-text indexes.
  3. Use a local SQL Express server just to serve the tests. Pros: behaviour is almost guaranteed to match the production setup, including built-in functions and full-text indexes. Cons: setup is slower and more complicated.

I went with option 3, with the following approach:

  1. In the [SetUpFixture]'s [SetUp], initialise a seed database. Once it is initialised, back this database up to a file. This happens only once for the whole namespace of tests.
  2. In each [TestFixture]'s [SetUp], restore the backup. Since this runs before every test, each test is isolated from the others, as sketched below.
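Concretely, the two steps boil down to something like this (a rough sketch using NUnit 2.x attributes, where a [SetUpFixture]'s [SetUp] runs once per namespace; the database name SeedDb, the backup path, and the connection string are placeholders, not my actual values):

using System.Data.SqlClient;
using NUnit.Framework;

[SetUpFixture]
public class DatabaseSeed
{
    // Placeholder names and paths.
    public const string BackupPath = @"C:\Temp\SeedDb.bak";
    public const string MasterConnString =
        @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";

    [SetUp]   // runs once for the whole namespace on NUnit 2.x
    public void CreateSeedBackup()
    {
        // Step 1: initialise the schema (e.g. via EF Code First),
        // then back the seed database up to a file once.
        using (var conn = new SqlConnection(MasterConnString))
        {
            conn.Open();
            Exec(conn, "BACKUP DATABASE [SeedDb] TO DISK = '" + BackupPath + "' WITH INIT");
        }
    }

    public static void RestoreSeed()
    {
        // Step 2: drop other connections and restore the seed backup.
        using (var conn = new SqlConnection(MasterConnString))
        {
            conn.Open();
            Exec(conn, "ALTER DATABASE [SeedDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
            Exec(conn, "RESTORE DATABASE [SeedDb] FROM DISK = '" + BackupPath + "' WITH REPLACE");
            Exec(conn, "ALTER DATABASE [SeedDb] SET MULTI_USER");
        }
    }

    private static void Exec(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}

[TestFixture]
public class MyIntegrationTests
{
    [SetUp]
    public void SetUp()
    {
        // Runs before every test, so each test starts from the seed state.
        DatabaseSeed.RestoreSeed();
    }

    // ... tests ...
}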

The disadvantage is that running the test suite has become very slow. Step 2 above takes 2-5 seconds for every test. On top of that, each test body takes some time to execute, which is quite a bit slower than a typical unit test because it hits a real database.

As it stands, a moderately sized test suite of 143 tests takes 13 minutes on my build machine, an Azure VM specced as A3 (4 cores, 7 GB RAM) running Windows Server 2008 R2.

Is there any way I can speed up this process other than throwing more hardware at the machine?

1 Answer


You could open a transaction scope in the setup method and dispose of it in the teardown method; since Complete() is never called, everything each test did against the database is rolled back:

using System.Data.SqlClient;
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class MyTests
{
    private string connString = "...";
    private TransactionScope ts = null;

    [SetUp]
    public void SetUp()
    {
        // Open an ambient transaction before each test runs.
        ts = new TransactionScope(TransactionScopeOption.Required);
    }

    [TearDown]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls back everything
        // the test wrote to the database.
        ts.Dispose();
    }

    [Test]
    public void MyTest()
    {
        // do my tests etc.
        using (SqlConnection conn = new SqlConnection(connString))
        {
            using (SqlCommand cmd = new SqlCommand("exec MyProcedure"))
            {
                cmd.Connection = conn;
                conn.Open();
                int retval = cmd.ExecuteNonQuery();
                Assert.Greater(retval, 0);
            }
        }
    }
}

EDIT: This will work even if the unit under test has its own transaction scope.

See MSDN

Voting inside a nested scope

Although a nested scope can join the ambient transaction of the root scope, calling Complete in the nested scope has no effect on the root scope. Only if all the scopes from the root scope down to the last nested scope vote to commit the transaction will the transaction be committed.
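To make that behaviour concrete, here is a minimal sketch of the scope mechanics (no real database work): the inner scope votes to commit, but because the outer scope is disposed without Complete(), the whole ambient transaction rolls back.

using System.Transactions;

using (var outer = new TransactionScope(TransactionScopeOption.Required))
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // The unit under test does its database writes here.
        inner.Complete(); // the nested scope only *votes* to commit...
    }
    // ...but outer.Complete() is never called, so disposing the outer
    // scope aborts the root transaction and the inner writes roll back.
}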

  • What happens if the unit under test has its own transaction scope and does a commit to the DB? Would the outer transaction scope roll back what was committed by the inner scope? – Aditya Santoso Aug 26 '15 at 02:58
  • @AdityaSantoso yes a nested transaction will be rolled back. See update to answer. – Steve Ford Aug 26 '15 at 10:15
  • It seems like I cannot read uncommitted data using this approach. Do I need to change anything in the connection or transaction scope or in the SQL command itself? – Aditya Santoso Aug 28 '15 at 08:18
  • As long as you are using the same connection and haven't closed / committed / rolled back you should be able to read any uncommitted data. – Steve Ford Aug 28 '15 at 09:33
  • I think I know why my reads could not see the uncommitted writes. TransactionScope doesn't play well with async-await, and my whole repository layer and the test method itself are full of async-await. See http://stackoverflow.com/questions/13543254/get-transactionscope-to-work-with-async-await?lq=1 Once I made my test method synchronous, it had the intended effect. I will also see whether I can upgrade my application to .NET 4.5.1 to make use of `TransactionScopeAsyncFlowOption` (see the sketch after these comments), because turning hundreds of tests that are implemented as async into non-async is going to be a bit tiring – Aditya Santoso Aug 28 '15 at 11:51
  • Another thing is that since my application runs against `Azure SQL`, I applied `SqlAzureExecutionStrategy` to my `DbContext`. Thing is, it cannot work together with `TransactionScope`. To make it work, I had to suspend the execution strategy as described in this blog: http://romiller.com/2013/08/19/ef6-suspendable-execution-strategy/ – Aditya Santoso Aug 28 '15 at 12:14
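For reference, the .NET 4.5.1 option mentioned in the comments above is TransactionScopeAsyncFlowOption; a minimal sketch of the changed SetUp, letting the ambient transaction flow across await points:

// .NET 4.5.1+: allow the ambient transaction to flow across awaits.
ts = new TransactionScope(
    TransactionScopeOption.Required,
    TransactionScopeAsyncFlowOption.Enabled);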