
I've got a C# library that I've written a lot of unit tests for. The library is a data access layer and requires SQL Server's full-text indexing capabilities, which means LocalDB will not work. The unit tests connect to a local SQL Server instance. The project has an IDatabaseInitializer that drops and re-creates the database for each test, so each test has a fresh set of data it can assume it is working against, meaning each test is capable of running on its own - no ordering needed.

A problem that I've had since day one on this, but never tackled yet, is that if I simply run all of the tests at once, some will fail. If I go back and run those tests individually then they succeed. It's not always the same tests that fail when I run all at once.

I've assumed that this is because they're all running so quickly against the same database. Perhaps the database is not being properly deleted and re-created before the next test runs. On top of the plain database, some of my tests also require a SQL Server full-text index in order to pass. The full-text index populates in the background and is therefore not "ready" immediately after the test database's tables are populated; it may take a second or two to build. This causes the tests that target the full-text index to always fail, unless I step through the initializer in a debugger and pause a few seconds while the index builds.
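For what it's worth, the debugger pause can be automated. This is only a sketch, not part of my project: it polls SQL Server's `FULLTEXTCATALOGPROPERTY` function until the catalog reports an idle `PopulateStatus` (0). The catalog name, connection string, and helper name here are all placeholders.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class FullTextHelper
{
    // Blocks until the given full-text catalog finishes populating,
    // or throws if the timeout expires. Catalog name is illustrative.
    public static void WaitForPopulation(string connectionString,
                                         string catalogName,
                                         TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT FULLTEXTCATALOGPROPERTY(@catalog, 'PopulateStatus')",
                conn))
            {
                cmd.Parameters.AddWithValue("@catalog", catalogName);

                // PopulateStatus 0 means the catalog is idle,
                // i.e. background population has finished.
                while (Convert.ToInt32(cmd.ExecuteScalar()) != 0)
                {
                    if (DateTime.UtcNow > deadline)
                        throw new TimeoutException(
                            "Full-text catalog did not finish populating in time.");
                    Thread.Sleep(250);
                }
            }
        }
    }
}
```

Calling something like this at the end of the database initializer (only for the tests that need the index) would replace the manual pause.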

My question is, is there any way to avoid this clashing of database rebuilds? As a bonus, is there a way to "slow down" the tests that need the full-text index, so that it has time to populate?

Ryan
  • If your unit tests are connecting to a database, they are not unit tests. Mock out your database and run the code against in-memory values... that way you won't need to wait for the database at all. – BenjaminPaul Feb 13 '15 at 16:35
  • "When is a test not a test" - http://stackoverflow.com/questions/1257560/when-is-a-test-not-a-unit-test – BenjaminPaul Feb 13 '15 at 16:47
  • That seems like such a bad idea, though. These tests run code that runs SQL and stuff. If we mock out the database, isn't that making the assumption that the SQL is correct? I feel like mocking out the db creates a lot of assumptions that the tests are directly trying to remove? I'm not saying you're wrong, I'm considering your suggestion. It just seems less useful to mock away the db. – Ryan Feb 13 '15 at 16:47
  • A bad idea? Really? You should be testing your code in isolation. What if your db connection goes down? Someone changes the values in the database? There could be countless ways that your test could now fail that are not related to the code itself. Bad tests... not only that, but you also have the issue you are experiencing now: speed. You are already detecting a design smell! If I were you I would be looking at ways to isolate the data access layer away from the code... probably behind a repository layer. – BenjaminPaul Feb 13 '15 at 16:52
  • So I guess answering questions like "does my database's full-text index work with my library as intended" are not the job of a unit test. Maybe more so an integration test? – Ryan Feb 13 '15 at 16:57

1 Answer


You can use a database with fixed data. For your tests, use a transaction that you begin in TestInitialize and roll back in TestCleanup.
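In MSTest terms, that pattern might look like the sketch below (the test class name is illustrative). Disposing a `TransactionScope` without calling `Complete()` rolls the transaction back:

```csharp
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerRepositoryTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void TestInitialize()
    {
        // Database operations performed during the test enlist
        // in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Dispose without Complete() => rollback, so the fixed
        // data is untouched for the next test.
        _scope.Dispose();
    }
}
```

One caveat for this question in particular: full-text index population runs in the background outside the transaction, so this pattern helps with the clashing rebuilds but does not by itself solve the full-text timing issue.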

binard