
How do I set up the database before all the tests start running? How do I clean the database after all the tests have run? The tests run in parallel, so I can't rely on setting up at the beginning, or cleaning up at the end, of each test.

3 Answers

1

If you have a bash script (or some tooling in your CI) that executes your tests, you can set up the database before running the unit tests and do the same for the cleanup after they finish.

Just wanted to offer another way to solve your problem.

David
0

If I understand you correctly, you want a single database setup for all tests in all test classes. Please try using a Collection Fixture. There is a simple example here: https://xunit.github.io/docs/shared-context.html

There are also examples of how to use it with a database.
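For illustration, here is a minimal sketch of such a collection fixture; the class names and the collection name are made up, and the actual database creation/cleanup code is left as comments for you to fill in:

    using System;
    using Xunit;

    public class DatabaseFixture : IDisposable
    {
        public DatabaseFixture()
        {
            // Runs once, before the first test in the collection:
            // create the test database / run migrations here.
        }

        public void Dispose()
        {
            // Runs once, after the last test in the collection has finished:
            // drop or clean the test database here.
        }
    }

    [CollectionDefinition("Database collection")]
    public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
    {
        // Never instantiated; it only ties DatabaseFixture to the collection name.
    }

    [Collection("Database collection")]
    public class MyIntegrationTests
    {
        [Fact]
        public void Query_returns_expected_rows()
        {
            // Every test class marked with [Collection("Database collection")]
            // shares the single DatabaseFixture instance.
        }
    }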

majewka
  • The database doesn't get cleaned between tests. I want the tests to run one after the other, with the database cleaned up after one test has finished and before the next one starts. – Avishag Saban Oct 31 '18 at 16:34
  • Sorry for the delay. In the xUnit documentation you can find: "xUnit.net creates a new instance of the test class for every test that is run, so any code which is placed into the constructor of the test class will be run for every single test", so just place the database initialization in the constructor. – majewka Nov 06 '18 at 12:15
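As a rough illustration of the comment above, a per-test setup/cleanup sketch could look like this (the class, test name, and comments are placeholders):

    using System;
    using Xunit;

    public class OrderTests : IDisposable
    {
        public OrderTests()
        {
            // Per-test setup: xUnit builds a new instance for every test,
            // so seed the data this test needs here.
        }

        public void Dispose()
        {
            // Per-test cleanup: remove the seeded data here.
        }

        [Fact]
        public void Inserting_an_order_persists_it()
        {
            // ... test body ...
        }
    }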
0

How do I set up the database before all the tests start running? How do I clean the database after all the tests have run?

The answer to the question as asked is here: you'll need to implement a fixture for initialization and cleanup, and mark all the tests with an attribute that puts them into the same test collection. See the Shared context docs for details.

If it were NUnit, you could have used the (in my opinion) more intuitive OneTimeSetUp and OneTimeTearDown attributes.
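For comparison, a minimal NUnit sketch using those attributes might look like this (fixture and method names are made up):

    using NUnit.Framework;

    [TestFixture]
    public class DatabaseTests
    {
        [OneTimeSetUp]
        public void CreateDatabase()
        {
            // Runs once before all tests in this fixture.
        }

        [OneTimeTearDown]
        public void DropDatabase()
        {
            // Runs once after all tests in this fixture have finished.
        }

        [Test]
        public void SomeQueryWorks()
        {
            // ... test body ...
        }
    }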

But I'd like to add that it's not possible to use a single database for integration tests running in parallel if those tests mutate the data. That way you introduce inter-test dependencies and end up with flaky tests. E.g. one test asserts on the data it has just created, but another test running in parallel is fast enough to delete it, due to the race condition on the shared state.

If you really need parallel test runs, you'll need to prepare a pool of databases (static, or created dynamically on demand with something like Docker and Testcontainers) and acquire a database per test collection.
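A rough sketch of that idea, with a purely hypothetical in-process pool standing in for whatever mechanism actually provisions the databases (pre-created ones, or containers started on demand):

    using System;
    using System.Collections.Concurrent;

    // Hypothetical pool: hands out pre-created databases so each test
    // collection gets one of its own for the duration of the run.
    public static class DatabasePool
    {
        private static readonly ConcurrentBag<string> Available = new ConcurrentBag<string>(
            new[] { "Server=localhost;Database=tests_1", "Server=localhost;Database=tests_2" });

        public static string Acquire() =>
            Available.TryTake(out var connectionString)
                ? connectionString
                : throw new InvalidOperationException("Database pool exhausted");

        public static void Release(string connectionString) => Available.Add(connectionString);
    }

    public class PooledDatabaseFixture : IDisposable
    {
        public string ConnectionString { get; }

        public PooledDatabaseFixture()
        {
            ConnectionString = DatabasePool.Acquire(); // one database per test collection
            // reset schema / seed baseline data here
        }

        public void Dispose() => DatabasePool.Release(ConnectionString);
    }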

There are other ways to optimize test runs while keeping the tests sequential, mostly related to the data insertion/cleanup:

  • Separate tests into Read and Write with regard to data mutations, and run the former in parallel and without any cleanup (e.g. HTTP GET requests are safe to run in parallel if the system under test is a web app);
  • Use a single insert/delete script for all the data and optimize it (check out the Reseed library I'm developing for both insertion and deletion, or Respawn for deletion);
  • Use database snapshot restore, which might be faster than a full insert/delete cycle;
  • Wrap each test in a transaction and roll it back afterwards (see the sketch below).
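As a sketch of the transaction option above, assuming the data access code opens its connections inside the test so they enlist in the ambient System.Transactions transaction:

    using System;
    using System.Transactions;
    using Xunit;

    public class TransactionalTests : IDisposable
    {
        private readonly TransactionScope _scope;

        public TransactionalTests()
        {
            // Opened before each test; connections opened inside the test
            // enlist in this ambient transaction.
            _scope = new TransactionScope();
        }

        public void Dispose()
        {
            // No call to _scope.Complete(), so disposing rolls the transaction
            // back and reverts everything the test wrote.
            _scope.Dispose();
        }

        [Fact]
        public void Writes_are_reverted_after_the_test()
        {
            // insert / update data here
        }
    }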
Uladzislaŭ