I have a test suite that runs against a database in a SQL Server 2012 cluster. I want this test suite to run as fast as possible. I'm willing to sacrifice every durability and availability guarantee for performance. The database is recreated during every test run, so it doesn't even have to survive server restarts.
Changing the recovery model with

    ALTER DATABASE [dbname] SET RECOVERY SIMPLE

makes no noticeable difference.
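To rule out the change silently not applying, I check it after each run with something like this (the database name is a placeholder):

    -- Confirm the database really is using SIMPLE recovery.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'TestDb';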
DELAYED_DURABILITY looks like a good option, but it is new in SQL Server 2014 and therefore unavailable to me.
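If I were on 2014, I assume it would look roughly like this (database name is again a placeholder):

    -- Trade log-flush durability for speed across the whole database (SQL Server 2014+ only).
    ALTER DATABASE [TestDb] SET DELAYED_DURABILITY = FORCED;

    -- Or opt in per transaction instead:
    -- COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);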
What can I do to get a crazy fast database on this cluster? I looked for an in-memory database option but couldn't find one. The cluster won't let me create a database on a local disk; it insists the files must be on a clustered disk.
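For example, an attempt along these lines (the local path and file names are made up) is rejected because the path is not on a clustered disk resource:

    -- Try to put the tiny test database on a node-local disk; the clustered
    -- instance refuses any file path that is not on a clustered disk.
    CREATE DATABASE TestDb
    ON  (NAME = TestDb_data, FILENAME = 'C:\LocalFastDisk\TestDb.mdf', SIZE = 8MB)
    LOG ON (NAME = TestDb_log, FILENAME = 'C:\LocalFastDisk\TestDb.ldf', SIZE = 1MB);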
Update: The application uses advanced SQL Server features, so I'm most likely stuck with MS SQL Server. The database itself is quite small because it's only for testing (8 MB mdf, 1 MB ldf). The cluster nodes are the fastest servers on the network, so if I could misuse one of those nodes for an in-memory database, that would certainly be the fastest option. But how?