7

I have a test suite that runs against a database in a SQL Server 2012 cluster. I want this test suite to run as fast as possible. I'm willing to sacrifice every durability and availability guarantee for performance. The database is recreated during every test run, so it doesn't even have to survive server restarts.

Changing the recovery model with ALTER DATABASE [dbname] SET RECOVERY SIMPLE makes no noticeable difference.

DELAYED_DURABILITY looks like a good option, but it was introduced in SQL Server 2014 and is therefore unavailable to me.

What can I do to make a crazy fast database on this cluster? I tried looking for in-memory databases but couldn't find any options. The cluster won't allow me to create a database on a local disk, insisting that it must be located on a clustered disk.

Update: The application uses advanced SQL Server features, so I'm most likely stuck with MS SQL Server. The database itself is quite small because it's for testing (8MB mdf, 1MB ldf). The cluster nodes are the fastest servers in the network, so if I can misuse one of these nodes for an in-memory database that would certainly be fastest. But how?

molf
  • Install SQL Server Express on a separate server? – Max Sep 20 '14 at 12:35
  • I used LocalDB, which is SQL Server Express, I believe. Because the server on which the tests run is slower, the test suite actually takes (marginally) more time. – molf Sep 20 '14 at 13:28
  • SQL Server 2014 is the only version of SQL Server that allows delayed durability and non-durability. But if you don't care about durability and availability, why are you using clustering? – Dan Guzman Sep 20 '14 at 13:37
  • I use the cluster because it's available and because the nodes are fast. It is primarily used for other (non-test) purposes which do require clustering. – molf Sep 20 '14 at 13:41
  • I believe you can create a new non-clustered named instance of SQL Server on a Windows failover cluster node. That will allow you to use local disk instead of slower shared storage. The fastest local storage would be PCI-based SSD if that is an option. – Dan Guzman Sep 20 '14 at 14:16
  • Depending on how it's configured, the cluster itself shouldn't create much overhead. When you say "fastest possible database", can you be more specific? Are you looking for read or write performance? Storing your DB files properly (data, log, backups) on separate drives helps. Drive configuration itself helps (RAID 1 vs RAID 5). Indexing helps. Query tuning helps. Adding more memory to the server helps. Lots of things can be done, but it depends on what your current issue is. In the end, the biggest issue you may face is the amount of CPU and memory being consumed by other DBs on the same server. – Brian Shamblen Sep 22 '14 at 23:38
  • On your LocalDB, do you recreate all tables, views, etc. every time? Maybe you can create a LocalDB template. – Max Sep 23 '14 at 14:06
  • Depending on your requirements you might want to look at an in-memory ColumnStore: [link](http://msdn.microsoft.com/en-us/library/gg492088.aspx) – David Godwin Sep 23 '14 at 15:32
  • What is the physical size (roughly) of the database file? I mean the .mdf file and its size variance [min-max], roughly? – MHOOS Sep 25 '14 at 16:48
  • Failover clustering and fast are not words paired together in the same sentence. Clustering means you are willing to put overhead in place for high availability. The clustering service and shared resources make things slower. Dan suggests a solid-state drive. While this is faster, it is not as fast as memory. Have you looked at Hekaton for OLTP or columnstore for OLAP workloads if you are stuck with the MS stack? The fastest solution might be an in-memory database. http://www.pcworld.com/article/2111120/inmemory-technologies-move-databases-to-real-time.html – CRAFTY DBA Sep 25 '14 at 17:27
  • Never worked in a clustered environment, but would creating the mdf and ldf on ram disks help? – Akash Sep 27 '14 at 13:58
  • What criteria applied when looking for an in-memory database? i.e. does this have to be SQL Server – Paul Maxwell Sep 28 '14 at 02:51
  • @molf: have you enabled Read Committed Snapshot Isolation? And assuming that your tests do any amount of DML statements, why are the DBs so small? Is that the size they are created at? If so, are they the same size when the tests are done? I would imagine that sizing them to be above their maximum size such that they never grow (.mdf or .ldf) would be ideal, as the system will waste time allocating space for them with each growth operation (especially if you are using the default 10% growth rate on the log file). – Solomon Rutzky Sep 28 '14 at 06:12
  • @molf Could you please come up with some sort of feedback on attach-detach strategy? – MHOOS Sep 30 '14 at 09:38

4 Answers

3

If for some reason you are stuck on a clustered SQL Server instance but you do not want durability, maybe you could run your application on tempdb. Tempdb can be placed on local storage to avoid the cluster overhead.

Also note that data stored in tempdb will initially stay in the buffer pool (i.e. RAM) and only spills to disk asynchronously when the SQL Server engine finds a better use for that memory.

You can implement this solution by scripting all your database objects and using a text editor to replace the name of your database with 'tempdb'. Then execute this script to create all objects in tempdb. Also set the initial catalog of the user running the application to tempdb and/or edit the required connection strings. Keep in mind that tempdb is recreated every time the instance is restarted, so you will lose all data and DDL changes.
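As a rough sketch (the table and login names here are made up for illustration), the scripted objects and the login change would look something like this:

USE tempdb;
GO

-- Test objects are created directly in tempdb instead of the original database
-- (dbo.Customers is just a hypothetical example table)
CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);
GO

-- Point the test login's default database at tempdb,
-- or use Initial Catalog=tempdb in the connection string instead
ALTER LOGIN [TestRunner] WITH DEFAULT_DATABASE = tempdb;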

That would certainly be a best effort to "sacrifice every durability and availability guarantee".

carloscolombo
2

Could something like this work (doc)?

CREATE DATABASE Sales
ON 
( NAME = Sales_dat,
    FILENAME = 'R:\saledat.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )

Where R: is a RAM Drive
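If you try this, it may be worth placing the log file on the RAM drive as well; a possible variant (the log file name and sizes are just placeholders):

CREATE DATABASE Sales
ON
( NAME = Sales_dat,
    FILENAME = 'R:\saledat.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
    FILENAME = 'R:\salelog.ldf',
    SIZE = 5,
    MAXSIZE = 25,
    FILEGROWTH = 5 )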

Jason Sperske
1

If you want to create a database on a local disk, you can bypass the cluster restriction by creating it on a share.

You have to create a shared folder that is located on a local disk of the cluster node. Then create your database with a UNC path (e.g. \\share\DATA\data.mdf). There shouldn't be any restriction on this in 2012; in 2008 you had to use trace flag 1807.
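A minimal sketch of what that could look like (the node, share and file names are only placeholders):

-- Create the database on a share that maps to a local disk of the node
CREATE DATABASE TestDb
ON
( NAME = TestDb_dat,
    FILENAME = '\\node1\LocalData\TestDb.mdf' )
LOG ON
( NAME = TestDb_log,
    FILENAME = '\\node1\LocalData\TestDb.ldf' );

-- Only needed on SQL Server 2008 to allow network database files:
-- DBCC TRACEON (1807, -1);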

1

Use a technique called Continuous Attach-Detach to make fast nearly-on-the-fly databases:

I bet you know how to detach the database, but as a reminder, run something like the following:

EXEC sp_detach_db @dbname = N'myD'

At this stage SQL Server assumes you have dropped it. Now attach to an already existing file using something like the following:

EXEC sp_attach_db @dbname = N'myD', @filename1 = N'myCurrentPath', @filename2 = N'pathToNewFile'

OK. That was easy, but how are the new files that I am attaching the database to created? Through a very small and extremely simple .NET C# console application which reads the contents of your .mdf and .ldf into memory once and writes them out to your file of choice:

Read and Write to a Newly Created Data File

How is the whole process triggered? Attach a database trigger using DDL triggers, as described in:

Attach Database trigger

The key point is attaching to an already existing file whose contents have been read into memory once and can be written out many times [using your program of choice, C# for instance], so you can attach your database without extra overhead.
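A minimal T-SQL sketch of one detach/copy/attach cycle (the database name, paths and the use of xp_cmdshell for the file copy are assumptions; the copy could just as well be done by the C# helper described above):

-- Detach the working copy of the test database
EXEC sp_detach_db @dbname = N'myD';

-- Overwrite the working files with the pristine template files
-- (assumes xp_cmdshell is enabled; all paths are placeholders)
EXEC xp_cmdshell 'copy /Y C:\Template\myD.mdf C:\Work\myD.mdf';
EXEC xp_cmdshell 'copy /Y C:\Template\myD.ldf C:\Work\myD.ldf';

-- Re-attach, which gives you a fresh copy of the database almost instantly
EXEC sp_attach_db
    @dbname    = N'myD',
    @filename1 = N'C:\Work\myD.mdf',
    @filename2 = N'C:\Work\myD.ldf';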

MHOOS