I'm using Dapper to insert data from a realtime feed into SQL Server, so I care about performance. Recently, I noticed something strange.
Out of the box, if you give Dapper a collection and an insert query, it fires an insert statement for each element. My tests show I can insert roughly 1800 objects with 12 fields each in 1 second this way (counting only the connection.Execute(...) running time).
Dapper has no batch insert functionality that I could find, so I implemented my own (constructing the parameter list and the SQL query by hand). With that, I found I can only insert one batch (which is limited to 1000 rows) in about 3 seconds (again, counting only the connection.Execute(...) calls).
So, that makes my batching almost 6 times slower than sending each row in a separate query. Can someone explain that to me? I thought people use batch operations to speed up the process.
I would like insert time to be 1 second at the most.
I use SQL Server 2012 Standard on a local network. The table I'm inserting into has only a clustered index on the primary key (a bigint field), with no non-clustered indexes and no triggers.
I can post the code, but there's really nothing special about it.
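In outline, the two code paths look like this (just a sketch: the Quotes table, the Id/Price columns, and the items/batch variables are placeholders, since the real code isn't posted):

// requires: using Dapper; using System.Text;
// 'items' / 'batch' stand for the objects coming off the feed.

// 1) Per-row inserts: Dapper unrolls the collection and runs the statement once per element.
connection.Execute(
    "insert into Quotes (Id, Price /* ...10 more columns... */) values (@Id, @Price /* ... */)",
    items);

// 2) Hand-rolled batch: one INSERT with a multi-row VALUES list
//    (SQL Server caps a VALUES list at 1000 rows), parameter names built by hand.
var sql = new StringBuilder("insert into Quotes (Id, Price /* ... */) values ");
var parameters = new DynamicParameters();
for (var i = 0; i < batch.Count; i++)                 // batch.Count <= 1000
{
    sql.Append(i == 0 ? "" : ",").Append($"(@Id{i}, @Price{i} /* ... */)");
    parameters.Add($"Id{i}", batch[i].Id);
    parameters.Add($"Price{i}", batch[i].Price);
}
connection.Execute(sql.ToString(), parameters);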

- What is the problem here? Is 1 second for insert too long? Are you not happy with one by one inserts? In both cases, why? There are benefits to both one by one and bulk inserting depending on the environment and usage. Also, is this a one off or recurring and do you absolutely *have* to use Dapper? – iamdave Jan 16 '17 at 11:10
- @iamdave the problem is I need to insert more data which is coming off a realtime feed. Right now I use Dapper and it's running at about 30% of the volume I'm planning for production usage. No, it doesn't have to be Dapper, I didn't try SqlBulkCopy yet – chester89 Jan 16 '17 at 11:12
- It might be worth having a look at your custom batch insert implementation. – grek40 Jan 16 '17 at 11:16
- Is there any overhead at the SQL Server end? Are you inserting into tables with indexes and triggers? SQL can be swamped when batches get too large. – pacreely Jan 16 '17 at 11:18
- @pacreely no, the only thing that table has is a clustered index on the primary key. I'll add that to the question – chester89 Jan 16 '17 at 11:20
- @pacreely it's bigint. Maybe I hit the fact that an ever-increasing clustered key doesn't scale? http://www.sqlpassion.at/archive/2014/04/15/an-ever-increasing-clustered-key-value-doesnt-scale/ – chester89 Jan 16 '17 at 11:23
- How big is the destination table and how much data are you inserting, say, every minute? – iamdave Jan 16 '17 at 12:05
- @iamdave destination table is about 3.3 million rows, I insert roughly 1000-1500 rows per minute – chester89 Jan 16 '17 at 12:13
- Post your code for your batch insert and the table schema. – SqlZim Jan 16 '17 at 13:59
- Have you tried the answers here: http://stackoverflow.com/questions/10689779/bulk-inserts-taking-longer-than-expected-using-dapper ? – SqlZim Jan 16 '17 at 14:01
- @SqlZim those answers insert one row at a time. – chester89 Jan 16 '17 at 14:22
- SqlBulkCopy is great, but that's for System.Data.SqlClient. Dapper is data provider independent. There is no drop-in replacement for, say, Postgres with the Npgsql data provider. Having a transaction on the connection helps, if you want to stay with Dapper. – Lucius Mar 27 '19 at 15:10
- @Lucius sure. The question was about SQL Server, so I think it's ok. Npgsql has COPY – chester89 Mar 28 '19 at 11:16
2 Answers
I'm not sure why you are using the Dapper Execute extension method if you want the best performance available.
The best free way to get maximum insert performance is to use the SqlBulkCopy class directly.
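A minimal sketch of what that could look like for the scenario in the question (the Quote type, the dbo.Quotes table and the Id/Price columns are placeholders, since the real schema isn't shown):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Copy the incoming objects into a DataTable, then hand the whole thing to SqlBulkCopy.
public static void BulkInsert(string connectionString, IEnumerable<Quote> quotes)
{
    var table = new DataTable();
    table.Columns.Add("Id", typeof(long));
    table.Columns.Add("Price", typeof(decimal));
    // ...add the remaining columns of the 12-field type here...

    foreach (var q in quotes)
        table.Rows.Add(q.Id, q.Price /* , ... */);

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.Quotes";
        bulk.WriteToServer(table);   // one bulk load instead of row-by-row INSERTs
    }
}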
Disclaimer: I'm the owner of the project Dapper Plus
This project provides easy support for the following operations:
- BulkInsert
- BulkUpdate
- BulkDelete
- BulkMerge
Example:
// CONFIGURE & MAP entity
DapperPlusManager.Entity<Order>()
                 .Table("Orders")
                 .Identity(x => x.ID);

// CHAIN & SAVE entity
connection.BulkInsert(orders)
          .AlsoInsert(order => order.Items)
          .Include(x => x.ThenMerge(order => order.Invoice)
                         .AlsoMerge(invoice => invoice.Items))
          .AlsoMerge(x => x.ShippingAddress);

The ideal batch size varies from server to server; the key factor is logging. Logging determines how resource-intensive the insert is and how large you can make your batches before you see a performance drop-off.
The key to fast inserts/updates is ensuring you meet the requirements for minimal logging; please look at this Microsoft white paper. Don't panic - you don't need to read all of it - just look at the table describing the conditions for 'minimal logging' (you may need to talk to your DBA).
Once you've got logging down to a minimum, look at your execution plans for spools: if a batch starts to spool to tempdb, you will hit a dramatic slowdown. The key is to keep each batch small enough to remain in the buffer (RAM), but the amount of buffer space available will depend on other processes.
Note: TABLOCK is not the same as TABLOCKX.
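One concrete way to apply this, if the load goes through SqlBulkCopy (a sketch of my own, not from the white paper; the table name is a placeholder): SqlBulkCopyOptions.TableLock requests the table lock mentioned above, and BatchSize is the knob to tune until you find the drop-off point.

using System.Data;
using System.Data.SqlClient;

// Request a table lock for the load and experiment with the batch size.
// Minimal logging additionally requires the right recovery model and index
// conditions -- see the table in the white paper.
public static void Load(string connectionString, DataTable rows, int batchSize)
{
    using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
    {
        bulk.DestinationTableName = "dbo.Quotes"; // placeholder table name
        bulk.BatchSize = batchSize;               // rows sent to the server per batch
        bulk.WriteToServer(rows);
    }
}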

- Will look into it. Honestly, I already employed SqlBulkCopy and it's waaay faster than all the alternatives. I can insert 1500 rows in less than 100ms. It's more than enough for me – chester89 Jan 17 '17 at 19:44