What is the recommended batch size for SqlBulkCopy? I'm looking for a general formula I can use as a starting point for performance tuning.

6 Answers
I have an import utility sitting on the same physical server as my SQL Server instance. Using a custom IDataReader, it parses flat files and inserts them into a database using SqlBulkCopy. A typical file has about 6M qualified rows, averaging 5 columns of decimal and short text, about 30 bytes per row.
Given this scenario, I found a batch size of 5,000 to be the best compromise of speed and memory consumption. I started with 500 and experimented with larger values. I found 5,000 to be, on average, 2.5x faster than 500. Inserting the 6 million rows takes about 30 seconds with a batch size of 5,000 and about 80 seconds with a batch size of 500.
10,000 was not measurably faster. Moving up to 50,000 improved speed by a few percentage points, but it wasn't worth the increased load on the server. Above 50,000 there was no further improvement in speed.
This isn't a formula, but it's another data point for you to use.
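For reference, a minimal sketch of the kind of setup described above might look like the following. `FlatFileDataReader` is a hypothetical stand-in for the custom IDataReader, and the connection string, file path, and table name are placeholders.

```csharp
using System.Data;
using System.Data.SqlClient;

class FlatFileImporter
{
    // Sketch only: FlatFileDataReader is a hypothetical IDataReader that
    // parses the flat file; swap in your own reader and connection details.
    public static void Import(string connectionString, string filePath)
    {
        using (var connection = new SqlConnection(connectionString))
        using (IDataReader reader = new FlatFileDataReader(filePath))
        {
            connection.Open();

            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "dbo.TargetTable";
                bulkCopy.BatchSize = 5000;        // the sweet spot reported above
                bulkCopy.WriteToServer(reader);   // streams rows from the reader to SQL Server
            }
        }
    }
}
```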

- One thing to consider is if the table is empty and has indexes. In those cases you might want to upload everything in one batch as mentioned here: https://technet.microsoft.com/en-us/library/ms177445(v=sql.105).aspx "If you bulk import data into an empty table with indexes and you specify the batch size, the table becomes non-empty after the first batch. Starting with the second batch, the data is fully-logged. For empty indexed tables, consider performing the bulk import in a single batch." – Sal Jan 24 '17 at 20:33
- SqlBulkCopy streams the data from the source (e.g. a DataTable) to SQL Server, so what "increased load on the server" does a large batch size (e.g. 50,000) cause? – BornToCode Mar 12 '19 at 13:42
This is an issue I have also spent some time looking into. I am looking to optimize importing large CSV files (16+ GB, 65+ million records, and growing) into a SQL Server 2005 database using a C# console application (.Net 2.0). As Jeremy has already pointed out, you will need to do some fine-tuning for your particular circumstances, but I would recommend you have an initial batch size of 500, and test values both above and below this.
I got the recommendation to test values between 100 and 1000 for batch size from this MSDN forum post, and was skeptical. But when I tested for batch sizes between 100 and 10,000, I found that 500 was the optimal value for my application. The 500 value for SqlBulkCopy.BatchSize is also recommended here.
To further optimize your SqlBulkCopy operation, check out this MSDN advice; I find that using SqlBulkCopyOptions.TableLock helps to reduce loading time.
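As a rough illustration of that combination (a sketch, not the poster's actual code), the 500-row batch size and the TableLock option might be wired up like this; the connection string, table name, and pre-populated DataTable are placeholders.

```csharp
using System.Data;
using System.Data.SqlClient;

class CsvBulkLoader
{
    // Sketch only: csvRows is assumed to already hold the parsed CSV data.
    public static void Load(string connectionString, DataTable csvRows)
    {
        // TableLock takes a bulk-update lock on the table instead of row locks,
        // which is what the answer reports as reducing loading time.
        using (var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulkCopy.DestinationTableName = "dbo.ImportedCsvRows";
            bulkCopy.BatchSize = 500;          // the optimal value found for this workload
            bulkCopy.WriteToServer(csvRows);   // copies the rows in 500-row batches
        }
    }
}
```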
- I would reckon running the bulk copy command on the server itself would probably be faster. – Captain Kenpachi Nov 19 '14 at 13:23
As others have stated, it depends on your environment, specifically the row volume and network latency.
Personally, I'd start by setting the BatchSize property to 1000 rows and see how that performs. If it works, I keep doubling the number of rows (e.g. to 2000, 4000, etc.) until I get a timeout.
Otherwise, if a timeout occurs at 1000, I decrease the number of rows by half (e.g. to 500) until it works.
In each case, I keep doubling (if successful) or halving (if failed) the difference between the last two attempted batch sizes until I find a sweet spot.
The other factor to consider is how long it takes to copy a single batch of rows. Timeouts will occur if the time to copy a batch exceeds the BulkCopyTimeout property, which defaults to 30 seconds. You might try doubling the BulkCopyTimeout property to 60 seconds. This allows a longer period of time for a larger batch of rows to be copied. For example, a batch of 50,000 rows might take around 40 seconds, just exceeding the 30-second limit, so bumping it up to 60 seconds might help with performance.
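A sketch of a single probe in that doubling/halving loop, assuming a placeholder source reader and table name, could look like this; the 60-second BulkCopyTimeout is the doubled value suggested above.

```csharp
using System.Data;
using System.Data.SqlClient;

class BatchSizeProbe
{
    // Sketch only: call this repeatedly with doubled or halved batch sizes.
    // A timeout surfaces as a SqlException thrown from WriteToServer.
    public static void TryBatchSize(string connectionString, IDataReader rows, int batchSize)
    {
        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.TargetTable";   // placeholder
            bulkCopy.BatchSize = batchSize;                      // e.g. 1000, then 2000, 4000, ...
            bulkCopy.BulkCopyTimeout = 60;                       // default is 30 seconds
            bulkCopy.WriteToServer(rows);
        }
    }
}
```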

This all depends on your implementation.
What kind of speed can you expect on your network? Are you using it in Forms or ASP.Net? Do you need to alert the user of progress? What is the size of the total job?
In my experience, running bulk copy without a batch size specified will cause timeout issues. I like to start with something like 1000 records and make adjustments from there.

- Speed: Varies, WebForms: Yes, ASP.NET: Yes, Wide tables: Yes, Narrow tables: Yes. Thousands of rows: yes. Millions of rows: yes. If you can think of a scenario, I'm probably doing it. – Jonathan Allen Apr 23 '09 at 00:18
- I have to stick by my previous answer then. I don't think there's a silver bullet. – Jeremy Apr 23 '09 at 00:29
The values @Alric suggests seem ultra-conservative to me.
I tried just now, as part of the project I'm working on, with batch sizes of 10,000, 100,000 and 1,000,000. My destination table is a clustered columnstore: bigint, int, int, int, date, decimal(32,15).
| BatchSize | 10,000 | 100,000 | 1,000,000 |
| --- | --- | --- | --- |
| Start time | 3:31:25 PM | 3:28:43 PM | 3:36:57 PM |
| End time | 3:35:24 PM | 3:29:38 PM | 3:37:44 PM |
| Elapsed (minutes) | 3.983333332 | 0.916666671 | 0.783333328 |
SSMS Activity Monitor on 5 second refresh was showing <4MB/sec write on this database. This feels low to me, but I don't really know how to calibrate my expectations accurately.
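For anyone wanting to reproduce a comparison like this, a sketch of the measurement loop (with a placeholder source reader and destination table name) might be:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class BatchSizeTimer
{
    // Sketch only: getSourceReader should return a fresh reader over the same
    // source rows for each run, and the destination should be reset between runs.
    public static void Compare(string connectionString, Func<IDataReader> getSourceReader)
    {
        foreach (var batchSize in new[] { 10000, 100000, 1000000 })
        {
            var stopwatch = Stopwatch.StartNew();

            using (var bulkCopy = new SqlBulkCopy(connectionString))
            using (IDataReader reader = getSourceReader())
            {
                bulkCopy.DestinationTableName = "dbo.ColumnstoreTarget";  // placeholder
                bulkCopy.BatchSize = batchSize;
                bulkCopy.BulkCopyTimeout = 0;   // 0 = no timeout while measuring
                bulkCopy.WriteToServer(reader);
            }

            stopwatch.Stop();
            Console.WriteLine($"BatchSize {batchSize}: {stopwatch.Elapsed.TotalMinutes:F2} minutes");
        }
    }
}
```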

- @amonroejj what will happen if we do something like `new SqlBulkCopy(connection){ BatchSize = numberOfRows, // could be 1M or 10M DestinationTableName = "dbo.Contacts" };` – Akhil RJ Feb 15 '22 at 17:03