8

Update: Here is my solution

I have a table defined as:

CREATE TABLE [dbo].[csvrf_References]
(
    [Ident] [int] IDENTITY(1,1) NOT NULL,
    [ReferenceID] [uniqueidentifier] NOT NULL DEFAULT (newsequentialid()),
    [Type] [nvarchar](255) NOT NULL,
    [Location] [nvarchar](1000) NULL,
    [Description] [nvarchar](2000) NULL,
    [CreatedOn] [datetime] NOT NULL DEFAULT (getdate()),
    [LastUpdatedOn] [datetime] NOT NULL DEFAULT (getdate()),
    [LastUpdatedUser] [nvarchar](100) NOT NULL DEFAULT (suser_sname()),

    CONSTRAINT [PK_References] PRIMARY KEY NONCLUSTERED ([ReferenceID] ASC)
) ON [PRIMARY]

I have a DataTable with columns whose names and data types match the table's columns. CreatedOn, LastUpdatedOn and LastUpdatedUser are filled with DBNull.Value, and ReferenceID is already generated. When I call the following code I get the error below.
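
For reference, a minimal sketch of how the DataTable is shaped (simplified; the single row and its sample values are made up for illustration, and the Ident value is ignored by SqlBulkCopy unless KeepIdentity is used):

DataTable table = new DataTable("csvrf_References");
table.Columns.Add("Ident", typeof(int));
table.Columns.Add("ReferenceID", typeof(Guid));
table.Columns.Add("Type", typeof(string));
table.Columns.Add("Location", typeof(string));
table.Columns.Add("Description", typeof(string));
table.Columns.Add("CreatedOn", typeof(DateTime));
table.Columns.Add("LastUpdatedOn", typeof(DateTime));
table.Columns.Add("LastUpdatedUser", typeof(string));

DataRow row = table.NewRow();
row["Ident"] = 0;                          // ignored without KeepIdentity
row["ReferenceID"] = Guid.NewGuid();       // generated client-side
row["Type"] = "Document";                  // sample value
row["Location"] = "http://example.com";    // sample value
row["Description"] = "Example reference";  // sample value
row["CreatedOn"] = DBNull.Value;           // expected to pick up the DEFAULT
row["LastUpdatedOn"] = DBNull.Value;
row["LastUpdatedUser"] = DBNull.Value;
table.Rows.Add(row);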

Code:

// Map every source column by name to the destination column of the same name.
SqlBulkCopy bulkCopy = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, bulkCopyTran);
bulkCopy.DestinationTableName = table.TableName;
bulkCopy.ColumnMappings.Clear();
foreach (DataColumn col in table.Columns)
{
    bulkCopy.ColumnMappings.Add(col.ColumnName, col.ColumnName);
}
bulkCopy.WriteToServer(table);

Error:

Error trying to BulkCopy table csvrf_References
System.InvalidOperationException: Column 'CreatedOn' does not allow DBNull.Value.
at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata, Boolean isNull, Boolean& isSqlType, Boolean& coercedToDataFeed)

I have looked all over and I can't seem to find an answer for this. The SqlBulkCopy class does not seem to honor default values, even though the documentation says it does. What am I doing wrong here?

James Nix

3 Answers

17

For part 1, "field that is NOT NULL with a DEFAULT", you should not be sending the field in the first place. It should not be mapped. There is no need to change that field to accept NULLs just for this.

For part 2, "field that is NULL with a DEFAULT", that will work to get the default value when passing in DbNull.Value, as long as you don't have the SqlBulkCopyOptions set to KeepNulls, else it will insert an actual database NULL.

Since there is some confusion about the SqlBulkCopyOption of KeepNulls, let's look at its definition:

Preserve null values in the destination table regardless of the settings for default values. When not specified, null values are replaced by default values where applicable.

This means that a DataColumn set to DbNull.Value will be inserted as a database NULL, even if the column has a DEFAULT CONSTRAINT, if the KeepNulls option is specified. It is not specified in your code, which leads to the second part of the definition: DbNull.Value values are replaced by "default values" where applicable. Here "applicable" means that the column has a DEFAULT CONSTRAINT defined on it.

Hence, when a DEFAULT CONSTRAINT exists, a non-DbNull.Value value is sent in as-is, while DbNull.Value should translate to the SQL keyword DEFAULT, which an INSERT statement interprets as taking the value of the DEFAULT CONSTRAINT. Of course, it is also possible that SqlBulkCopy, if issuing individual INSERT statements, simply leaves that field out of the column list when it is set to NULL for that row, which would also pick up the default value. In either case, the end result is that it works as you expected, and my testing shows that it does indeed work in this manner.
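
As a small sketch of that difference (assuming a destination table whose LastUpdatedOn column is declared NULL with a DEFAULT of GETDATE(); conn, tran, table, and the table name are placeholders):

// Default options: DbNull.Value in a NULLable column that has a
// DEFAULT CONSTRAINT ends up as the DEFAULT value (GETDATE() here).
using (var bulkCopy = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tran))
{
    bulkCopy.DestinationTableName = "dbo.SomeTable";
    bulkCopy.WriteToServer(table);
}

// KeepNulls: the same DbNull.Value is stored as a real NULL and the
// DEFAULT CONSTRAINT is bypassed.
using (var bulkCopy = new SqlBulkCopy(conn, SqlBulkCopyOptions.KeepNulls, tran))
{
    bulkCopy.DestinationTableName = "dbo.SomeTable";
    bulkCopy.WriteToServer(table);
}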

To be clear about the distinction:

  • If a field in the database is set to NOT NULL and has a DEFAULT CONSTRAINT defined on it, your options are:

    • Pass in the field (i.e. it will not pick up the DEFAULT value), in which case it can never be set to DbNull.Value

    • Do not pass in the field at all (i.e. it will pick up the DEFAULT value), which can be accomplished by either:

      • Do not have it in the DataTable or query or DataReader or whatever is being sent in as the source, in which case you might not need to specify the ColumnMappings collection at all

      • If the field is in the source, then you must specify the ColumnMappings collection so that you can leave that field out of the mappings (see the sketch after this list).

    • Setting, or not setting, KeepNulls does not change the above noted behavior.

  • If a field in the database is set to NULL and has a DEFAULT CONSTRAINT defined on it, your options are:

    • Do not pass in the field at all (i.e. it will pick up the DEFAULT value), which can be accomplished by either:

      • Do not have it in the DataTable or query or DataReader or whatever is being sent in as the source, in which case you might not need to specify the ColumnMappings collection at all

      • If the field is in the source, then you must specify the ColumnMappings collection so that you can leave that field out of the mappings.

    • Pass in the field set to a value that is not DbNull.Value, in which case it will be set to this value and not pick up the DEFAULT value

    • Pass in the field as DbNull.Value, in which case the effect is determined by whether or not SqlBulkCopyOptions is being passed in and has been set to KeepNulls:

      • KeepNulls is not set will pick up the DEFAULT value

      • KeepNulls is set will leave the field set to NULL
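
As a concrete sketch of the "leave the field out of the mappings" option for the table in the question (conn, bulkCopyTran, and table are the objects from the question's code; the three NOT NULL columns with DEFAULT CONSTRAINTs are simply not mapped):

using (var bulkCopy = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, bulkCopyTran))
{
    bulkCopy.DestinationTableName = "dbo.csvrf_References";

    // Map only the columns that are actually supplied. CreatedOn,
    // LastUpdatedOn, and LastUpdatedUser are left out of the mappings,
    // so the server fills them from their DEFAULT CONSTRAINTs.
    bulkCopy.ColumnMappings.Add("ReferenceID", "ReferenceID");
    bulkCopy.ColumnMappings.Add("Type", "Type");
    bulkCopy.ColumnMappings.Add("Location", "Location");
    bulkCopy.ColumnMappings.Add("Description", "Description");

    bulkCopy.WriteToServer(table);
}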


Here is a simple test to see how the DEFAULT keyword works:

--DROP TABLE ##DefaultTest;
CREATE TABLE ##DefaultTest
(
  Col1 INT,
  [CreatedOn] [datetime] NOT NULL DEFAULT (GETDATE()),
  [LastUpdatedOn] [datetime] NULL DEFAULT (GETDATE())
);
INSERT INTO ##DefaultTest (Col1, CreatedOn) VALUES (1, DEFAULT);
INSERT INTO ##DefaultTest (Col1, LastUpdatedOn) VALUES (2, DEFAULT);
INSERT INTO ##DefaultTest (Col1, LastUpdatedOn) VALUES (3, NULL);
INSERT INTO ##DefaultTest (Col1, LastUpdatedOn) VALUES (4, '3333-11-22');

SELECT * FROM ##DefaultTest ORDER BY Col1 ASC;

Results:

Col1   CreatedOn                  LastUpdatedOn
1      2014-11-20 12:34:31.610    2014-11-20 12:34:31.610
2      2014-11-20 12:34:31.610    2014-11-20 12:34:31.610
3      2014-11-20 12:34:31.610    NULL
4      2014-11-20 12:34:31.613    3333-11-22 00:00:00.000
Solomon Rutzky
  • Are you saying you were able to get this to work using SqlBulkCopy or that it should work because `DBNull.Value` should translate to `Default` when there is a default constraint on the field in question? For me, using .NET 4.5 and SQL Server 2012 I was unable to get SqlBulkCopy to work passing DBNull.Value to a non-nullable field with a default constraint. – Tim Lentine Nov 20 '14 at 17:56
  • @TimLentine : hey there. To clarify, I am saying that a) I did get this to work via `SqlBulkCopy`, and b) that you _cannot_ specify a field that is marked `NOT NULL` at all if you want to pick up the default value (which I said in Part 1). Part 2 where I mention translating to `DEFAULT` is regarding NULLable fields, which covers what James said in the comment on your answer regarding "other tables where I will have columns that contain data as well as nulls". But yes, it is confusing that `SqlBulkCopy` handles `DbNull.value` differently based on the field being declared `NULL` vs `NOT NULL`. – Solomon Rutzky Nov 20 '14 at 18:04
  • @TimLentine : I just updated my answer with a bullet list that hopefully maps out the various scenarios more clearly. – Solomon Rutzky Nov 20 '14 at 18:25
  • Thanks for the list, that helps clarify your points. (I was thinking that I was missing some way of making it work as the OP described in his question, but that is not the case.) The behavior as it relates to DBNull.Value is quirky indeed. – Tim Lentine Nov 20 '14 at 18:57
  • If only the MS docs would state this outright! Thank you for the explanation. – James Nix Nov 21 '14 at 14:33
  • @JamesNix Yes, it is definitely confusing due to being inconsistent behavior. `DbNull.Value` is just fine to make use of a DEFAULT on a NULLable field so why not also on a NOT NULL field? The only reason I can think of is that there is a validation step that happens first which compares the data to the schema but is unaware of the BulkCopyOptions value. I suspect that this behavior isn't intentional and is either unknown by the developers (which sounds like a bug), or known but either "not worth fixin" or "haven't fixed yet". Either way, :( . – Solomon Rutzky Nov 21 '14 at 15:52
  • First, thank you for the detailed explanation. One clarification though. If a field in the db is set to NOT NULL and has a DEFAULT CONSTRAINT defined, and you don't pass in the field at all (so it picks up the DEFAULT value), and you do not have it in the source (i.e. the 4th bullet point), you said "you might not need to specify the ColumnMappings collection." I got odd behavior with this; I got non-nulls, but not the ones set by the DEFAULT constraint. To get the expected values I had to have the columns in the source, but leave them out of the mapping (i.e. the 5th bullet point). – rrreee Jan 09 '17 at 20:14
  • @rrreee Yer welcome, and thanks for that info. Your `NOT NULL` column with the Default Constraint: what is its ordinal position in the table? Is it the last / right-most column, or in the middle? I am guessing it is in the middle? If I am correct, I would guess that it fills in columns from the lowest ordinal position (i.e. left-most) as reported in `sys.columns`. In this case, it would probably work if the NOT NULL with Default column was at the end (i.e. highest ordinal position), which I suspect is what I meant by "_might_ not need to specify the mappings". If you can confirm, I can update. – Solomon Rutzky Jan 09 '17 at 20:33
  • I have two `NOT NULL` columns with default constraints and they are the second to last and last columns. I'd left them out of my DataTable, but the behavior was weird; one of the columns is set to `DEFAULT (@@SPID)` but when I bulk copied in 1000 rows, I ended up with values 0 to 999 in the column. At first, I preferred not specifying the columns in the DataTable, but my preference has changed. Now I want to be explicit. When I tried with only the last column left out, and with no mapping, it works as expected, so your update would be right, if I remove the oddness of what I'm doing. – rrreee Jan 09 '17 at 21:07
  • @rrreee What is the default for the other NOT NULL column? And which one is last and which 2nd-to-last? was the behavior equally odd for that other NOT NULL column? for the column with `DEFAULT (@@SPID)`, were those 0 to 999 values the same for all rows in a particular execution of `WriteToServer()`, or would the values be different per row in a particular execution? – Solomon Rutzky Jan 09 '17 at 21:32
  • The default for the other column is the result of a function, and (via a bit more complexity) gets me the login_time from sys.dm_exec_sessions. The table definition ends with `session_id SMALLINT NOT NULL DEFAULT (@@spid), session_login_time DATETIME NOT NULL DEFAULT (dbo.GetSessionLoginTime())`. The behavior for the other column was not as odd: I got the same value in all rows (which I need), but I'm not sure if it was the right value. One invocation of `WriteToServer()` with 1000 rows results in the values 0 through 999 for session_id. – rrreee Jan 10 '17 at 15:41
  • @rrreee Interesting, especially how the `session_id` column will have varying values. Have you tried specifying the `ColumnMapping` prior to adding the two columns to the source `DataTable`? Also, do you have the ability / time to run SQL Profiler while executing this in the original manner that should produce the "odd" values? I am curious what the actual `INSERT` statement looks like. – Solomon Rutzky Jan 10 '17 at 16:04
  • The `ColumnMapping` is part of the SqlBulkCopy, not the DataTable, so is there a way to do the mapping first? I won't have time over the next few days to run profiler, though I am quite curious too. I'll try to get back to it though, as others will likely be interested too (see part 2 of this comment). – rrreee Jan 10 '17 at 16:23
  • Part 2: What I'm actually doing is working with Always Encrypted, trying to support stored procs that had inserted data provided via TVP or temp table, but which must now use SqlBulkCopy, as that's the only way supported to insert encrypted data. The closest match is to SqlBulkCopy to a memory optimized table which the proc then accesses. A security policy filters on these two columns, so it acts like a temp table. My solution is the only one there is right now, so Microsoft may ask me the same questions. – rrreee Jan 10 '17 at 16:25
3

The “SqlBulkCopy column does not allow DbNull.Value” error occurs when the source and destination tables have a different column order.
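
If that is the cause, the usual remedy is to map the columns explicitly by name, because SqlBulkCopy pairs source and destination columns by ordinal position when no mappings are specified. A minimal sketch (conn, sourceTable, and the destination table name are placeholders):

using (var bulkCopy = new SqlBulkCopy(conn))
{
    bulkCopy.DestinationTableName = "dbo.DestinationTable";  // placeholder name

    // With no mappings, columns are matched by ordinal position, so a
    // different column order can push a value (or a null) into the wrong
    // destination column. Mapping by name removes that dependency.
    foreach (DataColumn col in sourceTable.Columns)
    {
        bulkCopy.ColumnMappings.Add(col.ColumnName, col.ColumnName);
    }

    bulkCopy.WriteToServer(sourceTable);
}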

Anasuddeen
  • I got this error when transferring data from one database table to another. As @James Nix summarizes, the root cause is how we set the _default_ value property of the column. Thanks. – Anasuddeen Sep 06 '16 at 09:57
  • This is the correct answer! I was searching for issues with a MySQL integer column cast, then in my code and possible null values, etc. But this was the silly thing! Thanks – sanpat Sep 10 '20 at 18:56
0

Reading the documentation regarding SqlBulkCopy, particularly SqlBulkCopyOptions, I would draw the same conclusion that you did: SQL Server should be "smart" enough to use the default constraint where applicable, especially since you are not using the SqlBulkCopyOptions.KeepNulls attribute.

However, in this case I suspect the documentation is subtly incorrect; if not incorrect, it is certainly misleading.

As you have observed, with a non-nullable field with a default constraint (in this case GetDate()) the SqlBulkCopy fails with the aforementioned error.

As a test, try creating a second table that mimics the first, but this time make the CreatedOn and LastUpdatedOn fields nullable. In my tests, using the default options (SqlBulkCopyOptions.Default) the process works without error and CreatedOn and LastUpdatedOn both have the correct DateTime value populated in the table despite the fact that the DataTable's values for those fields were DBNull.Value.

As yet another test, using the same (nullable fields) table, perform the SqlBulkCopy only this time use the SqlBulkCopyOptions.KeepNulls attribute. I suspect you will see the same results I did, that is, CreatedOn and LastUpdatedOn are both null in the table.

This behavior is similar to executing a "vanilla" T-SQL statement to insert data into the table.

Using the original table (non-nullable fields) as an example, if you execute

INSERT INTO csvrf_References ([Type], [Location], [Description], [CreatedOn], [LastUpdatedOn], [LastUpdatedUser]) 
VALUES ('test', 'test', 'test', null, null, null)

you will receive a similar error regarding null values not being allowed in the table.

However, if you omit the non-nullable fields from the statement SQL Server uses the Default Constraints for those fields:

INSERT INTO csvrf_References ([Type], [Location], [Description])
VALUES ('test', 'test', 'still testing')

Based on this, I would suggest either making the fields nullable in the table (not really a great option in my opinion) OR using a "staging" table for the SqlBulkCopy process (where the fields are nullable and have a similar default constraint in place). Once the data is in the staging table execute a second statement to move the data into the actual final destination table.
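
A rough C# sketch of that staging-table idea (the staging table name and its simplified definition are assumptions for illustration; conn, bulkCopyTran, and table are the objects from the question, and in this sketch the defaulted columns are simply left out of both the staging table and the final INSERT so they pick up their DEFAULT constraints on the final insert):

// 1) Create a nullable staging table on the same connection/transaction.
using (var cmd = new SqlCommand(@"
    CREATE TABLE #csvrf_References_Staging
    (
        ReferenceID   UNIQUEIDENTIFIER NULL,
        [Type]        NVARCHAR(255)    NULL,
        Location      NVARCHAR(1000)   NULL,
        [Description] NVARCHAR(2000)   NULL
    );", conn, bulkCopyTran))
{
    cmd.ExecuteNonQuery();
}

// 2) Bulk copy into the staging table, mapping only the columns it has.
using (var bulkCopy = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, bulkCopyTran))
{
    bulkCopy.DestinationTableName = "#csvrf_References_Staging";
    bulkCopy.ColumnMappings.Add("ReferenceID", "ReferenceID");
    bulkCopy.ColumnMappings.Add("Type", "Type");
    bulkCopy.ColumnMappings.Add("Location", "Location");
    bulkCopy.ColumnMappings.Add("Description", "Description");
    bulkCopy.WriteToServer(table);
}

// 3) Move the rows into the real table; the columns omitted from the
//    INSERT list (CreatedOn, LastUpdatedOn, LastUpdatedUser) get their
//    DEFAULT constraint values here.
using (var cmd = new SqlCommand(@"
    INSERT INTO dbo.csvrf_References (ReferenceID, [Type], Location, [Description])
    SELECT ReferenceID, [Type], Location, [Description]
    FROM #csvrf_References_Staging;", conn, bulkCopyTran))
{
    cmd.ExecuteNonQuery();
}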

Tim Lentine
  • Yes, I expect SQL Server to fill the column with the default value when a null is supplied. See: http://stackoverflow.com/a/4244132/3874334 This example demonstrates fields that I will LIKELY always leave null, but I have other tables where I will have columns that contain data as well as nulls. – James Nix Nov 19 '14 at 22:20
  • @JamesNix That is not how SQL Server DEFAULTS work, with or without SqlBulkCopy. DEFAULTS are only applied when the column is *Not Specified*. If you specify NULL for a column, it will attempt to make it NULL (which will cause an error if it is declared NOT NULL), and the DEFAULT value will not be used. When the doc says "*... will observe any defaults..*" it means that DEFAULTs will be observed by SqlBulkCopy in the same way that they are observed by any other SQL command. – RBarryYoung Nov 20 '14 at 17:49