I'm using Entity Framework Code First to access a set of tables whose keys will be set by an int-based SEQUENCE in a default constraint. EF seems to have trouble handling this: it insists on using SCOPE_IDENTITY() after an insert to populate integer key fields, and since the column isn't an IDENTITY column, that returns NULL.
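For concreteness, here's a minimal sketch of the kind of setup I mean (the table, sequence, and entity names are invented for illustration):

// Assumed SQL Server schema (hypothetical names):
//
//   CREATE SEQUENCE dbo.OrderSeq AS int START WITH 1;
//   CREATE TABLE dbo.Orders (
//       Id   int NOT NULL CONSTRAINT DF_Orders_Id
//                DEFAULT (NEXT VALUE FOR dbo.OrderSeq) PRIMARY KEY,
//       Name nvarchar(100) NOT NULL);

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class Order
{
    // "Identity" here just means "store generated"; EF assumes it can read
    // the value back with SCOPE_IDENTITY(), which only works for IDENTITY
    // columns, not for a sequence-backed default.
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Name { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}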
Digging into the code, it looks hard-coded. See the IsValidScopeIdentityColumnType method (in DmlSqlGenerator.cs, the SQL Server provider's DML generator) a little over halfway down the page. If this method returns true, the inserted key value is retrieved with SCOPE_IDENTITY(); otherwise an OUTPUT clause is generated (Guid/uniqueidentifier is the typical use case there).
// make sure it's a primitive type
if (typeUsage.EdmType.BuiltInTypeKind != BuiltInTypeKind.PrimitiveType)
{
    return false;
}

// check if this is a supported primitive type (compare by name)
var typeName = typeUsage.EdmType.Name;

// integer types
if (typeName == "tinyint"
    || typeName == "smallint"
    || typeName == "int"
    || typeName == "bigint")
{
    return true;
}
Is there any way to fool this method into returning false for an integral key field? Once I start seeing things like 'EdmType' I'm past the limits of what I understand about how EF mapping really works. Maybe there's some way to use a user-defined type to fool it? But it's really the configuration on the .NET side that needs some sort of update. (The only workaround I've found so far, which sidesteps the question entirely, is sketched at the end of this post.)
See also the UseGeneratedValuesVariable method in that same file for where this is used...
It's not clear to me why OUTPUT isn't just used across the board here. Maybe performance, or maybe because an OUTPUT clause that returns rows directly to the client isn't allowed when the target table has enabled triggers?
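For what it's worth, the promised workaround: take the generated-value machinery out of the picture entirely by marking the key as not store-generated and fetching the sequence value myself before the insert. A minimal sketch, again assuming the hypothetical dbo.OrderSeq sequence and Order entity from the top of the question; note the column default then simply never fires (EF sends the Id explicitly), and each insert costs an extra round-trip:

// Change the key mapping from Identity to None, so the insert pipeline
// never tries to read the value back with SCOPE_IDENTITY() or OUTPUT:
[Key]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int Id { get; set; }

// Usage (requires 'using System.Linq;' for Single()):
using (var db = new OrdersContext())
{
    var order = new Order { Name = "example" };

    // Ask SQL Server for the next sequence value up front
    // (NEXT VALUE FOR requires SQL Server 2012 or later).
    order.Id = db.Database
                 .SqlQuery<int>("SELECT NEXT VALUE FOR dbo.OrderSeq")
                 .Single();

    db.Orders.Add(order);
    db.SaveChanges(); // INSERT supplies Id explicitly; the default never fires
}

That works, but I'd much rather keep the key generation in the database and have EF read it back, hence the question.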