
I'm currently working on an ASP.NET MVC 4 application using Entity Framework 5. We used Code First for the initial development phase, but have since disabled automatic migrations and now design new tables directly in SSMS and write the POCOs by hand. Everything has been working well.

Recently I identified a strange issue in production: the auto-increment identity value in one of the initially designed tables skipped ahead by more than 900. This has happened three times within the last three months. I debugged the application locally but could not reproduce it, and there is no obvious pattern or trend.

Model:

public class Enquiry
{
    [Key]
    [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
    public Int64 EnquiryId { get; set; }

    [Required]
    public int UserId { get; set; }

    [Required]
    public byte Bid { get; set; }

    ...

    [Required]
    public DateTime Created { get; set; }

    [Required]
    public DateTime Modified { get; set; }
}

public class EnquiryDetail
{
    [Key]
    public Int64 EnquiryId { get; set; }

    [Required]
    public int CreditScore { get; set; }

    [Required]
    public byte BidMode { get; set; }

    public virtual Enquiry Enquiry { get; set; }
}

DbContext:

public class EscrowDb : DbContext
{

    public EscrowDb()
        : base("name=DefaultConnection")
    {

    }
    public DbSet<Enquiry> Enquiries { get; set; }
    public DbSet<EnquiryDetail> EnquiryDetails { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Entity<EnquiryDetail>()
            .HasRequired<Enquiry>(ed => ed.Enquiry)
            .WithRequiredDependent(e => e.EnquiryDetail);
    }
}

Controller:

[Authorize]
public class EnquiryController : Controller
{
    private EscrowDb _db = new EscrowDb();

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Create(EnquiryViewModel core)
    {
       var enquiry = new Enquiry();
       // Some code to set properties using passed ViewModel
       ...

       var enquiryDetail = new EnquiryDetail();
       // Some code to set properties using passed ViewModel
       ...

       enquiry.EnquiryDetail = enquiryDetail;

       _db.Enquiries.Add(enquiry);
       _db.SaveChanges();
    }
}

All this code has been working fine so far, except that the identity value sporadically skips ahead by gaps of almost 1000.

Has anybody come across this kind of issue? Please share your thoughts.

Ben Tidman
Santosh
  • Look here: https://connect.microsoft.com/SQLServer/feedback/details/739013/alwayson-failover-results-in-reseed-of-identity – Sławomir Rosiek Jun 09 '13 at 19:53
  • @Rosiek, I don't see any good ready-to-use solution there that can be applied to SQL Azure Web Edition in the MS Cloud. – Santosh Jun 10 '13 at 05:27
  • I'm seeing the same issue. Any luck figuring this out? – Andrew Garrison Jun 11 '13 at 17:30
  • Experienced the same problem – auto-incremented field values suddenly jumped by around 1000. Not only that, I received a primary key violation once (the key is based on the auto-incremented field). We never had any issues with an on-premises database. – Igor Jul 05 '13 at 15:42
  • Does this answer your question? [Identity increment is jumping in SQL Server database](https://stackoverflow.com/questions/14146148/identity-increment-is-jumping-in-sql-server-database) – Michael Freidgeim Apr 19 '23 at 06:59

5 Answers


You may be out of luck here if you need to eliminate these gaps.

I hit this issue myself while developing and testing a new application. I'm inferring what's happening in SQL Azure from what I've read about SQL Server 2012; I have not been able to find any documentation about this for SQL Azure.

From what I've read, this is a feature that comes across as a bug, in my opinion. In SQL Server 2012 Microsoft added the ability to create sequences. Sequences record which values have been handed out in blocks of 1000. So let's say your sequence was progressing 1, 2, 3, 4, 5... and then your SQL Server restarts. The sequence has already recorded that the block 1–1000 is used, so it jumps you to the next block: your next values are 1001, 1002, 1003, 1004... This improves insert performance when using sequences, but can result in unusual gaps. There is a fix for sequences: when declaring your sequence, add the NO CACHE option so that it doesn't reserve blocks of 1000 at a time. See here for more documentation.
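To illustrate the NO CACHE option described above (the sequence name below is made up for illustration), a sequence declared this way persists every value it hands out instead of reserving a block, so a restart cannot burn an unused block:

```sql
-- Illustrative sketch: a sequence that does not cache a block of values.
-- NO CACHE trades some insert performance for gap-free numbering across restarts.
CREATE SEQUENCE dbo.EnquiryNumberSeq
    AS BIGINT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

-- Each call returns the next consecutive value.
SELECT NEXT VALUE FOR dbo.EnquiryNumberSeq;
```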

Where this becomes an issue is that identity columns seem to have been changed to use this same mechanism. So when your server, or in this case your SQL Azure instance, restarts, you can get large gaps (in the thousands) in your identity columns, because large blocks are cached as "used". There is a workaround for SQL Server 2012: you can specify the startup trace flag T272 to revert identity allocation to the old SQL Server 2008 R2 behaviour. The problem is that I don't know of any way to specify this in SQL Azure (it may not be possible), and I can't find documentation. See this thread for more details on SQL Server 2012.
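Trace flag 272 is a startup flag, so it cannot simply be switched on from a query window; a sketch of how it is applied on an on-premises SQL Server 2012 instance:

```sql
-- -T272 must be added to the instance's startup parameters, e.g. via
-- SQL Server Configuration Manager (instance Properties > Startup Parameters),
-- and takes effect after a service restart.
-- Once the instance is back up, the active trace flags can be checked with:
DBCC TRACESTATUS(-1);
```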

Check the documentation for IDENTITY on MSDN, specifically the section "Consecutive values after server restart or other failures". Here is what it says:

Consecutive values after server restart or other failures –SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the application should use a sequence generator with the NOCACHE option or use their own mechanism to generate key values.

So if you need consecutive values, you could try a sequence declared with NO CACHE instead of relying on your identity column. I haven't tried this myself, but it sounds like you'll have trouble getting it to work with Entity Framework.
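A sketch of what that might look like (untested; the sequence, table, and constraint names here are hypothetical): drop IDENTITY from the key column and pull its default from a NO CACHE sequence, so the value is still generated server-side:

```sql
-- Hypothetical schema: the key defaults to a NO CACHE sequence instead of IDENTITY.
CREATE SEQUENCE dbo.EnquiryIdSeq AS BIGINT START WITH 1 INCREMENT BY 1 NO CACHE;

CREATE TABLE dbo.Enquiry
(
    EnquiryId BIGINT NOT NULL
        CONSTRAINT DF_Enquiry_EnquiryId DEFAULT (NEXT VALUE FOR dbo.EnquiryIdSeq)
        CONSTRAINT PK_Enquiry PRIMARY KEY,
    UserId INT NOT NULL
    -- ... remaining columns ...
);
```

The caveat above still applies: EF 5 reads back store-generated keys via SCOPE_IDENTITY(), which does not cover sequence-backed defaults, so this mapping would need testing before relying on it.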

Sorry if this doesn't help much, but at least it's some information on what you're experiencing.

Ben Tidman

Try the reseeding-with-a-trigger approach; I believe this should solve it. Here is an example of its use, and you can see more workarounds at that link.

USE [TEST]
GO

CREATE TABLE TEST(ID INT IDENTITY(1,1), VAL VARCHAR(10))
GO

-- After each insert, reseed the identity to the current MAX(ID) so the
-- next generated value continues consecutively, closing any gap.
CREATE TRIGGER TGR_TEST_IDENTITY ON TEST
FOR INSERT
AS
DECLARE @RESEEDVAL INT
SELECT @RESEEDVAL = MAX(ID) FROM TEST
DBCC CHECKIDENT('TEST', RESEED, @RESEEDVAL)
GO

INSERT INTO TEST(VAL) VALUES ('a')

SELECT * FROM TEST

Since DBCC CHECKIDENT is not currently supported in SQL Azure, you can use the approach in this link, where I found some workarounds:

  1. Use a GUID as the key instead of relying on SQL Azure's auto-generated integer key.
  2. If you have an integer key, as in my case: let the record insert, then delete it and re-insert it with the right key after allowing explicit key values with

set identity_insert XXXTable on -- this lets you insert explicit values into the identity column

and then switching automatic identity generation back on once the insert with the right key is done:

set identity_insert XXXTable off -- this resumes automatic IDENTITY values

Note: this is not a good solution for a table receiving massive insert traffic, but it might be useful for someone looking for a temporary way out.
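A sketch of workaround 2 (the table and column names are made up; suppose a row came in with a skipped value 2001 and should have received 1001):

```sql
-- Hypothetical example of the delete-and-reinsert workaround.
BEGIN TRANSACTION;

DELETE FROM XXXTable WHERE Id = 2001;   -- remove the row with the skipped key

SET IDENTITY_INSERT XXXTable ON;        -- allow an explicit identity value
INSERT INTO XXXTable (Id, Name) VALUES (1001, 'example');
SET IDENTITY_INSERT XXXTable OFF;       -- resume automatic identity generation

COMMIT;
```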

  • this is magic for a server that gets restarted often and a table that seldom gets rows! thanks, been all over the interwebz looking for a quick & dirty – user230910 Jun 09 '15 at 18:40

It seems there is no TF 272 workaround for SQL Azure. I just noticed the issue in two tables (gaps of 999 and 1000) and thought it was a security breach before inspecting the tables and checking the inserted records. See the last item of this MS TechNet discussion for details. It's somewhat reassuring, but it looks more like a bug than a feature.

AKhooli

I had this problem too and so far have not found a fix; it seems Entity Framework has a bug or something similar. I searched the internet but found nothing.

motevalizadeh

This can happen easily. Server issues aside, gaps can be perfectly normal application behaviour. Don't attach too much meaning to identity values; if you need numbers that mean something, generate them yourself.

A very common cause I see is rolled-back transactions, which is by design. Here is a sample that reproduces it in plain SQL Server; as expected, this affects anything using the database, including MVC and Entity Framework:

DROP TABLE IF EXISTS #PKIDTest
CREATE TABLE #PKIDTest (PKID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, Msg VARCHAR(100))

BEGIN TRANSACTION 
    INSERT INTO #PKIDTest (Msg) VALUES ('The'),('quick'),('brown fox'),('jumped'),('over')
COMMIT

SELECT * FROM #PKIDTest --PKID: 1-5

BEGIN TRANSACTION 
    INSERT INTO #PKIDTest (Msg) VALUES ('the'),('alzy'),('dog')
    --PKID 6-8
ROLLBACK --"spelling error"

SELECT * FROM #PKIDTest --PKID: 1-5 (6-8 rolled back)

BEGIN TRANSACTION 
    INSERT INTO #PKIDTest (Msg) VALUES ('the'),('lazy'),('dog')
COMMIT

SELECT * FROM #PKIDTest --PKID: 1-5 , 9-11 (6-8 rolled back)

If you are seeing this behaviour and do not expect rolled-back transactions or deleted rows, check your error-handling code to make sure you are logging errors. It is good to have a record of when unexpected errors occur.

If you have a logging solution in place, try to correlate the timeframes in which the "skipped" values occur with events in your log.

Sean