2

In all of my database tables on my MS SQL Server, the id column jumps by 1000 or more each time I restart the server. Since this is a local application and the server is being hosted from one of my laptops, I shut it down every day which in turn shuts down the server.

Eventually, this program is going to be used in a large-scale setting, and I am concerned by this behavior -- doing the math, I'd still have over 4 million unique ids even if it jumped by 1000 every time a new one was entered, but the plan is for this database to grow for years and years. What will happen if I run out of unique ids? BigInt also experiences this problem, but it jumps by 10,000 instead of 1,000. I've looked all over the internet and it seems that many people have fixed this issue by setting -T272 as a startup parameter, but it has not worked for me. Can anyone tell me a workaround?

itman1234
  • 57
  • 5

2 Answers

5

Don't worry about it.

You won't be running the production server on your laptop and shutting it down every day, I assume?

For virtually any production server you aren't going to be restarting the SQL Server service frequently enough to make any significant dent in the range of numbers allowed by int.

Even if you "lose" the whole 1,000 every time you restart, this is still only 0.0000465661% of the range from 1 to 2,147,483,647. You may need bigint if your application is busy enough, but it is exceedingly unlikely that this issue will be the sole reason -- in that case you would have ended up needing it anyway.

As an example, suppose that you restart the service every day for four years and so lose 4 * 365 * 1,000 = 1,460,000 possible values.

If in the same time period you overflowed int and needed BIGINT, that would mean you were doing (2,147,483,647 - 1,460,000) / (4 * 365) = 1,469,879 inserts a day. So the issue would have cost you less than a day's worth of inserts before you would have needed BIGINT anyway.
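If the gaps still bother you during development, you can at least inspect the current identity value (and, if you really must, pull the seed back) with `DBCC CHECKIDENT`. A sketch -- the table name `dbo.People` and the reseed value are just examples:

```sql
-- Report the current identity value without changing anything
DBCC CHECKIDENT ('dbo.People', NORESEED);

-- If a restart left a large gap and no rows exist beyond it,
-- the seed can be moved back manually (example value only):
DBCC CHECKIDENT ('dbo.People', RESEED, 1500);
```

Be careful with RESEED: setting the seed below an id that already exists will cause duplicate key errors on the next insert if the column is a primary key.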

Martin Smith
  • 438,706
  • 87
  • 741
  • 845
  • Here's how I'm thinking about it: I have a table for people, and then a table connecting people to a measurement, and each person is likely to have around 100 records in this table. If it jumps by 1000, I'm losing space for 10 people's measurements, and if that keeps happening, it'll actually add up to quite a bit. I know the server is unlikely to be restarted once it actually gets off the ground, but I would rather not have this happen at all, just in case -- is there a workaround besides the -T272 flag that you know of? – itman1234 Sep 10 '16 at 22:19
  • 1
    Well don't think about it like that then. Think about it logically and with much less woolly terms than "adding up to quite a bit". Even if you were to restart the server every day for 3 years it would only add up to `3 * 365 * 1000 = 1095000` range consumed by the issue. If in the same time period you overflowed int and needed BIGINT, that would mean you were doing `(2147483647 - 1095000) / (3 * 365) = 1960172` inserts a day. So at most the issue costs you less than a day's worth of inserts before you would have needed BIGINT anyway. – Martin Smith Sep 10 '16 at 22:29
  • Thank you, I see that this isn't really much of an issue now. – itman1234 Sep 10 '16 at 23:49
0
`ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;`

Just run this command and it will stop incrementing by 1000. Note that `IDENTITY_CACHE` is only available in SQL Server 2017 and later (and Azure SQL Database), and turning the cache off can slow down inserts under heavy load.
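On versions before SQL Server 2017, where `IDENTITY_CACHE` does not exist, another commonly cited approach is to generate ids from a sequence created with `NO CACHE` instead of an identity column. A sketch, with the table and sequence names as placeholders:

```sql
-- A sequence with caching disabled persists each value as it is
-- handed out, so values are not skipped after an unexpected shutdown.
CREATE SEQUENCE dbo.PersonIdSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

-- Use the sequence as the column default in place of IDENTITY
CREATE TABLE dbo.People (
    id INT NOT NULL
        CONSTRAINT DF_People_id DEFAULT (NEXT VALUE FOR dbo.PersonIdSeq)
        PRIMARY KEY,
    name NVARCHAR(100) NOT NULL
);
```

The trade-off is the same one the cache exists to avoid: with `NO CACHE`, every generated value costs extra log writes, which matters on busy tables.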

Birhan Nega
  • 663
  • 12
  • 24