
I have the following table in SQL Server 2012, Web Edition:

CREATE TABLE [dbo].[MyTable] 
(
     [Id]               INT            IDENTITY (1, 1) NOT NULL,
     [Created]          DATETIME       DEFAULT (getdate()) NOT NULL,
     [RefId]            INT            NULL,
     [Name]             NVARCHAR (128) NULL,
     [Email]            NVARCHAR (128) NULL,
     [ImageUrl]         NVARCHAR (256) NULL,
     [Url]              VARCHAR (256)  NULL,
     [Age]              TINYINT        NULL,
     [Country]          VARCHAR (6)    NULL,
     [Location]         NVARCHAR (192) NULL,
     [People]           INT            NULL,
     [Categories]       NVARCHAR (128) NULL,
     [Block]            BIT            DEFAULT ((0)) NOT NULL,
     [GeneratedRevenue] INT            NULL,
     [IsFemale]         BIT            DEFAULT ((1)) NULL,
     [HasInstalled]     BIT            NULL,
     [Keywords]         VARCHAR (128)  NULL,
     [Brands]           NVARCHAR (512) NULL,
     [Source]           TINYINT        NULL,
     [Alias]            VARCHAR (65)   NULL,

     PRIMARY KEY CLUSTERED ([Id] ASC)
);

As far as I gather, the total size should be 3175 bytes, but I regularly get the following error when updating the table:

Cannot create a row of size 8068 which is greater than the allowable maximum row size of 8060.

How does the above result in a row size of 8068?

Edit: I should mention that this table has been altered, uses Change Tracking and has four indexes.

Also, if I copy the contents to a new table with the same definition, no errors occur for a while, but they do come back.
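A rough way to compare the declared column sizes with what actually ends up on disk (a sketch only: summing max_length from sys.columns ignores the row header, null bitmap and variable-length offset array, and the DETAILED scan reads every page of the table):

-- Sum of the declared maximum in-row column sizes, in bytes
-- (NVARCHAR columns report max_length as 2 x the character count).
SELECT SUM(CAST(c.max_length AS INT)) AS declared_max_bytes
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.MyTable');

-- Actual record sizes per index, as stored on disk.
SELECT ips.index_id,
       ips.max_record_size_in_bytes,
       ips.avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED') AS ips;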

  • Perhaps you have a trigger on the insert and the problem is happening on another table. – Gordon Linoff Aug 15 '17 at 13:51
  • Just guessing: is this an entirely new table, or has it been altered? It looks like there was a different type in the past (maybe NVARCHAR(MAX)). – Lukasz Szozda Aug 15 '17 at 13:51
  • Or an indexed view? – Damien_The_Unbeliever Aug 15 '17 at 13:51
  • I edited my question above to answer your questions. @lad2025, why does it look like there was a different size in the past? – Scarabas Aug 15 '17 at 13:56
  • Where'd you come up with the 3175 number? I counted and got 3641. Also, do you have a view which joins this to another table? Perhaps it is the combination of data which is over the top. – Eli Aug 15 '17 at 14:02
  • See http://rusanu.com/2011/10/20/sql-server-table-columns-under-the-hood/ – Remus Rusanu Aug 15 '17 at 14:04
  • @Eli I may have gotten something wrong when calculating; my point was that I couldn't get near the 8060 boundary. I get this error when doing a simple update on a single field, no joins involved. – Scarabas Aug 15 '17 at 14:06
  • @Scarabas understood. – Eli Aug 15 '17 at 14:10
  • Check out the following link: https://technet.microsoft.com/en-us/library/ms345371%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396 – Eli Aug 15 '17 at 14:11
  • What do you get if you run the following query? `SELECT sc.name, sipc.leaf_offset, sipc.max_inrow_length FROM sys.partitions sp JOIN sys.system_internals_partition_columns sipc ON sp.partition_id = sipc.partition_id JOIN sys.columns sc ON sc.column_id = sipc.partition_column_id AND sc.OBJECT_ID = sp.OBJECT_ID WHERE sp.OBJECT_ID = OBJECT_ID('MyTable')` – sgmoore Aug 15 '17 at 14:17
  • I didn't think that SQL Server 2012 *had* this limit? Checking the [documentation](https://technet.microsoft.com/en-us/library/ms186981(v=sql.105).aspx), I think it's actually okay for a table with VARCHAR and NVARCHAR to exceed this limit. Do you have a clustered index that includes any of your character columns? – Matt Gibson Aug 15 '17 at 14:19
  • @sgmoore I get roughly the same columns as for table creation, a few of them appear more than once. The total max_inrow_length is 3965. – Scarabas Aug 15 '17 at 14:45
  • @MattGibson I do have an index on Name and Email, but even with that on top of the numbers above, I can't get to the 8060 boundary. – Scarabas Aug 15 '17 at 14:48

1 Answer


You say you use Change Tracking. Are you - by any chance - ignoring the versioning part of Change Tracking, and resetting the entries by doing the following?

ALTER TABLE dbo.MyTable DISABLE CHANGE_TRACKING;
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING;

If so, you may have a suspect. Change Tracking adds a hidden 8-byte column behind the scenes every time you re-enable it, dropping the previous one first if it already exists. Since dropping a column is just a metadata operation, you may have a large number of dropped 8-byte columns lurking behind the scenes, depending on how often you re-enable Change Tracking.

To check this, look at the system_internals_partition_columns view and see if you have a large number of is_dropped columns. There could be other reasons for having many of those, but this way of using Change Tracking is one of them.
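A query along these lines should list them (a sketch: system_internals_partition_columns is an internal, unsupported view, and dropped columns have no matching row in sys.columns, hence the LEFT JOIN):

SELECT p.partition_id,
       sipc.partition_column_id,
       sc.name              AS column_name,   -- NULL for dropped columns
       sipc.is_dropped,
       sipc.max_inrow_length
FROM sys.partitions AS p
JOIN sys.system_internals_partition_columns AS sipc
    ON sipc.partition_id = p.partition_id
LEFT JOIN sys.columns AS sc
    ON sc.object_id = p.object_id
   AND sc.column_id = sipc.partition_column_id
WHERE p.object_id = OBJECT_ID('dbo.MyTable')
  AND p.index_id IN (0, 1)          -- heap / clustered index only
  AND sipc.is_dropped = 1;

If the number of rows returned grows every time Change Tracking is re-enabled, that confirms the scenario above.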

I see Remus Rusanu is linking to a good article in a comment (rusanu.com/2011/10/20/sql-server-table-columns-under-the-hood): the queries he lists should be what you need to see if the above is the case.

Edit: In case you need to get rid of the dropped columns physically, you can rebuild the clustered index for the table(s) that have many of them. In other words, rebuilding the clustered index on MyTable should relieve you of the symptom.
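For example, a minimal sketch (the PRIMARY KEY in the definition above has no explicit name, so rebuilding all indexes on the table is the simplest form; rebuilding just the clustered index by name works as well):

-- Rebuilds the clustered index (and the nonclustered ones along with it),
-- physically removing the space still occupied by dropped columns.
ALTER INDEX ALL ON dbo.MyTable REBUILD;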

  • I do use Change Tracking this way, yes. Initially to avoid storing unnecessary versions, since the volume is huge. I have ~64000 is_dropped entries in the queries you're linking to, and they grow every time I enable CT. I guess I have to use a different approach. Thanks man!!! – Scarabas Aug 15 '17 at 14:52