
Is there a clean way to determine the row size of a table before adding a new column to it and not go over the 8060 byte limit?

For example, if the table row length is currently 8055 bytes and I want to add a datetime column (8 bytes), this will go over the limit: the row would grow to 8063 bytes (excluding the null bitmap).

However, if I add an integer (4 bytes), the row grows to 8059 bytes, which fits within the limit (again excluding the null bitmap).

I am currently able to get the size of the table. However, I find it difficult to create a SQL function that returns the size of a data type (int, datetime, etc.) and whether the data type needs extra space in the null bitmap.

Is there a function I can use/develop that accepts two parameters, a table name and a data type, e.g. fnIsEnoughStorage('table1', 'int'), and returns a boolean (yes/no) from a scalar function?

If it returns true, I will proceed with the ALTER TABLE command.
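A minimal sketch of such a scalar function, under simplifying assumptions: it sums the declared maximum lengths from `sys.columns` (so varchar(max)-style columns, reported as -1, would skew the result) and ignores the null bitmap and row overhead. The function name and logic follow the question; they are not a tested implementation:

```sql
CREATE FUNCTION dbo.fnIsEnoughStorage
(
    @TableName SYSNAME,
    @DataType  SYSNAME   -- e.g. 'int', 'datetime'
)
RETURNS BIT
AS
BEGIN
    DECLARE @CurrentSize INT, @NewColumnSize INT;

    -- Sum of the declared maximum lengths of the existing columns
    SELECT @CurrentSize = SUM(c.max_length)
    FROM sys.columns c
    WHERE c.object_id = OBJECT_ID(@TableName);

    -- Storage size of the proposed fixed-length data type
    SELECT @NewColumnSize = t.max_length
    FROM sys.types t
    WHERE t.name = @DataType;

    RETURN CASE WHEN @CurrentSize + @NewColumnSize <= 8060
                THEN 1 ELSE 0 END;
END
```

Usage would then be `SELECT dbo.fnIsEnoughStorage('table1', 'int');` before issuing the ALTER TABLE.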

OMG Ponies
Resurface

2 Answers


This query will determine which tables are potentially dangerous, i.e. have a possible maximum row size that would exceed the 8060 bytes available:

;WITH TableRowSizes AS
(
    SELECT 
        t.name AS TableName,
        COUNT(*) AS NumberOfColumns,
        SUM(c.max_length) AS MaxRowLength
    FROM   
        sys.columns c
    INNER JOIN 
        sys.tables t ON c.object_id = t.object_id
    WHERE
        -- system_type_id also catches alias types based on these
        c.system_type_id NOT IN (98, 165, 167, 231)  -- sql_variant, varbinary, varchar, nvarchar
    GROUP BY 
        t.name
)
SELECT *
FROM TableRowSizes
WHERE MaxRowLength > 8060
ORDER BY MaxRowLength DESC

This doesn't mean your rows are actually using more than 8060 bytes - it just sums up the possible maximum size for each column.

If you want to determine the actual size currently used, you can probably do something similar using the DATALENGTH(colname) function (instead of the theoretical maximum from sys.columns).
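A sketch of that approach for a single table (the table and column names here are hypothetical; you would list each column explicitly, since DATALENGTH takes one expression at a time and returns the bytes actually stored):

```sql
-- Largest actual row size in dbo.table1 (hypothetical schema);
-- ISNULL guards against NULL columns collapsing the whole sum to NULL.
SELECT TOP 1
       ISNULL(DATALENGTH(Col1), 0)
     + ISNULL(DATALENGTH(Col2), 0)
     + ISNULL(DATALENGTH(Col3), 0) AS ActualRowBytes
FROM dbo.table1
ORDER BY ActualRowBytes DESC;
```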

Update: added a WHERE clause to my CTE SELECT based on gbn's response - those types should not be used in determining if the row potentially breaks the 8060 bytes size limit.

marc_s
  • Need to add 2 per variable length + null bitmap + maybe rowversioning pointer. Otherwise this will be misleading. – gbn Sep 02 '11 at 05:01
  • @gbn: yeah, I know - but that gets really messy quite quickly - it's an approximation for now, but usually close enough. – marc_s Sep 02 '11 at 05:03
  • Thanks, a best-effort approach is probably the most practical way. I believe the most elegant approach would be to test how much space I have left in the table and, if it is less than say 32 bytes, prevent the column from being added. – Resurface Sep 02 '11 at 06:44

Rows can break the 8060 byte limit because varchar/nvarchar columns can overflow.

You'll get a warning but it's allowed.
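For instance (hypothetical table name), a table whose variable-length columns together declare more than 8060 bytes is still created; depending on the SQL Server version, you get a warning that oversized rows will fail or be pushed to row-overflow storage, but the DDL succeeds:

```sql
-- Two varchar(5000) columns declare a possible 10,000 bytes per row;
-- the table is created anyway, with at most a warning.
CREATE TABLE dbo.OverflowDemo
(
    Id   INT IDENTITY PRIMARY KEY,
    ColA VARCHAR(5000),
    ColB VARCHAR(5000)
);
```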

If you are breaking the 8060 byte limit with fixed-length columns then you have to partition vertically (e.g. 1:1 tables).
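A sketch of that vertical partitioning (table and column names are hypothetical): the new fixed-length column moves into a companion table that shares the primary key, so neither table's row exceeds the limit:

```sql
-- 1:1 companion table keyed on the same Id as dbo.Table1;
-- the new fixed-length column lives here instead of widening Table1.
CREATE TABLE dbo.Table1_Extra
(
    Id        INT NOT NULL PRIMARY KEY
              REFERENCES dbo.Table1 (Id),
    AddedDate DATETIME NULL
);
```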

Note that your new int column doesn't go "at the end": the on-disk structure is well defined and data will be moved to accommodate a new fixed length field: One Two

gbn
  • I need to adapt my script in that case..... is there a way to find out which columns of a table are "overflowable"? I can't seem to find any indication of that property in `sys.columns` or `sys.types` (I looked for `is_variable` or something like that....) – marc_s Sep 02 '11 at 04:57
  • @marc_s: Only way I know is from the 1st link I gave: varchar, nvarchar, varbinary, sql_variant – gbn Sep 02 '11 at 04:59
  • Thanks ! I've updated my script to reflect this additional information. – marc_s Sep 02 '11 at 05:02
  • Hi gbn, I am aware that variable-length fields do not count towards the 8060 limit; however, adding a fixed-length column does, which is what I am trying to check for so I can prevent the SQL error. – Resurface Sep 02 '11 at 06:31
  • In hindsight, creating a second 1:1 table does achieve the creation of the column where required; however, practically I don't believe I will ever reach such a limitation in the first place. I believe the most elegant approach would be to test how much space I have left in the table and, if it is less than say 32 bytes, prevent the column from being added. – Resurface Sep 02 '11 at 06:42