Is there a way to determine what the maximum size of a record would be in SQL Server, other than doing it by hand? For example:

CREATE TABLE test (
    id INT PRIMARY KEY IDENTITY(1, 1),
    name VARCHAR(256),
    test_date DATETIME
)

So, if I'm not mistaken, calculating that by hand gives a maximum record size of 272 bytes. However, I have a table with a lot more columns than that, and I need to do this for more than one table, so I wanted to know if I could do this with a simple query.
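
Roughly, the by-hand arithmetic for the table above works out like this (declared column data sizes, plus a few bytes of per-row overhead such as the row header and null bitmap):

id        INT           4 bytes
name      VARCHAR(256)  up to 256 bytes
test_date DATETIME      8 bytes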

I can't find the necessary information in INFORMATION_SCHEMA.TABLES or even INFORMATION_SCHEMA.COLUMNS, where I figured I could just do a simple SUM. Further, sysobjects and syscolumns don't seem to have it either; syscolumns does have a length field, but that's not the actual storage size.
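
For instance, something along these lines doesn't give me what I need, because CHARACTER_MAXIMUM_LENGTH is NULL for non-character types such as INT and DATETIME (and -1 for the MAX types), so the SUM only counts the character columns:

SELECT  TABLE_SCHEMA,
        TABLE_NAME,
        SUM(CHARACTER_MAXIMUM_LENGTH) AS RowSize
FROM    INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_SCHEMA, TABLE_NAME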

Thanks all!

Mike Perrenoud

1 Answer

Try this:

SELECT  SCHEMA_NAME(T.schema_id) AS SchemaName,
        T.name AS TableName,
        SUM(C.max_length) AS RowSize
FROM    sys.tables T
        INNER JOIN sys.columns C
            ON T.object_id = C.object_id
        INNER JOIN sys.types S
            ON C.system_type_id = S.system_type_id
GROUP BY SCHEMA_NAME(T.schema_id), T.name
ORDER BY SCHEMA_NAME(T.schema_id), T.name
George Mastros
  • Works perfectly friend! I'll be accepting the answer in a couple minutes, after the waiting period. – Mike Perrenoud Jan 28 '13 at 13:22
  • There is a problem with nvarchar columns. The original code did not account for 2 bytes of storage per character. – George Mastros Jan 28 '13 at 13:26
  • There is another issue as well. If you have a table with columns of type text, ntext, varchar(max), nvarchar(max), image or varbinary(max), then the data is not necessarily stored in the row. Instead, a 16-byte pointer is stored in the row that points to a location outside of the normal table data. – George Mastros Jan 28 '13 at 13:33
  • I don't actually have that scenario in the database I'm working in, so that should be okay. – Mike Perrenoud Jan 28 '13 at 13:35
  • Another issue: When using this I was getting unreasonably large results. Turns out that in SYS.TYPES, the [system_type_id] column is not unique. In my database there are duplicate values. As a result, there are duplicate entries in the result set, and these values are counted twice when summed. The [user_type_id] column, on the other hand, is unique. When I switched out [system_type_id] for [user_type_id] for the inner join between SYS.TYPES and SYS.COLUMNS, my results were much more reasonable. – Daniel Schealler Jul 13 '15 at 03:29
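
Pulling the fixes from the comments together, a hedged revision of the query might join sys.types on user_type_id (which is unique, avoiding the double counting) and treat the (max) types as their 16-byte in-row pointer, on the assumption that the large values themselves are stored off-row:

SELECT  SCHEMA_NAME(T.schema_id) AS SchemaName,
        T.name AS TableName,
        SUM(CASE WHEN C.max_length = -1
                 THEN 16                -- (max) types: count only the 16-byte in-row pointer
                 ELSE C.max_length
            END) AS RowSize
FROM    sys.tables T
        INNER JOIN sys.columns C
            ON T.object_id = C.object_id
        INNER JOIN sys.types S
            ON C.user_type_id = S.user_type_id   -- user_type_id is unique, so no double counting
GROUP BY SCHEMA_NAME(T.schema_id), T.name
ORDER BY SchemaName, TableName

For the test table in the question this should report 268 (4 + 256 + 8); like the original, it is a worst-case figure for the declared column data, not the actual on-disk size of any particular row.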