
I've always been bothered by the need for maximum lengths on SQL string columns. Some data has no true maximum length. For example, say you have a field to store someone's first name, and you make it NVARCHAR(50). It's always possible (although highly unlikely) that someone has a name longer than 50 characters.

Would it be feasible to change the field's max length on the fly? What I mean by this is: when you do an INSERT/UPDATE, you check whether the person's name is longer than 50 characters, and ALTER the table if need be before you do the INSERT/UPDATE. (Or perhaps catch an exception and perform the ALTER then.)
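A minimal sketch of that check-then-ALTER pattern in T-SQL (assuming SQL Server; the table and column names here are hypothetical, made up for illustration) might look like:

```sql
-- Hypothetical sketch: widen the column on the fly before inserting.
-- dbo.People / FirstName are invented names, not from the question.
DECLARE @name NVARCHAR(4000) = N'An unusually long first name';

IF LEN(@name) > 50
BEGIN
    -- Widen the column so the incoming value fits.
    ALTER TABLE dbo.People ALTER COLUMN FirstName NVARCHAR(100);
END;

INSERT INTO dbo.People (FirstName) VALUES (@name);
```

Note this sketch ignores concurrency: an ALTER TABLE takes a schema modification lock, so doing this inside a busy OLTP workload would block other sessions.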

Would the ALTER be a slow operation if the table had a lot of data in it?

Let's say you alter the column to be NVARCHAR(100). Would a SELECT from this table be slower than if you'd made it NVARCHAR(100) from the beginning?

Christoph
user2410449
    Altering your database schema on the fly is a terrible idea. Make the field as big as it reasonably should be and have your code responsible for validating inputs. – Daniel Mann Aug 26 '13 at 21:28
  • Choose good defaults. This question has some thoughtful answers: http://stackoverflow.com/questions/919871/how-big-do-you-make-your-nvarchar – dcaswell Aug 26 '13 at 21:29
  • Which DBMS are you using? Oracle? Postgres? Another option is to define the columns as long as possible and then use a check constraint to ensure the business rules are kept. Changing the check constraint does not impose any physical change to the data. –  Aug 26 '13 at 21:36

1 Answer


From the use of nvarchar(), I am guessing that you are using SQL Server.

In SQL Server, I usually just make such names varchar(255) (or nvarchar(255)). This is actually a weird anachronism of affinity to powers of 2 (255 = largest 8-bit unsigned value). But it works well in practice.

You have at least two considerations in SQL Server. First, the maximum "not-maximum" length of a string is 8000 data bytes, which is varchar(8000) or nvarchar(4000). These are reasonable maximum lengths as well.

The second consideration is the length of a key in an index. The maximum key length is 900 bytes (450 characters for nvarchar). If you have an index on the field, then you don't want the field longer than this. If you have composite indexes, then 255 seems like a reasonable length.

The key point, however, is that changing the length of a field could have effects you are not thinking of, such as on the index. You don't want to change the structure of a table lightly. Instead, just make the field oversized to begin with, and forget about the problem entirely.
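As an illustration of the "oversize it up front" approach recommended here (the table and index names below are hypothetical):

```sql
-- Hypothetical table sized generously from the start, per the advice above.
CREATE TABLE dbo.People (
    PersonId  INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(255) NOT NULL  -- up to 510 bytes of data,
                                      -- safely under the 900-byte index key limit
);

-- Indexing the column is fine at this size; no later ALTER needed.
CREATE INDEX IX_People_FirstName ON dbo.People (FirstName);
```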

Gordon Linoff
  • Note that for some operations, like hashing and sorting, SQL Server estimates the average data size as columnlength/2, which is horrible. That goes badly wrong for nvarchar(4000). – usr Aug 26 '13 at 22:54