
In one of our projects, our manager insists on sizing varchar columns in powers of two, like varchar(8), varchar(16), varchar(32), varchar(64), etc.

Is this in some way better than using varchar(10), varchar(11), etc., if we can estimate the size, even though we may not know the actual size?

Ron
  • Take a look at: http://stackoverflow.com/questions/8295131/best-practise-for-sql-varchar-column-length and http://stackoverflow.com/questions/11499756/sql-server-varchar50-and-varchar128-performance-difference, including their duplicates. – bummi Sep 19 '13 at 21:52

1 Answer


Suppose you want to store, say, a product code in a table. At present all the product codes you've got are 5 characters or less, but this may change in the future. You can't see it ever being more than 10, but who knows?

You can create it as varchar(10), but if you're wrong, then at some future point you may have to change it. If you create it as varchar(20), the problem doesn't exist.
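If you do guess too low, the fix is a schema change along these lines (the table and column names here are hypothetical, not from the question):

```sql
-- Widen a hypothetical ProductCode column from varchar(10) to varchar(20).
-- Widening a varchar is normally a metadata-only change in SQL Server, but it
-- still means touching the schema and possibly dependent code.
-- Restate the column's original nullability, or ALTER COLUMN defaults to NULL.
ALTER TABLE Products ALTER COLUMN ProductCode varchar(20) NOT NULL;
```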

So are you wasting space by declaring it varchar(20) instead of varchar(10)? No, because SQL Server only stores as many characters as the value actually contains (which at this time is 5), plus a small per-value length overhead that is the same either way: all the datatype varchar(n) really means is "variable-length text up to a maximum of n characters".
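A minimal way to see this for yourself (the temp table and values are just an illustration):

```sql
-- Two columns with different declared sizes, holding the same 5-character value.
CREATE TABLE #Demo (
    CodeNarrow varchar(10),
    CodeWide   varchar(20)
);

INSERT INTO #Demo (CodeNarrow, CodeWide) VALUES ('AB123', 'AB123');

-- DATALENGTH reports the bytes actually stored for each value: 5 in both
-- cases, regardless of the declared maximum.
SELECT DATALENGTH(CodeNarrow) AS NarrowBytes,
       DATALENGTH(CodeWide)   AS WideBytes
FROM #Demo;

DROP TABLE #Demo;
```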

So provided the defined length is at least as much as you foreseeably need, I don't think it matters overmuch exactly what the defined size is.

John Bingham
  • Thanks John, I guess I was looking for a more detailed performance comparison and references, which the comment above provided. – Ron Sep 20 '13 at 14:40