I do a lot of work with table variables before finally presenting result sets to the user. For example, I might pull in a whole load of columns from many different tables like this:
DECLARE @tmp TABLE
(
ID int,
username nvarchar(50), -- data taken from tbl_Users
site nvarchar(50), -- data taken from tbl_Sites
region nvarchar(100), -- data taken from tbl_Regions
currency nvarchar(5) -- data taken from tbl_Currencies
)
I spend a lot of time looking through Object Explorer to ensure that the data lengths of the columns are correct (i.e. match the original tables). Sometimes, if I change a table's schema but don't update all the procedures, I get caught out with a truncation error.
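Rather than clicking through Object Explorer each time, I sometimes query the catalog views to look up the declared lengths; a minimal sketch using my tbl_Users example from above (assuming the table lives in the dbo schema):

SELECT c.name AS column_name,
       t.name AS type_name,
       c.max_length -- reported in bytes, so nvarchar(50) shows 100
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.tbl_Users')
  AND c.name = N'username';

That still doesn't stop a later schema change from breaking a procedure, which is what prompts the lazy approach below.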
Is there any issue with taking a lazy approach and just doing this:
DECLARE @tmp TABLE
(
ID int,
username nvarchar(max),
site nvarchar(max),
region nvarchar(max),
currency nvarchar(max)
)
Does nvarchar(max) actually use up any more memory, or is memory allocated based on the actual data size? Are there any other gotchas?
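For what it's worth, the bytes actually stored appear to depend on the data rather than the declared type; a minimal sketch:

DECLARE @a nvarchar(50)  = N'hello';
DECLARE @b nvarchar(max) = N'hello';
SELECT DATALENGTH(@a) AS bytes_n,   -- 10 bytes (5 characters x 2)
       DATALENGTH(@b) AS bytes_max; -- also 10 bytes

But I don't know whether the engine treats the two differently elsewhere (e.g. when reserving memory for sorts), which is part of what I'm asking.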
Please note that I am aware of third-party tools for jumping to the definition, but that's not what I'm asking.
UPDATE
The duplicate question has value, but IMHO it is not identical: it revolves around the design of actual tables, not table variables. However, there is some merit in the answers, namely:
- nvarchar(max) and nvarchar(n) are no different in resource usage until the data exceeds the 8,000-byte in-row limit (note that the largest fixed declaration is nvarchar(4000), since each character takes two bytes)
- Business logic layers rely on structure and meaningful data, so specifying a column size that complements the original provides value (the sketch below illustrates this)
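To illustrate that second point, the declared size acts as a safety net that nvarchar(max) gives up; a minimal sketch reusing my username example (assuming the source column is nvarchar(50)):

DECLARE @strict TABLE (username nvarchar(50));
DECLARE @lazy   TABLE (username nvarchar(max));

INSERT INTO @lazy   VALUES (REPLICATE(N'x', 60)); -- succeeds, so oversized data slips through unnoticed
INSERT INTO @strict VALUES (REPLICATE(N'x', 60)); -- fails with a truncation error, surfacing the problem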
In that sense, it would seem that it is fine to use nvarchar(max) in table variables instead of nvarchar(n), but it carries a reliability and performance risk in some environments. If you think this should be deleted then fair enough (but please stop arguing; I appreciate all input!)