
I'm a report analyst and a "newbie" to SQL Server. I have a view joining two tables with 100 columns: 50 are NVARCHAR(255), nullable with a NULL default, and the rest are INT or FLOAT.

The table can grow to 1 million rows, and performance when extracting the data from SQL Server is horrible: it takes an hour and runs out of memory. Most of my NVARCHAR(255) columns actually hold at most 10 or 50 characters in the originating table, so 255 isn't needed. Setting aside any join issues, is there any performance benefit to reducing those 50 columns to their true limits?
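(For reference, I confirmed the actual limits with a quick check like the one below; the table and column names are placeholders for my real ones.)

```sql
-- Placeholder names: replace dbo.SourceTable / SomeNvarcharCol with the real table and column.
SELECT MAX(LEN(SomeNvarcharCol)) AS MaxActualLength
FROM dbo.SourceTable;
```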

I have seen conflicting information on this in the forums: some articles indicate I'd get a page-size performance benefit from reducing the declared sizes so that the data stays "in-row".

Conversely, I've read that an NVARCHAR(255) column only stores and returns the actual data (10 or 50 characters), and therefore it cannot be the cause of the performance issue.

Please help?

Thank you all!

Rebelle

  • Welcome to SO! Unrelated, but a suggestion: that sounds like a *lot* of data to be contained in so few tables - consider looking into [database normalization](https://en.wikipedia.org/wiki/Database_normalization) if/when you have time – Conduit May 27 '16 at 13:45

1 Answer


Based on what I was able to find on Stack Overflow, I would recommend you adjust the NVARCHAR columns to a more reasonable size while still allowing for future needs.

The link below goes into a lot of detail on the topic. I would sum it up like this: some parts of SQL Server (memory grants for sorts and hashes, for example) estimate memory based on the declared maximum column size rather than the actual data length, so oversized NVARCHAR columns can inflate memory requirements.

Is there an advantage to VARCHAR(500) over VARCHAR(8000)?
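If you do decide to shrink them, the change itself is an `ALTER COLUMN` per column. The sketch below uses placeholder names, and the new length must be at least as large as the longest value already stored; test on a copy of the table first.

```sql
-- Hypothetical table/column names; NVARCHAR(50) must still fit the longest existing value.
ALTER TABLE dbo.SourceTable
    ALTER COLUMN SomeNvarcharCol NVARCHAR(50) NULL;
```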

Chad Portman