I'm running up against a problem with SQL Server Express that I've never encountered before.
I have a table with 32 million rows, with this column set:
[Insp_ID] [nvarchar](8) NOT NULL,
[NodeID] [int] NULL,
[NodeVisible] [bit] NULL,
[Note] [nvarchar](max) NULL,
[Points] [int] NULL,
[QST_ID] [int] NULL,
[QST_Num] [int] NOT NULL,
[Response_Num] [int] NOT NULL,
[Value] [nvarchar](max) NULL,
[ValueID] [int] NULL,
[ValueVisible] [bit] NULL,
CONSTRAINT [QST2D_PK_i83] PRIMARY KEY CLUSTERED
([Insp_ID] ASC, [QST_Num] ASC, [Response_Num] ASC)
The table is roughly 1900 MB, or maybe even a bit larger. It's hard to tell because I can do hardly any maintenance operations on the table without getting this error:
Msg 802: There is insufficient memory available in the buffer pool.
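(For reference, the size figures I'm quoting are estimates; checking size itself doesn't require a maintenance operation. `sp_spaceused` is a standard system procedure; `dbo.Responses` is a hypothetical stand-in, since I haven't shown the real table name:)

```sql
-- Reports row count plus reserved, data, and index sizes for the table.
-- dbo.Responses is a hypothetical name for the table described above.
EXEC sp_spaceused N'dbo.Responses';
```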
As I understand it, SQL Server Express is limited to 1 GB of buffer pool memory. The error happens when I try to change the primary key or run DBCC DBREINDEX. The only way I can get the data out of the database and back in is with BCP, which is very inconvenient (though it's interesting that it works). BCP lets me restructure the table (i.e. change the PK) and then bring the data back in.
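Roughly, the BCP round trip looks like the following. This is a sketch rather than exactly what I ran; the database name `InspectionDB` and table name `dbo.Responses` are stand-ins, and the `bcp` lines are run from a command prompt, not in SSMS:

```sql
-- 1) Export in native format (run from a command prompt; -n = native, -T = trusted connection):
--      bcp InspectionDB.dbo.Responses out responses.dat -n -S .\SQLEXPRESS -T
-- 2) Empty the table, then restructure the key while it is empty:
TRUNCATE TABLE dbo.Responses;
ALTER TABLE dbo.Responses DROP CONSTRAINT QST2D_PK_i83;
ALTER TABLE dbo.Responses ADD CONSTRAINT QST2D_PK_i83
    PRIMARY KEY CLUSTERED ([Insp_ID] ASC, [QST_Num] ASC, [Response_Num] ASC);
-- 3) Reload the data:
--      bcp InspectionDB.dbo.Responses in responses.dat -n -S .\SQLEXPRESS -T
```

Changing the key on the empty table is what makes this work: the restructure never has to touch 32 million existing rows.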
Anyway, further experimentation: I dropped the row count down to 8.2M, and the overall table size to about 633 MB. I still got the same out-of-memory errors. This is baffling, because that doesn't seem like a lot of data to me.
Next, I dropped the two nvarchar(max) columns, which further reduced the table size to about 540 MB. At that point, I no longer run out of memory.
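Dropping those columns was just a plain ALTER; `Note` and `Value` are the two nvarchar(max) columns from the column set above (`dbo.Responses` is again a stand-in for the real table name):

```sql
-- Drop the two nvarchar(max) columns from the definition above.
ALTER TABLE dbo.Responses DROP COLUMN [Note], [Value];
```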
I can't tell whether the buffer pool is complaining about row count or table size, but based on this anecdotal evidence it feels like table size.
Does anyone have a solution or insight into this? I feel like we are barking up the wrong tree with SQL Server Express, which is too bad, because up to this point it has fit our needs perfectly.