
I'm running up against a problem with SQL Server Express that I've never encountered before.

I have a table with 32 million rows in it, with this column set:

[Insp_ID] [nvarchar](8) NOT NULL,
[NodeID] [int] NULL,
[NodeVisible] [bit] NULL,
[Note] [nvarchar](max) NULL,
[Points] [int] NULL,
[QST_ID] [int] NULL,
[QST_Num] [int] NOT NULL,
[Response_Num] [int] NOT NULL,
[Value] [nvarchar](max) NULL,
[ValueID] [int] NULL,
[ValueVisible] [bit] NULL,

CONSTRAINT [QST2D_PK_i83] PRIMARY KEY CLUSTERED 
([Insp_ID] ASC, [QST_Num] ASC, [Response_Num] ASC)

The table is roughly 1900 MB, or maybe even a bit larger. It's hard to tell because I can do hardly any maintenance operations on the table without getting this error:

Msg 802: There is insufficient memory available in the buffer pool.

As I understand it, SQL Server Express gets 1 GB for the buffer pool. The error happens when I try to change the primary key or run DBCC DBREINDEX. The only way I can get the data out of the database and back in is with BCP, which is very inconvenient (though it's interesting that it works). BCP lets me restructure the table (i.e. change the PK) and then bring the data back in.
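Roughly, the round trip looks like the sketch below. The database name, instance name and file path are placeholders rather than my actual values, and the key shown is just the existing one from the definition above.

-- 1) Export the data outside the engine (run at a command prompt):
--      bcp MyDb.dbo.QST2D out C:\temp\qst2d.dat -S .\SQLEXPRESS -T -n
-- 2) Empty the table, then restructure it (e.g. swap the clustered primary key):
TRUNCATE TABLE dbo.QST2D;
ALTER TABLE dbo.QST2D DROP CONSTRAINT QST2D_PK_i83;
ALTER TABLE dbo.QST2D ADD CONSTRAINT QST2D_PK_i83
    PRIMARY KEY CLUSTERED (Insp_ID, QST_Num, Response_Num);
-- 3) Reload in batches so each batch commits separately (command prompt again):
--      bcp MyDb.dbo.QST2D in C:\temp\qst2d.dat -S .\SQLEXPRESS -T -n -b 100000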

Anyway, further experimentation: I dropped the row count down to 8.2M, and the overall table size to about 633 MB. I still get the same error about not enough memory. This is baffling, because that doesn't seem like a lot of data to me.

Then I dropped the two nvarchar(max) columns, which further reduced the table size to about 540 MB. At that point, I no longer run out of memory.

I can't tell if the buffer is complaining about row count or table size, but it feels like table size based on this anecdotal evidence.
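For what it's worth, checking sizes like these doesn't require touching the data pages; sp_spaceused reads allocation metadata, so it should report the row count and size without tripping the same buffer-pool error:

-- Reports rows, reserved, data, index_size and unused for the table
-- without scanning it, so it should work even under memory pressure.
EXEC sp_spaceused N'dbo.QST2D';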

Does anyone have a solution or insight into this? I feel like we are barking up the wrong tree using SQL Server Express - which is too bad, because up to this point it's fit our needs perfectly.

John Doh
  • A little more info: same database has another table, slightly over 1GB in size, with 103,000 rows. Was able to reindex this one with no problem. So, I don't know how the buffer pool uses memory, but now it feels like a combination of table size (total MBs) and row count. – John Doh Apr 27 '15 at 21:22
  • Remember that SQL Express is capped at 1 GB of RAM. – ErikEJ Apr 28 '15 at 04:41

2 Answers


From past experience, SQL Server Express has a cap of 1 GB of RAM for the buffer pool, and also a database size restriction (4 GB in older versions, 10 GB from 2008 R2 onward), which depends on the version you're using.
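If you're not sure which edition and version you're running against, SERVERPROPERTY will tell you; this is a generic check rather than anything specific to your setup:

-- Confirms the edition (e.g. Express) and the product version,
-- so you know which memory and database-size limits apply.
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;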

There was a previous thread on SO about the Limitations of SQL Server Express which might help for general questions.

While taking a nosy on MSDN, I found a nice little patch of text, which I believe might help you:

SQL Server needs enough memory to hold the data in memory-optimized tables and indexes. To account for row versions, you should provide an amount of memory that is two times the expected size of memory-optimized tables and indexes. But the actual amount of memory needed will depend on your workload. You should monitor your memory usage and make adjustments as needed. The size of data in memory-optimized tables must not exceed the allowed percentage of the pool.

To discover the size of a memory-optimized table, see here for how to grab it (for SQL 2014).
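In case that link moves, on 2014 it boils down to a query along these lines against the In-Memory OLTP DMV; it only returns rows if the table really is memory-optimized, so treat it as a sketch rather than something tailored to your table:

-- Memory allocated/used per memory-optimized table and its indexes (KB)
SELECT OBJECT_NAME(object_id)            AS [Table],
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats;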

Other articles which could be useful are Max rows in a SQL table? or Maximum Capacity Specifications for SQL Server.

Ilessa

I asked a similar question over on the Microsoft forum and was told that DBCC DBREINDEX is not supported on 2014. I find this odd, since it works... sometimes. But I can't discount a language barrier. And each time I have tried a similar statement:

ALTER INDEX ALL ON QST2D REBUILD;

It has worked, even though, when I run this query:

-- Buffer cache usage per database (each buffer page is 8 KB)
SELECT
    CASE
        WHEN database_id = 32767 THEN 'mssqlsystemresource'
        ELSE DB_NAME(database_id)
    END AS [Database],
    CONVERT(numeric(38, 2), (8.0 / 1024) * COUNT(*)) AS [MB in buffer cache]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY 2 DESC;

it shows quite a bit of the buffer cache being used by that database:

[screenshot: MB in buffer cache per database]

I kept refreshing the query while the rebuild happened, and I saw it stealing memory from the other databases until they were left with a pittance. I am surprised the statement gets away with using that much memory; however, I don't know enough about SQL Server's memory management to tell whether that is just the buffer pool or other pools as well. I am just happy it works, and it seems to be reliable so far.
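If anyone wants to watch the grant side of this while a rebuild runs, a query along these lines should show it (a sketch, not exactly what I ran):

-- Memory requested, granted and currently used by each active query (KB)
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb
FROM sys.dm_exec_query_memory_grants;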

The point is, this worked on a 2+ GB table with 31M+ rows and a three-field composite key. That's more like what I expected. I still have no clear understanding of why DBCC DBREINDEX fails but ALTER INDEX does not, when I believe they do similar things. It may come down to how the work is done, i.e. one atomic operation rather than a separate drop and create; perhaps the combined operation uses more resources in one shot. I may try to trace the database while this is happening just for my own edification, but since I seem to have resolved my immediate problem, I think I should mark this answered.
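One thing I have not tried, but which might ease the buffer-pool pressure for anyone else who hits this, is rebuilding the index by name with the sort pushed into tempdb; this is untested on my part, just standard ALTER INDEX syntax:

-- Rebuild just the clustered PK, doing the sort work in tempdb
-- instead of the user database's buffer space.
ALTER INDEX [QST2D_PK_i83] ON dbo.QST2D
    REBUILD WITH (SORT_IN_TEMPDB = ON);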

John Doh