
I am trying to create and load a temporary MySQL table into memory with the following statement, but I am running into a "Table is full" error:

CREATE TEMPORARY TABLE IF NOT EXISTS tmpHistory ENGINE=MEMORY SELECT * FROM history ORDER BY date ASC;

My original history InnoDB table has about 3 million rows and is about 300 MB. I've increased both of the following server variables from their 16 MB defaults:

max_heap_table_size = 536870912

tmp_table_size = 536870912
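For reference, both variables can also be raised at runtime without a restart (a sketch, assuming you have the privileges to change session variables; 536870912 bytes = 512 MB as above):

-- Raise both limits for the current session; the effective cap on an
-- explicit MEMORY table is max_heap_table_size.
SET SESSION max_heap_table_size = 536870912;
SET SESSION tmp_table_size = 536870912;

-- Verify the values actually in effect:
SHOW VARIABLES LIKE 'max_heap_table_size';
SHOW VARIABLES LIKE 'tmp_table_size';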

I'm running MySQL on an AWS r3.xlarge, a 4-core box with 30.5 GB of RAM.

I've reviewed this SO guidance but am still running into the "Table is full" error. I'm new to the MEMORY engine, so any suggestions are appreciated.

Gunnar

2 Answers


It is max_heap_table_size, not tmp_table_size, that controls the maximum size of an explicitly created MEMORY table.

MEMORY has several quirks. Perhaps this one bit you: VARCHAR columns are turned into fixed-width CHAR columns. So a VARCHAR(255) takes 765 bytes per row if it is CHARACTER SET utf8.
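To illustrate why that quirk matters here, a rough back-of-envelope estimate (assuming, hypothetically, that a single utf8 VARCHAR(255) column dominates the row):

-- MEMORY stores VARCHAR(255) CHARACTER SET utf8 as CHAR(255):
-- 255 chars x 3 bytes = 765 bytes per row, regardless of actual content.
SELECT 255 * 3 AS bytes_per_value,            -- 765
       255 * 3 * 3000000 AS bytes_for_3m_rows; -- ~2.3 GB, far above a 512 MB cap

So a 300 MB InnoDB table can easily blow past max_heap_table_size once copied into MEMORY.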

Why do you want to copy a perfectly good InnoDB table into a MEMORY table? To avoid disk hits? No: if innodb_buffer_pool_size is big enough, the InnoDB table will effectively live in RAM anyway. To speed things up? Not necessarily, because InnoDB has row locks while MEMORY has only table locks.
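A quick way to check whether the buffer pool already covers the table (a sketch; compare the result against the table's ~300 MB footprint):

-- Current buffer pool size, in bytes. If this comfortably exceeds the
-- table size, reads are effectively served from RAM already.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';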

Please provide SHOW CREATE TABLE; there may be other things arguing against what you are doing, and it may explain why you got the error.

Rick James
  • Thank you for your comment. I should have mentioned that my history table is partitioned by year, and this temp table loads one of those years for read-only access. I refresh this temp table once per day. I am also slightly embarrassed to admit that I was trying to run this statement from Sequel Pro, which returned the error. Running it from the command line successfully created the temporary table in memory. My initial tests of query response time against this new temp MEMORY table show a 90% improvement. – Gunnar Apr 27 '16 at 22:20
  • Reloading only once a day? You may as well give the space you would have given to MEMORY to the buffer pool instead. – Rick James May 03 '16 at 23:29

Find out which tables you have created with ENGINE=MEMORY:

SELECT table_name,
       ROUND(((data_length + index_length) / 1024 / 1024), 2) AS `Size in MB`
FROM information_schema.tables
WHERE table_schema = 'db1' AND engine = 'MEMORY';

Use the top command to see how much memory each process is consuming. For very large tables, avoid the MEMORY engine; consider a column store such as Amazon Redshift instead.

MontyPython