147

I'm trying to add a row to an InnoDB table with a simple query:

INSERT INTO zip_codes (zip_code, city) VALUES ('90210', 'Beverly Hills');

But when I attempt this query, I get the following:

ERROR 1114 (HY000): The table zip_codes is full

Doing a

SELECT COUNT(*) FROM zip_codes

gives me 188,959 rows, which doesn't seem like too many considering I have another table with 810,635 rows in that same database.

I am fairly inexperienced with the InnoDB engine and never ran into this issue with MyISAM. What are some of the potential problems here?

EDIT: This only occurs when adding a row to the zip_codes table.

Mahesh Waghmare
Wickethewok

23 Answers

119

EDIT: First check that you have not run out of disk space before turning to the configuration-related fix.

Your innodb_data_file_path setting in my.cnf seems to have too low a maximum size. With this example

innodb_data_file_path = ibdata1:10M:autoextend:max:512M

you cannot store more than 512 MB of data in all InnoDB tables combined.

Maybe you should switch to a file-per-table scheme using innodb_file_per_table.
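A minimal sketch of the relevant my.cnf section, assuming the default ibdata1 file name: dropping the max clause lifts the 512 MB cap, and innodb_file_per_table makes each new table get its own .ibd file instead of growing the shared tablespace.

```ini
[mysqld]
# Let the shared tablespace grow without an upper bound
innodb_data_file_path = ibdata1:10M:autoextend
# Store each new InnoDB table in its own .ibd file
innodb_file_per_table = 1
```

Both changes require a server restart; innodb_file_per_table only affects tables created (or rebuilt) afterwards.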

Martin C.
  • Where do we get this my.cnf file in Ubuntu? – Jun 16 '17 at 07:42
  • @Nadh In Ubuntu 16.04 it's part of `/etc/mysql/` and is partly split into additional files in `/etc/mysql/conf.d` – Martin C. Jun 17 '17 at 22:01
  • Mine worked after adding the `innodb_data_file_path` line to `/etc/mysql/mysql.conf.d/mysqld.cnf` and restarting the `mysql` and `apache2` services – Timmah Sep 26 '18 at 01:16
  • I realized that a 6 GB data dump required an equivalent amount of space in the OS partition on Windows to be imported into MySQL – veritas May 16 '23 at 09:55
93

Another possible reason is the partition being full - this is exactly what just happened to me.

maaartinus
  • This should always be the first thing to check. Always go back to the power cord; I have stumbled across this many times. – Kyias Mar 09 '17 at 15:08
  • You saved me a few hours of trying to change the MySQL configuration. The primary partition was full; I had to move the MySQL database to the data partition and then create a soft link. – Ganesh Krishnan Sep 02 '17 at 11:21
  • Use `df -h` to check disk usage – Amit Bera Nov 28 '19 at 11:00
  • This should always be the first thing to check, but usually when MySQL gives this error it deletes the data, and the partition is not full. – Nick Sep 30 '20 at 20:13
46

DOCKER USERS: This also happens when you have used around 90% of your Docker disk image size limit (roughly 10% seems to be needed for caching and the like). The wording is confusing, as this setting simply means the amount of disk space Docker can use for basically everything.

To fix, go to your Docker desktop settings > Disk > move slider a bit more to the right > Apply.


Sliq
  • That was a useful answer. I was running short on disk space and thought that was my issue, but it was indeed the virtual disk space setting in the Docker settings that I had to worry about. – delboy1978uk Feb 15 '23 at 15:54
27

You need to modify the size cap set in my.cnf for InnoDB tables. This limit is not set for individual tables; it applies to all the tables combined.

If you want the tablespace to autoextend up to 512 MB:

innodb_data_file_path = ibdata1:10M:autoextend:max:512M

If you don't know the limit, or don't want to cap the size, you can modify it like this:

innodb_data_file_path = ibdata1:10M:autoextend
Tarek
Daniel Luca CleanUnicorn
  • We hosted our DB on Amazon and it was configured with autoextend, but we hit the same problem, which I assume was due to reaching the configured storage limit. – borjab Nov 04 '16 at 09:03
27

You will also get the same error, ERROR 1114 (HY000): The table '#sql-310a_8867d7f' is full, if you try to add an index to a table that uses the MEMORY storage engine.

Green Card
  • This happened to me, but it seemed my client was using the wrong syntax. When adding the same index with a plain `ALTER TABLE my_table ADD INDEX my_index (column_a, column_b);` it worked. – thephper Dec 08 '19 at 00:23
14

This error also appears if the partition on which tmpdir resides fills up (due to an ALTER TABLE or another operation that needs temporary space).

fimbulvetr
12

In my case, this was because the partition hosting the ibdata1 file was full.

skiphoppy
11

You may be running out of space either in the partition where the MySQL tables are stored (usually /var/lib/mysql) or in the one where temporary tables are stored (usually /tmp).

You may want to:

  • monitor your free space during the index creation
  • point the tmpdir MySQL variable to a different location (this requires a server restart)
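Relocating tmpdir can be sketched like this; /mnt/bigdisk/mysql-tmp is a hypothetical path, which must exist and be writable by the mysql user before the restart:

```ini
[mysqld]
# Point temporary-table and ALTER TABLE scratch space at a roomier partition
tmpdir = /mnt/bigdisk/mysql-tmp
```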

Julio
8

I too faced this error while importing an 8 GB SQL dump file. I checked the drive MySQL was installed on: there was no space left. I freed some space by removing unwanted items and re-ran my database import command, and this time it was successful.

Arun Kumar
7

We had: SQLSTATE[HY000]: General error: 1114 The table 'catalog_product_index_price_bundle_sel_tmp' is full

Solved by editing the config of the DB:

nano /etc/my.cnf

tmp_table_size=256M
max_heap_table_size=256M

then restarting the DB.
Sition
  • Those settings are dangerous. They control the max in-memory size for temp tables in complex SELECTs. The limit is not just "per connection" but also "per tmp table", so such values could easily cause you to run out of RAM. – Rick James Jul 29 '16 at 22:29
7

In my case, it was simply because the MySQL server runs together with an application that writes so many logs that the disk became full.

You can check whether the disk has enough space using

df -h

If the disk usage percentage is 100%, you can use this command to find which directory is too large:

du -h -d 1 /
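The two checks above can be wrapped in a small script. The /var/lib/mysql datadir is an assumption (check the datadir variable in your my.cnf); the script falls back to the root filesystem when that path does not exist:

```shell
# Report how full the filesystem holding the MySQL datadir is.
DATADIR=${DATADIR:-/var/lib/mysql}   # assumed default datadir; adjust to your my.cnf
[ -d "$DATADIR" ] || DATADIR=/       # fall back to the root filesystem
usage=$(df -P "$DATADIR" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
echo "Filesystem holding $DATADIR is ${usage}% full"
# A partition at (or very near) 100% surfaces as ERROR 1114 in InnoDB
if [ "$usage" -ge 95 ]; then
    echo "WARNING: almost no free space left; InnoDB writes are likely to fail"
fi
```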
Kai
7

Unless you have enabled the innodb_file_per_table option, InnoDB keeps all data in one file, usually called ibdata1.

Check the size of that file and check you have enough disk space in the drive it resides on.

Quassnoi
7

If you use NDBCLUSTER as the storage engine, you should increase DataMemory and IndexMemory.

MySQL FAQ
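For NDB these parameters live in the management server's config.ini, not in my.cnf; the sizes below are illustrative examples only, to be tuned to your data set:

```ini
[ndbd default]
# Memory on each data node for row data and ordered indexes (example values)
DataMemory = 512M
IndexMemory = 64M
```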

Emil Davtyan
metdos
5

The disk is full at /var/www/mysql.

wukong
4

To quote the MySQL documentation:

The InnoDB storage engine maintains InnoDB tables within a tablespace that can be created from several files. This allows a table to exceed the maximum individual file size. The tablespace can include raw disk partitions, which allows extremely large tables. The maximum tablespace size is 64TB.

If you are using InnoDB tables and run out of room in the InnoDB tablespace, the solution is to extend it. See Section 13.2.5, [“Adding, Removing, or Resizing InnoDB Data and Log Files”.]
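Extending the shared tablespace can be sketched as follows: keep the existing ibdata1 specification (its stated size must match the file already on disk) and append a second, autoextending data file. The name ibdata2 and the sizes are hypothetical:

```ini
[mysqld]
# ibdata1 keeps its current on-disk size; ibdata2 is new and grows as needed
innodb_data_file_path = ibdata1:512M;ibdata2:50M:autoextend
```

This change takes effect only after a server restart.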

Ólafur Waage
3

For those of you whose issues still remain after trying to increase any of the various memory limits: setting internal_tmp_mem_storage_engine=MEMORY solved the issue for me.

I'm on Ubuntu 20.04.2, using MySQL 8.0.25-0ubuntu0.20.04.1.

Johan Dettmar
2

On CentOS 7 simply stopping and starting the MySQL service fixed this for me.

sudo service mysql stop

sudo service mysql start

crmpicco
  • Strangely, this worked for me as well... I didn't have any partitions that were more than 80% full, and just restarting fixed it. – n0nag0n Nov 04 '19 at 16:53
2

I faced the same problem because of low disk space: the partition hosting the ibdata1 file, which is the system tablespace for the InnoDB infrastructure, was full.

Saveendra Ekanayake
2

I was experiencing this issue... in my case, I'd run out of storage on my dedicated server. Check that if everything else fails, and consider increasing disk space or removing unwanted data or files.

NotJay
2

In my case, I was trying to run an ALTER TABLE command and the available disk space was less than the size of the table. Once I increased the disk space, the problem went away.

Pratik Singhal
  • I'm facing the same issue while altering a table, but I don't have much space on the VM. What should I do? Can I do this from another VM? – Mujeebcpy Aug 25 '21 at 06:26
1

In my case the server's disk was full, so the DB could not write its temp data. To solve it, you just have to free some space on your drive.

GPY
1

I fixed this problem by increasing the amount of memory available to the vagrant VM where the database was located.

yvoloshin
0

This could also be the InnoDB limit for the number of open transactions:

http://bugs.mysql.com/bug.php?id=26590

At 1024 concurrent transactions that have undo records (i.e., that have edited any data), InnoDB will fail to work.