What is the upper limit of records for a MySQL database table? I'm wondering about the auto-increment field. What would happen if I add millions of records? How should I handle this kind of situation? Thanks!
- At least if memory serves, the limit is set by the storage engine, so (for example) using MyISAM you get a different limit from using InnoDB. – Jerry Coffin Apr 26 '10 at 19:35
- @Onion-Knight: I don't agree. It's normal to insert millions of *rows* in a single table, and some databases do have a limit, so it's worth asking. If one asks if MySQL supports millions of *tables*, then that's probably a sign of an architectural blunder. – Bill Karwin Apr 26 '10 at 20:08
8 Answers
The greatest value of an integer has little to do with the maximum number of rows you can store in a table.
It's true that if you use an int or bigint as your primary key, you can only have as many rows as there are unique values in your primary key's data type. But you don't have to make your primary key an integer; you could make it a CHAR(100). You could also declare the primary key over more than one column.
There are other constraints on table size besides number of rows. For instance, you could use an operating system that has a file size limitation. Or you could have a 300GB hard drive that can store only 300 million rows if each row is 1KB in size.
The limits on database size are really high:
http://dev.mysql.com/doc/refman/5.1/en/source-configuration-options.html
The MyISAM storage engine supports 2^32 rows per table, but you can build MySQL with the --with-big-tables option to make it support up to 2^64 rows per table.
http://dev.mysql.com/doc/refman/5.1/en/innodb-restrictions.html
The InnoDB storage engine has an internal 6-byte row ID per table, so the maximum number of rows is 2^48, or 281,474,976,710,656.
An InnoDB tablespace also has a limit on table size of 64 terabytes. How many rows fit into this depends on the size of each row.
The 64TB limit assumes the default page size of 16KB. You can increase the page size, and therefore increase the tablespace up to 256TB. But I think you'd find other performance factors make this inadvisable long before you grow a table to that size.
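A quick back-of-the-envelope check of the engine limits quoted above (a sketch in Python, used here only for the arithmetic; binary units are assumed, i.e. 1 TiB = 2^40 bytes and 16 KiB pages):

```python
# Rough sanity checks on the engine limits quoted above.
myisam_rows = 2**32        # MyISAM default build
myisam_big_rows = 2**64    # built with --with-big-tables

# InnoDB's internal 6-byte row ID caps a table at 2**48 rows:
innodb_max_rows = 2**48
assert innodb_max_rows == 281_474_976_710_656

# A 64 TiB tablespace holds 2**32 pages at the default 16 KiB page size:
tablespace_bytes = 64 * 2**40
page_size = 16 * 2**10
assert tablespace_bytes // page_size == 2**32

# How many rows fit depends on row size; e.g. with ~1 KiB rows:
rows_at_1k = tablespace_bytes // 2**10
print(f"{rows_at_1k:,} rows of 1 KiB fit in 64 TiB")
```

In other words, at realistic row sizes the tablespace size, not the row-ID width, is what you hit first.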

- crap - I wish I would have read this before... I just surpassed the 64 terabyte size on one of my tables and now my system is so slow! – JM4 Nov 30 '10 at 01:00
- 2^32 = 4,294,967,295 and 2^64 = 18,446,744,073,709,551,615 so... The greatest integer value does have a bit to do with the maximum number of rows. Not necessarily the primary key. – teynon Jan 14 '13 at 21:35
- MyISAM is irrelevant for what reason? Considering that MyISAM is the default and many users don't even realize that there are different database options, I'd say a large percentage use it, meaning it has a pretty significant relevance to this question. – teynon Jan 15 '13 at 01:29
- @Tom, InnoDB is the default storage engine in MySQL 5.5, and is the better choice in 99% of cases. – Bill Karwin Jan 15 '13 at 01:51
- There are still several classic applications which make heavy use of full-text indexes; the equivalent in InnoDB was only added in MySQL 5.6.4. http://dev.mysql.com/doc/refman/5.6/en/innodb-table-and-index.html#innodb-fulltext-index – Ext3h Jan 30 '14 at 19:38
- @Ext3h, [Sphinx Search](http://sphinxsearch.com/) is usually a better choice than fulltext indexes in either MyISAM or InnoDB. – Bill Karwin Jan 31 '14 at 02:57
- If the number-of-rows limit of a MySQL table is well-defined and falls below a `bigint` value of, let's say, `18,000,000,000,000,000,000`, what is the purpose of `bigint` then? Is it because they can be used in sharded architectures in order to uniquely identify an entity? Thank you! – tonix Oct 26 '20 at 14:43
- @tonix The practical answer is that a 64-bit integer is easy to store in the long integer type in C++ code. Besides that, there are uses of integers besides primary keys. – Bill Karwin Oct 26 '20 at 14:51
- @JM4 Your comment above was irony in 2010, but today in 2021 my coworkers are actually requesting Amazon expand the size limit of a cloud instance from 64TiB to 256TiB. What a difference 11 years makes! – Bill Karwin Jun 08 '21 at 17:32
- This is way better than the answer that tells you the values of 2^32 and 2^64, but you can need a bigint key without having any records. If you have a bigint autoincrement key, you can insert and roll back more than 2^32 times and have no records. Rolled-back inserts still consume an ID. – doug65536 Apr 11 '22 at 08:20
- @doug65536 You could also start a bigint unsigned auto-increment at 2^64-2, then insert and commit 2 rows, and you won't be able to insert any more rows using auto-increment. But you could still insert more rows to the table if you set id with lesser values explicitly, bypassing the auto-increment generator. This question is not about the maximum value of an auto-increment; it is about the maximum number of rows in a table. – Bill Karwin Apr 11 '22 at 12:51
MySQL int types can handle quite a few rows: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
- unsigned int: largest value is 4,294,967,295
- unsigned bigint: largest value is 18,446,744,073,709,551,615
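Those two maxima are simply 2^32 − 1 and 2^64 − 1 (one bit fewer when signed); a quick arithmetic check, sketched in Python:

```python
# Unsigned column types use all bits for magnitude:
assert 2**32 - 1 == 4_294_967_295                  # unsigned INT max
assert 2**64 - 1 == 18_446_744_073_709_551_615     # unsigned BIGINT max

# Signed variants lose one bit to the sign:
assert 2**31 - 1 == 2_147_483_647                  # signed INT max
assert 2**63 - 1 == 9_223_372_036_854_775_807      # signed BIGINT max

print("all integer range checks pass")
```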

- 2147483647 max, so you only have to make autoincrement bigint if you're working with multiple billions of entries? (which would probably just make your select statements melt down long before then) – Kzqai Apr 26 '10 at 19:35
- @Tchalvak that's the signed int maximum; please read the MySQL documentation. – Leandro Bardelli Jul 19 '12 at 21:10
- **The context of the question is whether the auto-increment field can handle a lot of rows, not the limitations of other resources.** – KM. May 15 '13 at 12:45
- The poster is not asking about numeric or any other data types. I really do not understand how this can be flagged as a correct answer. Though I must admit that the question is ambiguous, we should distinguish between PK data type and maximum number of rows for a table. – Bery Jan 15 '15 at 10:20
- @Bery, the OP has distinguished what they were after by selecting this as their answer. Apparently they were interested in the capacity of the auto-increment field, which my answer covers, and not the limitations of other resources. – KM. Jun 05 '15 at 19:47
- Still - the question is about the table record limit and not the data type's maximum value, as the "unsigned bigint" maximum does not need to be equal to the maximum record count. In other words, changing your PK to bigint may not grant you the expected number of records. – Bery Jun 23 '15 at 14:27
- @Bery, how can you say what the question is about? The question isn't completely clear, but only the OP knows what they were after. I would say that by the OP selecting this answer, I answered what they were after. – KM. Jul 20 '15 at 18:11
- Hey, please look at @Bill Karwin's answer before moving away from this page. – Ankit Arora Dec 04 '18 at 17:37
- @KM. It doesn't matter what has been upvoted by the OP; it's still not the answer to the question. @Bery It's very clear: the very first sentence is concrete and clear. – John Stock May 27 '20 at 17:03
- @KM Search engines think the question is about the maximum number of records in a mysql table. – doug65536 Apr 11 '22 at 08:14
I suggest you never delete data. Don't say "if the table is longer than 1000 rows, truncate the end of the table." There needs to be real business logic in your plan, like how long this user has been inactive. For example, if it is longer than 1 year, then put them in a different table. You would have this happen weekly or monthly in a maintenance script in the middle of a slow time.
When you run into too many rows in your table, then you should start sharding or partitioning the tables and put old data in old tables by year, such as users_2011_jan, users_2011_feb, or use numbers for the month. Then change your programming to work with this model. Maybe make a new table with fewer columns to summarize the data, and only refer to the bigger partitioned tables when you need more information, such as when the user is viewing their profile. All of this should be considered very carefully so that in the future it isn't too expensive to refactor. You could also put only the users who come to your site all the time in one table, and the users that never come in an archived set of tables.
- In this regard, it is very useful to look at MySQL partitioning: https://dev.mysql.com/doc/refman/5.6/en/partitioning.html – Wim Deblauwe Apr 14 '15 at 12:00
In InnoDB, with a limit on table size of 64 terabytes and a MySQL row-size limit of 65,535 bytes, there can be 1,073,741,824 rows. That would be the minimum number of records, utilizing the maximum row-size limit. However, more records can be added if the row size is smaller.
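The 1,073,741,824 figure is the 64 TiB tablespace limit divided by a 64 KiB row (the 65,535-byte row limit rounded up to 2^16); dividing by 65,535 exactly gives a slightly larger count. An arithmetic sketch in Python, with binary units assumed:

```python
tablespace_bytes = 64 * 2**40   # 64 TiB InnoDB table size limit
max_row_bytes = 65_535          # MySQL row-size limit

# Rounding the row size up to 64 KiB gives the answer's figure:
assert tablespace_bytes // 2**16 == 1_073_741_824

# Dividing by 65,535 bytes exactly yields slightly more rows:
min_rows = tablespace_bytes // max_row_bytes
print(f"{min_rows:,} maximal-size rows")   # 1,073,758,208
```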

- for storing this many (1,073,741,824) rows with a row limit of 65,535 bytes, how much hard disk size is required? please suggest – davidb Nov 15 '17 at 09:08
- The hard disk size required cannot be determined based on the number of rows and row size alone. The table size itself will be 64 terabytes. However, TEXT and BLOB columns' data are stored separately from the row and will need additional space. In addition, it will depend on the number and type of the TEXT and BLOB columns, as the size varies depending on the type. There are four types of TEXT columns, namely TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT. There are also four types of BLOB columns, namely TINYBLOB, BLOB, MEDIUMBLOB, and LONGBLOB. – Xylo Nov 16 '17 at 08:50
According to the Scalability and Limits section in http://dev.mysql.com/doc/refman/5.6/en/features.html, MySQL supports large databases. They use MySQL Server with databases that contain 50 million records. Some users use MySQL Server with 200,000 tables and about 5,000,000,000 rows.

- it could help if you also let us know what kind of hardware "they" were using – my account_ram Sep 01 '13 at 12:50
- Indeed, you're right. But unfortunately 'they' didn't say anything about the hardware. – Data Sep 30 '13 at 08:33
- @myaccount_ram sorry to necromance this, but if it's helpful I have seen the less theoretical, more practical limits of production MySQL in action. I have seen a database that is ~18 billion rows on 2x db.r4.16xlarge AWS instances (1 reader, 1 writer). Each of the machines had 64 CPU cores, 488GB of RAM, a 25Gbps network link, and 64TB of disk. This scale of db was pushing both CPU and disk size limits, and AWS does not provide any larger DB-optimized instances. It was replaced with a simpler db schema that didn't require as many rows. – Skylar Brown Jul 13 '20 at 23:59
The maximum row size for a given table is determined by several factors:
The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, even if the storage engine is capable of supporting larger rows. BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size limit because their contents are stored separately from the rest of the row.
The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page. For example, the maximum row size is slightly less than 8KB for the default 16KB InnoDB page size, which is defined by the innodb_page_size configuration option. See “Limits on InnoDB Tables”.
- If a row containing variable-length columns exceeds the InnoDB maximum row size, InnoDB selects variable-length columns for external off-page storage until the row fits within the InnoDB row size limit. The amount of data stored locally for variable-length columns that are stored off-page differs by row format. For more information, see “InnoDB Row Storage and Row Formats”.
- Different storage formats use different amounts of page header and trailer data, which affects the amount of storage available for rows.

Link: http://dev.mysql.com/doc/refman/5.7/en/column-count-limit.html
Row Size Limits
The maximum row size for a given table is determined by several factors:
The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, even if the storage engine is capable of supporting larger rows. BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size limit because their contents are stored separately from the rest of the row.
The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page for 4KB, 8KB, 16KB, and 32KB innodb_page_size settings. For example, the maximum row size is slightly less than 8KB for the default 16KB InnoDB page size. For 64KB pages, the maximum row size is slightly less than 16KB. See Section 15.8.8, “Limits on InnoDB Tables”.
If a row containing variable-length columns exceeds the InnoDB maximum row size, InnoDB selects variable-length columns for external off-page storage until the row fits within the InnoDB row size limit. The amount of data stored locally for variable-length columns that are stored off-page differs by row format. For more information, see Section 15.11, “InnoDB Row Storage and Row Formats”.
Different storage formats use different amounts of page header and trailer data, which affects the amount of storage available for rows.
For information about InnoDB row formats, see Section 15.11, “InnoDB Row Storage and Row Formats”, and Section 15.8.3, “Physical Row Structure of InnoDB Tables”.
For information about MyISAM storage formats, see Section 16.2.3, “MyISAM Table Storage Formats”.
http://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html

There is no fixed limit. It only depends on your free memory and the system's maximum file size. But that doesn't mean you shouldn't take precautionary measures in tackling memory usage in your database. Always create a script that can delete rows that are out of use, or that will keep the total number of rows within a particular figure, say a thousand.

- Deleting rows that you think are 'out of use' is dangerous, and causes way more issues than it solves. A previous developer on one of my projects implemented a script that deleted shopping carts more than three days old, thinking he was doing the right thing. Guess what: it caused issues weekly. Only delete data if you really need to. – Ben Hitchcock Jan 07 '14 at 00:37
- A worse case is when someone starts storing file paths in a database... hey, where did all my project files go?... I have a small project that starts at 3.5M files; guess what... they aren't all used frequently. – Kendrick Apr 10 '15 at 00:17