34

Given a simple sqlite3 table (`create table data (key PRIMARY KEY, value)`) with a key size of 256 bytes and a value size of 4096 bytes, what is the limit (ignoring disk space limits) on the maximum number of rows in this sqlite3 table? Are there limits associated with the OS (win32, Linux, or Mac)?

Vivek S.
volatilevoid

7 Answers

33

As of Jan 2017, the sqlite3 limits page defines the practical limit for this question in terms of the maximum database size, which is 140 terabytes:

Maximum Number Of Rows In A Table

The theoretical maximum number of rows in a table is 2^64 (18446744073709551616 or about 1.8e+19). This limit is unreachable since the maximum database size of 140 terabytes will be reached first. A 140 terabytes database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.

So with a max database size of 140 terabytes you'd be lucky to get ~1 trillion rows, because a genuinely useful table constrains the row count by the size of its data. With the 256-byte keys and 4096-byte values from the question, you could expect on the order of tens of billions of rows in a 140 TB database.
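As a back-of-the-envelope check, here is a minimal sketch of that arithmetic; it ignores page headers, per-cell overhead, and the implicit index on the PRIMARY KEY, so the real number would be lower:

```python
# Rough row-count ceiling for a 140 TB SQLite database holding
# 256-byte keys and 4096-byte values, ignoring all storage overhead.
MAX_DB_BYTES = 140 * 10**12  # 140 terabytes
ROW_BYTES = 256 + 4096       # key + value payload per row

rows = MAX_DB_BYTES // ROW_BYTES
print(f"~{rows:,} rows")     # ~32,169,117,647 -- tens of billions
```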

aculich
  • Was just reading the documentation ... 1.4e+14 bytes (140 terabytes), but the real size limit is the underlying operating system's file size limits, which is much lower. – BonanzaDriver Apr 09 '14 at 20:51
  • @aculich, This is a theoretical answer that has no use **in practice**. For a practical answer that practical programmers writing practical applications would practically want to know in practice, please read http://stackoverflow.com/q/784173/632951 – Pacerier Jan 30 '17 at 11:37
  • @BonanzaDriver depends on the fs. ext4 max: ~17TB. NTFS max: 8000TB. BTRFS and ZFS max: 18.4 MILLION terabytes. XFS and APFS and HFS+: 9.2 MILLION terabytes. – hanshenrik Oct 31 '21 at 09:57
29

I have a SQLite database 3.3 GB in size with 25 million rows of stored numeric logs, and I run calculations on them; it works fast and well.

John
18

In SQLite3 the field size isn't fixed. The engine allocates only as much space as each cell needs.
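You can see this with a quick sketch using Python's built-in sqlite3 module (the table mirrors the one from the question; values of very different lengths share the same column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")

# Each cell is stored as a variable-length record, so these two rows
# occupy very different amounts of space under the same schema.
con.execute("INSERT INTO data VALUES (?, ?)", ("a", "x"))
con.execute("INSERT INTO data VALUES (?, ?)", ("b", "y" * 4096))

for key, length in con.execute("SELECT key, length(value) FROM data ORDER BY key"):
    print(key, length)  # a 1, then b 4096
```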

For the file limits see this SO question:
What are the performance characteristics of sqlite with very large database files?

Nick Dandoulakis
13

I have a 7.5 GB SQLite database which stores 10.5 million rows. Querying is fast as long as you have the right indexes. To get inserts to run quickly, use transactions. Also, I found it better to create the indexes after all rows have been inserted; otherwise the insert speed is quite slow.
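A minimal sketch of that insert pattern, assuming Python's built-in sqlite3 module and a hypothetical stream of rows:

```python
import sqlite3

con = sqlite3.connect("logs.db")
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")

rows = ((f"key-{i}", "payload") for i in range(1_000_000))  # hypothetical data

# One transaction for the whole batch: a single commit at the end
# instead of one fsync per INSERT statement.
with con:
    con.executemany("INSERT INTO data VALUES (?, ?)", rows)

# Build secondary indexes after the bulk load; maintaining them
# row by row during the inserts is what makes loading slow.
con.execute("CREATE INDEX idx_value ON data(value)")
con.commit()
con.close()
```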

Fidel
4

The answer you want is right here.

Each OS you mentioned supports multiple file system types. The actual limits will be per-filesystem, not per-OS. It's difficult to summarize the constraint matrix on SO, but while some file systems impose limits on file sizes, all major OS kernels today support a file system with extremely large files.

An sqlite3 database itself can be quite large: the 140 TB figure above comes from roughly 2^31 pages of up to 64 KiB each, although reaching that requires some configuration. I presume an index must specify a page number, but in practice an OS or environment limit is likely to be reached first.
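For what it's worth, here is a sketch (again assuming Python's sqlite3 module) of how the page size and maximum page count combine into the overall capacity on your build; `PRAGMA page_size` must be set before any content is written:

```python
import sqlite3

con = sqlite3.connect("big.db")
con.execute("PRAGMA page_size = 65536")  # 64 KiB, the largest page SQLite accepts
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")
con.commit()

# Capacity is simply page size times the maximum page count.
(page_size,) = con.execute("PRAGMA page_size").fetchone()
(max_pages,) = con.execute("PRAGMA max_page_count").fetchone()
print(f"capacity ~ {page_size * max_pages / 1e12:.0f} TB")
```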

DigitalRoss
2

Essentially, no real limits.

See http://www.sqlite.org/limits.html for details.

Martin Beckett
  • The practical limits for rows [have been updated](http://stackoverflow.com/a/9456705/462302) to be more well-defined. Even when they weren't well-defined there were always still real limits, even though "SQLite was originally designed with a policy of avoiding arbitrary limits ... Unfortunately, the no-limits policy has been shown to create problems. Because the upper bounds were not well defined, they were not tested, and bugs (including possible security exploits) were often found when pushing SQLite to extremes." – aculich Feb 26 '12 at 20:15
0

No hard limits, but in practice an sqlite database becomes unwieldy past a certain point. For huge databases, PostgreSQL is by far the top free database. In my case that point was about 1 million rows on my 64-bit Linux machine with two quad-core processors, 8 GB RAM, and Raptor hard disks; PostgreSQL was unbeatable, even by a tuned MySQL database. (Posted in 2011.)

Erick