
Possible Duplicate:
What are the performance characteristics of sqlite with very large database files?

I want to create a .NET application that uses a database containing around 700 million records in one of its tables. Would SQLite's performance be adequate for this scenario, or should I use SQL Server? I like the portability that SQLite gives me.

Alireza Noori

4 Answers


Go for SQL Server for sure. 700 million records in SQLite is too much.

With SQLite you have the following limitations:

  • Only one process can write at a time.
  • No mirroring.
  • No replication.

Check out this thread: What are the performance characteristics of sqlite with very large database files?

Habib

SQLite SHOULD be able to handle this much data. However, you may have to configure it to allow the database to grow to that size, and you shouldn't keep this much data in an "in-memory" instance of SQLite, just on general principles.

For more detail, see this page, which explains the practical limits of the SQLite engine. The relevant config settings are the page size (up to 64KB) and the page count (up to roughly 2.1 billion, the maximum of a signed 32-bit integer). Do the math, and the entire database can grow to roughly 140TB. A database consisting of a single table with 700m rows would be on the order of tens of gigs; easily manageable.
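If you do need to adjust those limits, they are exposed as PRAGMAs. Below is a minimal sketch, assuming the System.Data.SQLite ADO.NET provider; the file name and the specific page size are placeholders, not recommendations:

```csharp
using System.Data.SQLite; // assumes the System.Data.SQLite NuGet package

class ConfigureDb
{
    static void Main()
    {
        // "big.db" is a placeholder file name.
        using (var conn = new SQLiteConnection("Data Source=big.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // page_size only takes effect on a brand-new database (or after VACUUM),
                // so set it before the first table is created.
                cmd.CommandText = "PRAGMA page_size = 8192;";
                cmd.ExecuteNonQuery();

                // Upper bound on the number of pages the file may hold.
                cmd.CommandText = "PRAGMA max_page_count = 2147483646;";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```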

However, just because SQLite CAN store that much data doesn't mean you SHOULD. The biggest drawback of SQLite for large datastores is that the SQLite code runs as part of your process, using the thread on which it's called and taking up memory in your sandbox. You don't get the tools that are available in server-oriented DBMSes to "divide and conquer" large queries or datastores, like replication/clustering. With a table this large, insertions and deletions will take a very long time, because each row has to be put in the right place and every index updated. Selection MAY be livable, but only for indexed queries; a page or table scan will absolutely kill you.
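To make that concrete, the usual mitigations are to wrap bulk inserts in a single transaction and to keep the columns you filter on indexed. A rough sketch, again assuming System.Data.SQLite and a made-up readings table:

```csharp
using System.Data.SQLite; // assumes the System.Data.SQLite NuGet package

class BulkLoad
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=big.db")) // placeholder file name
        {
            conn.Open();

            using (var cmd = conn.CreateCommand())
            {
                // Hypothetical table; the real schema will differ.
                cmd.CommandText = "CREATE TABLE IF NOT EXISTS readings " +
                                  "(id INTEGER PRIMARY KEY, sensor INTEGER, value REAL);";
                cmd.ExecuteNonQuery();
            }

            // One transaction around the whole batch: without it, every INSERT
            // is its own transaction and forces a sync to disk.
            using (var tx = conn.BeginTransaction())
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = "INSERT INTO readings (sensor, value) VALUES (@s, @v);";
                var s = new SQLiteParameter("@s");
                var v = new SQLiteParameter("@v");
                cmd.Parameters.Add(s);
                cmd.Parameters.Add(v);

                for (int i = 0; i < 100_000; i++)
                {
                    s.Value = i % 50;
                    v.Value = i * 0.1;
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();
            }

            using (var cmd = conn.CreateCommand())
            {
                // Keep point queries on `sensor` indexed so they never fall back to a table scan.
                cmd.CommandText = "CREATE INDEX IF NOT EXISTS idx_readings_sensor ON readings(sensor);";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```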

KeithS

700m is a lot.

To give you an idea: say your record size is 4 bytes (essentially storing a single value); then your DB is going to be over 2GB. If your record size is closer to 100 bytes, then it's closer to 65GB (and that's not including space used by indexes, transaction log files, etc.).
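Spelling that arithmetic out (the per-row sizes are the same illustrative figures as above, not measurements):

```csharp
class SizeEstimate
{
    static void Main()
    {
        const long rows = 700_000_000;

        // Illustrative per-row payload sizes only; real rows will differ.
        foreach (var bytesPerRow in new long[] { 4, 100 })
        {
            double gib = rows * bytesPerRow / (1024.0 * 1024 * 1024);
            System.Console.WriteLine($"{bytesPerRow,4} bytes/row -> ~{gib:F1} GiB before indexes, logs and overhead");
        }
    }
}
```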

We do a lot of work with large databases and I'd never consider SQLite for anything of that size. Quite frankly, "portability" is the least of your concerns here. In order to query a DB of that size with any sort of responsiveness, you will need an appropriately sized database server. I'd start with 32GB of RAM and fast drives.

If it's 90%+ write-heavy, you might get away with less RAM. If it's read-heavy, then you will want to build it out so that the machine can keep as much of the DB (or at least the indexes) in RAM as possible. Otherwise you'll be dependent on disk spindle speeds.

NotMe

I've had tables with similar record counts and no problems retrieval-wise.

For starters, look at the hardware and the memory allocated to the server. See this for examples: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/

Regardless of size or number of records, as long as you:

  • create indexes on foreign key(s),
  • store common queries in Views (http://en.wikipedia.org/wiki/View_%28database%29),
  • and maintain the database and tables regularly

you should be fine. Also, setting the proper column type/size for each column will help.
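As a rough sketch of the first two points against SQL Server, assuming the System.Data.SqlClient provider and a made-up dbo.Orders table (connection string and object names are placeholders):

```csharp
using System.Data.SqlClient; // classic ADO.NET provider for SQL Server

class TuneSchema
{
    static void Main()
    {
        // Placeholder connection string.
        var connStr = "Server=.;Database=MyBigDb;Integrated Security=true;";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Index the foreign key used by the most common joins/filters.
                cmd.CommandText = "CREATE INDEX IX_Orders_CustomerId ON dbo.Orders(CustomerId);";
                cmd.ExecuteNonQuery();

                // Capture a common query as a view.
                cmd.CommandText =
                    "CREATE VIEW dbo.RecentOrders AS " +
                    "SELECT CustomerId, OrderDate, Total FROM dbo.Orders " +
                    "WHERE OrderDate >= DATEADD(day, -30, GETDATE());";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```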

Jake Toolson