72

I recently read about SQLite and thought I would give it a try. When I insert one record it performs okay, but when I insert one hundred it takes five seconds, and as the record count increases so does the time. What could be wrong? I am using the SQLite wrapper (System.Data.SQLite):

dbcon = new SQLiteConnection(connectionString);
dbcon.Open();

//---INSIDE LOOP

 SQLiteCommand sqlComm = new SQLiteCommand(sqlQuery, dbcon);

 nRowUpdatedCount = sqlComm.ExecuteNonQuery(); 

//---END LOOP

dbcon.Close();
Verve Innovation
  • For anyone reading this who uses a cloud sync folder like Dropbox: I was having extremely slow INSERTs using SQLAlchemy (~1.5s per entry!) into my SQLite DB, which was located in a Dropbox sync folder on my C drive. I paused Dropbox sync and the problem was fixed; now ~100k entries take <2sec. Dropbox sync must be locking the DB during INSERTs. All good now; maybe this also helps someone else. – Paul Aug 14 '22 at 10:24

4 Answers

95

Wrap BEGIN/END statements around your bulk inserts. SQLite is optimized for transactions; without an explicit one, every single INSERT runs in its own transaction.

dbcon = new SQLiteConnection(connectionString);
dbcon.Open();

SQLiteCommand sqlComm;
sqlComm = new SQLiteCommand("begin", dbcon);
sqlComm.ExecuteNonQuery(); 
//---INSIDE LOOP

 sqlComm = new SQLiteCommand(sqlQuery, dbcon);

 nRowUpdatedCount = sqlComm.ExecuteNonQuery(); 

//---END LOOP
sqlComm = new SQLiteCommand("end", dbcon);
sqlComm.ExecuteNonQuery(); 
dbcon.Close();
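
An equivalent way to get the same single transaction is the ADO.NET `BeginTransaction()` API (also mentioned in the comments below). A minimal sketch, reusing the `connectionString`, `sqlQuery` and `nRowUpdatedCount` names from the question:

using System.Data.SQLite;

SQLiteConnection dbcon = new SQLiteConnection(connectionString);
dbcon.Open();

// One transaction wraps the whole loop, so there is a single commit at the end.
using (SQLiteTransaction transaction = dbcon.BeginTransaction())
{
    //---INSIDE LOOP
    SQLiteCommand sqlComm = new SQLiteCommand(sqlQuery, dbcon, transaction);
    nRowUpdatedCount = sqlComm.ExecuteNonQuery();
    //---END LOOP

    transaction.Commit();
}
dbcon.Close();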
tidwall
  • +1 This is mentioned in the [SQLite FAQ, #19](http://www.sqlite.org/faq.html#q19) - when you're doing this without a begin/end, SQLite is creating a transaction for each insert. – Jared Harley Oct 04 '10 at 00:17
  • Why have you used 3 ExecuteNonQuery calls when one could do the job? – Verve Innovation Oct 04 '10 at 00:38
  • 3 `ExecuteNonQuery` calls because there is 1 for the `BEGIN`, 1 (or more) for each `INSERT`, and 1 for the `END`. Unless you add all your SQL statements to one string (delimited by semicolons), you need multiple `ExecuteNonQuery` calls. – tidwall Oct 04 '10 at 02:02
  • I tried this in my code and it's just saying "no transaction is active"; any idea why this would happen? – Brian Tacker Nov 23 '11 at 22:59
  • @Brian Make sure that you have one BEGIN and one END for each connection. Also do not nest the transactions. Only one per connection. If you need to nest, then either open another connection or use [SAVEPOINTS](http://www.sqlite.org/lang_transaction.html). – tidwall Nov 24 '11 at 17:16
  • I am just using one connection and no nesting. – Brian Tacker Nov 25 '11 at 19:52
  • This made the difference between 15 queries per second and 4000 queries per second. Thanks! – Luc Feb 07 '17 at 17:59
  • You can instead wrap this around your loop: `using (SQLiteTransaction DBTransaction = dbcon.BeginTransaction())` – Alexander Aug 09 '18 at 14:33
  • Wow! This makes a huge difference. I was doing about 65k inserts at a rate of maybe 2-3 per second. Was taking forever to import my data. Putting a transaction break every 1000 INSERTs sped it up to about 1000 per second. My import was done in about a minute. Wrapping the whole 65k INSERTs up in one big transaction took a few seconds. The commit was nearly instantaneous. Impressive difference. – Jim Carnicelli Mar 13 '22 at 21:34
51

I read everywhere that creating transactions is the solution to slow SQLite writes, but it can be long and painful to rewrite your code and wrap all your SQLite writes in transactions.

I found a much simpler, safe and very efficient method: enabling an optimisation that has been available since SQLite 3.7.0 but is disabled by default: the Write-Ahead Log (WAL). The documentation says it works on all Unix (i.e. Linux and OS X) and Windows systems.

How? Just run the following commands after initializing your SQLite connection:

PRAGMA journal_mode = WAL
PRAGMA synchronous = NORMAL

My code now runs ~600% faster: my test suite now runs in 38 seconds instead of 4 minutes :)
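
With System.Data.SQLite you can issue the two pragmas above through an ordinary command right after opening the connection. A minimal sketch (the connection plumbing mirrors the question; only the PRAGMA text comes from this answer):

using System.Data.SQLite;

SQLiteConnection dbcon = new SQLiteConnection(connectionString);
dbcon.Open();

// Enable write-ahead logging and relax fsync behaviour (requires SQLite >= 3.7.0).
using (SQLiteCommand pragmaCmd = new SQLiteCommand(
    "PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL;", dbcon))
{
    pragmaCmd.ExecuteNonQuery();
}

// ...run your INSERTs as usual from here on...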

david_p
  • Thanks! BTW, you can probably use in-memory SQLite mode for tests. – gavv Apr 29 '16 at 14:40
  • This is probably the best solution if you have multiple threads saving data and don't want to make a lot of code changes to group all inserts/updates into one single call. – cahen Mar 14 '17 at 17:24
  • You saved my day :)) Thanks! 1000 times faster! – yo3hcv Oct 29 '18 at 08:45
  • I was seeing a more than 10x performance difference between the same insert-intensive program in MacOS and Ubuntu. These lines made Ubuntu perform like MacOS. Thanks. – Marcelo Lazaroni May 02 '20 at 01:26
36

Try wrapping all of your inserts (aka, a bulk insert) into a single transaction:

string insertString = "INSERT INTO [TableName] ([ColumnName]) VALUES (@value)";

SQLiteCommand command = new SQLiteCommand();
command.CommandText = insertString;
command.Connection = dbConnection;
command.Parameters.AddWithValue("@value", value);

SQLiteTransaction transaction = dbConnection.BeginTransaction();
try
{
    //---INSIDE LOOP
    command.Parameters["@value"].Value = value;
    nRowUpdatedCount = command.ExecuteNonQuery();
    //---END LOOP

    transaction.Commit();
    return true;
}
catch (SQLiteException)
{
    transaction.Rollback();
    return false;
}

By default, SQLite wraps every INSERT in its own transaction, which slows down the process. As the SQLite FAQ puts it:

> **INSERT is really slow - I can only do few dozen INSERTs per second**
>
> Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second.
>
> Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.
>
> By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.
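
For very large imports you can also commit in batches, so the commit cost is still amortized without building one enormous transaction. A rough sketch, reusing the `insertString` and `dbConnection` names from the snippet above (the batch size of 1000 and the `values` collection are arbitrary, illustrative choices):

using (SQLiteCommand command = new SQLiteCommand(insertString, dbConnection))
{
    SQLiteParameter valueParam = command.Parameters.Add("@value", System.Data.DbType.String);

    SQLiteTransaction transaction = dbConnection.BeginTransaction();
    int rowsInBatch = 0;

    foreach (string value in values)   // hypothetical source of rows to insert
    {
        command.Transaction = transaction;
        valueParam.Value = value;
        command.ExecuteNonQuery();

        // Commit every 1000 rows and start a fresh transaction.
        if (++rowsInBatch % 1000 == 0)
        {
            transaction.Commit();
            transaction = dbConnection.BeginTransaction();
        }
    }

    transaction.Commit();   // commit the final, possibly partial batch
}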

Jared Harley
9

See "Optimizing SQL Queries" in the ADO.NET help file SQLite.NET.chm. Code from that page:

using (SQLiteTransaction mytransaction = myconnection.BeginTransaction())
{
  using (SQLiteCommand mycommand = new SQLiteCommand(myconnection))
  {
    SQLiteParameter myparam = new SQLiteParameter();
    int n;

    mycommand.CommandText = "INSERT INTO [MyTable] ([MyId]) VALUES(?)";
    mycommand.Parameters.Add(myparam);

    for (n = 0; n < 100000; n ++)
    {
      myparam.Value = n + 1;
      mycommand.ExecuteNonQuery();
    }
  }
  mytransaction.Commit();
}
Scott