
I know about SQLite's 'problem' when inserting/updating many rows, but that's not the case here.

I'm updating ONE field in ONE row, indexed by PK, in a table with ~ 250 records. The query always takes ~ 200 ms. That sounds like very little, but it's huge.

Why does one very simple UPDATE query take 200 ms?? All reads are blazing fast.

I've tried:

  • BEGIN and COMMIT -- no change, because it's just 1 statement
  • PRAGMA journal_mode=PERSIST -- no change; apparently disk I/O isn't the problem?
  • removing the UPDATE statement -- that does wonders for the timing!, but it's not very persistent

For comparison, MySQL on the same system takes ~ 0.6 ms in a very similar database.

I don't need transactional security (ACID?) or whatever you call that. If the computer crashes during this query, I'm fine with losing all changes. MySQL (InnoDB) has an option for this: innodb_flush_log_at_trx_commit. Does SQLite have something like that?

I'm using sqlite-3.7.9, if that matters.

Rudie
  • Have you tried setting the [VACUUM](http://www.sqlite.org/lang_vacuum.html) parameters? – ldav1s Feb 05 '14 at 23:35
  • `VACUUM` would rebuild the db? But it's a tiny db (270 kb). The updated field isn't indexed. How would I execute a VACUUM, just to try? – Rudie Feb 05 '14 at 23:39
  • I did a VACUUM and now the db file is slightly smaller =) That's it. Without below answer it's still a very slow `UPDATE`. – Rudie Feb 05 '14 at 23:42
  • Just `VACUUM` in SQL. It may have an impact on update time, but may not. There's also a `PRAGMA` to auto vacuum. – ldav1s Feb 05 '14 at 23:43

4 Answers


Yes, SQLite has an option like MySQL's innodb_flush_log_at_trx_commit:

PRAGMA synchronous=OFF

and it works like a charm. No ACID, yes speed. For some incredible reason the UPDATE now takes < 1ms.

The journal_mode can also be tuned:

PRAGMA journal_mode=MEMORY
or
PRAGMA journal_mode=OFF

Both are very fast and not ACID. Rollback isn't an issue, so both are fine in this case. OFF is the fastest, because it doesn't create a journal at all.
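As a rough illustration (using Python's built-in sqlite3 module; the table and row counts are hypothetical stand-ins for the one in the question), the two pragmas can be combined like this:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical single-table database, stand-in for the ~250-row table.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit: each statement is its own transaction
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(i, "x") for i in range(250)])

# Trade durability for speed: no fsync() on commit, journal kept in memory.
conn.execute("PRAGMA synchronous=OFF")
conn.execute("PRAGMA journal_mode=MEMORY")

t0 = time.perf_counter()
conn.execute("UPDATE t SET val='y' WHERE id=69")
elapsed_ms = (time.perf_counter() - t0) * 1000

new_val = conn.execute("SELECT val FROM t WHERE id=69").fetchone()[0]
conn.close()
```

With these pragmas a crash can corrupt the file, so this only makes sense when the database is disposable, as in the question.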

Rudie
  • Be careful when using `synchronous=OFF` or `journal_mode=OFF`, since it can [corrupt a database](https://sqlite.org/howtocorrupt.html). – Guilherme Salomé Aug 27 '17 at 22:56

SQLite is a good option for handling lightweight datasets. Yes, it is much slower than other databases when inserting/updating data, but you can speed these operations up by committing in batches yourself. Please go through the demo code below. It uses Java with Spring's JdbcTemplate to perform the database operations. Handle the required exceptions using try-catch blocks.

// Obtain the underlying JDBC connection and batch all inserts in one transaction.
conn = DataSourceUtils.getConnection(jdbcTemplate.getDataSource());
conn.setAutoCommit(false);                  // start an explicit transaction
PreparedStatement stmt = conn.prepareStatement(query_string);
for (Object[] temp : argsListForInsertQuery)
{
    stmt.setString(1, (String) temp[0]);
    stmt.setString(2, (String) temp[1]);
    stmt.setString(3, (String) temp[2]);
    stmt.setString(4, (String) temp[3]);
    stmt.addBatch();                        // queue the row instead of executing it
}
stmt.executeBatch();                        // one round trip for all rows
conn.commit();                              // single fsync() for the whole batch
stmt.close();
conn.setAutoCommit(true);
conn.close();
gaurav14
  • I know that one. The SQLite website talks about its insert speed WITH transactions. My use case had only 1 query though. Or several unrelated. (Shouldn't `conn` have a type?) – Rudie Oct 13 '14 at 16:55

In my case I had a couple of foreign key constraints referencing that primary key from other (pretty large) tables. So, for each update, SQLite had to seq scan all of them to find referencing rows and to make sure those constraints are satisfied.

I suggest you create an index for each foreign key constraint on its source column(s). That way, on each update, SQLite will be able to use these indices to check the foreign key constraints much faster.

In my case updating my PK was redundant (a byproduct of the algorithm), so I did away with it and updates became many times faster. If you have a statement like the one below, you are (probably) just wasting resources on useless checks:

UPDATE table SET id=69, ... WHERE id=69;
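To illustrate the index suggestion (a hypothetical parent/child schema, sketched with Python's built-in sqlite3 module): indexing the foreign key's source column lets SQLite verify the constraint via an index lookup instead of scanning the whole referencing table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys=ON")  # SQLite only enforces FKs when this is on

conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id)
    )
""")
# Without this index, an UPDATE of parent.id must scan all of child
# to confirm no row still references the old value.
conn.execute("CREATE INDEX idx_child_parent_id ON child(parent_id)")

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (10, 1)")
conn.commit()

indexes = [row[1] for row in conn.execute("PRAGMA index_list('child')")]
conn.close()
```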

With PRAGMA synchronous = OFF one risks database corruption in case of e.g. power failure.

A way to achieve similar performance in a safer way is to set the database in WAL mode, disable auto checkpointing and use normal synchronisation:

PRAGMA journal_mode = WAL;
PRAGMA wal_autocheckpoint = 0;
PRAGMA synchronous = NORMAL;

In WAL mode, writes to the database go into a separate wal file. With PRAGMA synchronous = NORMAL; SQLite will not use fsync() during such writes, which is equivalent to PRAGMA synchronous = OFF;. By default, after every 1000 pages written to the wal, SQLite automatically syncs the wal with the main database file -- called checkpointing -- using fsync(). You disable automatic checkpointing with PRAGMA wal_autocheckpoint = 0;. Then, during idle time or at the end of the job, you manually sync the wal with the main database using pragma wal_checkpoint(TRUNCATE);.

Hence during normal operation in the above mode, you sacrifice durability for speed in the sense that you can lose information from the wal that is made after a checkpoint (e.g. due to a power failure), but your main database remains intact. And periodic checkpointing with pragma wal_checkpoint(TRUNCATE); happens in a safe way.
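A minimal sketch of the above setup, using Python's built-in sqlite3 module (the file path and table are hypothetical):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit

mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # writes now go to wal_demo.db-wal
conn.execute("PRAGMA wal_autocheckpoint=0")   # never checkpoint automatically
conn.execute("PRAGMA synchronous=NORMAL")     # no fsync() per commit in WAL mode

conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t (id, val) VALUES (1, 'a')")
conn.execute("UPDATE t SET val='b' WHERE id=1")

# During idle time or at the end of the job: fold the WAL back into
# the main database file, safely, with fsync().
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
conn.close()
```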

Davor Josipovic
  • Manually checkpointing is unacceptable for me. The only manual breaks/blocks I want is transactions, because every app and db has those. What if `wal_autocheckpoint` is still ON, is there still a perf benefit from `journal_mode = WAL` instead of `OFF`? – Rudie Jan 21 '22 at 16:45
  • I don't think there is. `journal_mode = OFF` is the fastest you can get, but you risk db corruption even with a mere application crash. All the others keep a rollback journal somewhere, hence need to write data twice. `WAL` can perform as fast because you can postpone the second write (= checkpointing). Note also that checkpointing will not block any transactions if implemented in its own process/thread. The only downside is that checkpointing will eat disk I/O, hence if you read/write at the same time, it will perform slower. (I know your case is specific and my answer is on the "safe" side.) – Davor Josipovic Jan 21 '22 at 18:57