
We recently moved from the SQL Anywhere database to plain vanilla SQLite3. In our application, the SQLite DB maintains client-side metadata, and each DB instance is used by a single client installation only. We are noticing a huge performance impact in DB calls: as we keep writing to the DB, operations become hopelessly slow, even though the DB file does not grow beyond 10 MB or so.

Our application connects to the DB via an ADO layer and used to work well with SQL Anywhere. After a certain threshold, all read and write operations become expensive.
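(For reference, the single biggest cause of slowdowns like this in SQLite is running each write in its own implicit transaction, which forces a journal/sync cycle per statement. A minimal sketch, in Python's `sqlite3` since the app's ADO-layer language is not shown, of batching writes into one explicit transaction; the table and file names are made up for illustration:)

```python
import os
import sqlite3
import tempfile

# Hypothetical metadata table, just to illustrate the batching pattern.
db_path = os.path.join(tempfile.mkdtemp(), "meta.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE meta (key TEXT PRIMARY KEY, value TEXT)")

rows = [("k%d" % i, "v%d" % i) for i in range(1000)]

# Slow pattern: looping con.execute("INSERT ...") with autocommit,
# one transaction (and one sync) per row.
# Faster pattern: one explicit transaction around the whole batch.
with con:  # emits BEGIN ... COMMIT around the block
    con.executemany("INSERT INTO meta VALUES (?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM meta").fetchone()[0]
print(count)
con.close()
```

The same idea applies through ADO: wrap bursts of writes in an explicit transaction on the connection rather than letting each statement autocommit.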

We have tried the following PRAGMA options, but with minimal effect:

    PRAGMA synchronous = OFF;
    PRAGMA journal_mode = OFF;
    PRAGMA cache_size = 10000;
    PRAGMA temp_store = 2;
    PRAGMA read_uncommitted = True;
    PRAGMA count_changes = OFF;
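(One thing to check: most of these PRAGMAs, e.g. `synchronous`, `cache_size`, and `temp_store`, apply only to the connection that issues them, so they must be re-issued after every connect; if the ADO layer opens fresh connections, the settings may silently not be in effect. A sketch, again in Python's `sqlite3` for illustration, of setting them and reading them back to verify:)

```python
import sqlite3

# Per-connection PRAGMAs: these must be run on each new connection.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA synchronous = OFF")
con.execute("PRAGMA cache_size = 10000")
con.execute("PRAGMA temp_store = 2")

# Read the settings back to confirm they actually took effect.
sync = con.execute("PRAGMA synchronous").fetchone()[0]   # 0 means OFF
cache = con.execute("PRAGMA cache_size").fetchone()[0]
temp = con.execute("PRAGMA temp_store").fetchone()[0]
print(sync, cache, temp)
con.close()
```

Note also that `synchronous = OFF` together with `journal_mode = OFF` trades away crash safety (a power loss can corrupt the file), so they are a diagnostic tool here, not a recommended configuration.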

Please suggest what we could do to restore performance.

Jaap
  • This might be useful: http://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite – jason.kaisersmith Feb 24 '17 at 10:39
  • And this: http://stackoverflow.com/questions/784173/what-are-the-performance-characteristics-of-sqlite-with-very-large-database-file – jason.kaisersmith Feb 24 '17 at 10:39
  • And a blog: http://blog.devart.com/increasing-sqlite-performance.html – jason.kaisersmith Feb 24 '17 at 10:40
  • Your problem is probably not the write speed itself (and your settings are unsafe; don't use them) but the actual queries you're using. Show them. – CL. Feb 24 '17 at 11:12
  • Possible duplicate of [What are the performance characteristics of sqlite with very large database files?](http://stackoverflow.com/questions/784173/what-are-the-performance-characteristics-of-sqlite-with-very-large-database-file) – codeepic Feb 24 '17 at 13:42

0 Answers