
I'm writing an application that must log information fairly frequently, say, twice a second. I wish to save the information to an SQLite database; however, I don't mind committing changes to the disk only once every ten minutes.

Executing my queries against a file-backed database takes too long and makes the computer lag.

One possible solution is to use an in-memory database (the data will fit, no worries) and synchronize it to the disk from time to time.

Is that possible? Is there a better way to achieve it (can you tell SQLite to commit to disk only after every X queries)?

Can I solve this with Qt's SQL wrapper?

Elazar Leibovich
  • Why not a ramdisk? Then simply copy it back every X seconds and in the teardown. It's much better to deal with a file than with queries, IMHO. The ramdisk file could be retrieved even if the db fails; it's just a file. – m3nda Mar 06 '23 at 20:21

3 Answers


Let's assume you have an on-disk database called 'disk_logs' with a table called 'events'. You could attach an in-memory database to your existing database:

ATTACH DATABASE ':memory:' AS mem_logs;

Create a table in that database (which would be entirely in-memory) to receive the incoming log events:

CREATE TABLE mem_logs.events(a, b, c);

Then transfer the data from the in-memory table to the on-disk table during application downtime:

INSERT INTO disk_logs.events SELECT * FROM mem_logs.events;

And then delete the contents of the existing in-memory table. Repeat.
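
A minimal sketch of that flush step: wrapping the copy and the delete in a single transaction keeps the two tables consistent even if the application dies mid-flush (table names are from the example above):

BEGIN;
INSERT INTO disk_logs.events SELECT * FROM mem_logs.events;
DELETE FROM mem_logs.events;
COMMIT;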

This is pretty complicated, though... If your records span multiple tables and are linked together with foreign keys, it might be a pain to keep these in sync as you copy from the in-memory tables to the on-disk tables.

Before attempting something (uncomfortably over-engineered) like this, I'd also suggest trying to make SQLite go as fast as possible. SQLite should easily be able to handle more than 50K record inserts per second. A few log entries twice a second should not cause any significant slowdown.
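
For example, these two pragmas commonly help write-heavy logging workloads, though you should measure against your own data and weigh the durability trade-off (WAL requires SQLite 3.7.0 or later):

PRAGMA journal_mode = WAL;   -- write-ahead log: commits append instead of rewriting pages
PRAGMA synchronous = NORMAL; -- fewer fsync() calls; recent commits may be lost on power failure, but the database stays consistent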

Mike Willekes

If you're executing each insert within its own transaction, that could be a significant contributor to the slowdowns you're seeing. Perhaps you could (a sketch follows below):

  • Count the number of records inserted so far
  • Begin a transaction
  • Insert your record
  • Increment count
  • Commit/end transaction when N records have been inserted
  • Repeat

The downside is that if the system crashes during that period, you risk losing the uncommitted records (but if you were willing to use an in-memory database, then it sounds like you're OK with that risk).
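
Here's a rough C++ sketch of that counting scheme using Qt's SQL classes (the asker mentioned Qt's wrapper). The events(a, b, c) table matches the accepted answer, and the batch size of 1000 is an arbitrary assumption:

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

class BatchedLogger {
public:
    explicit BatchedLogger(QSqlDatabase db, int batchSize = 1000)
        : m_db(db), m_batchSize(batchSize), m_count(0) {}

    void logEvent(const QVariant &a, const QVariant &b, const QVariant &c) {
        if (m_count == 0)
            m_db.transaction();             // begin a new batch
        QSqlQuery query(m_db);
        query.prepare("INSERT INTO events(a, b, c) VALUES (?, ?, ?)");
        query.addBindValue(a);
        query.addBindValue(b);
        query.addBindValue(c);
        query.exec();
        if (++m_count >= m_batchSize) {     // commit every N inserts
            m_db.commit();
            m_count = 0;
        }
    }

    // Call once at shutdown so the tail of the log isn't lost.
    void flush() {
        if (m_count > 0) {
            m_db.commit();
            m_count = 0;
        }
    }

private:
    QSqlDatabase m_db;
    int m_batchSize;
    int m_count;
};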

Mike Willekes

A brief search of the SQLite documentation turned up nothing useful (it wasn't likely and I didn't expect it).

Why not use a background thread that wakes up every 10 minutes, copies all of the log rows from the in-memory database to the external database, and then deletes them from the in-memory database? When your program is ready to end, wake the background thread up one last time to save the last logs, and then close all of the connections.
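
A rough Qt sketch of that idea, assuming the mem_logs/disk_logs schema from the answer above; QTimer stands in for the wake-up logic, so the object needs to live on a thread with a running event loop. Call flush() one last time at shutdown:

#include <QObject>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QTimer>

class LogFlusher : public QObject {
    Q_OBJECT
public:
    explicit LogFlusher(QSqlDatabase db, QObject *parent = nullptr)
        : QObject(parent), m_db(db) {
        connect(&m_timer, &QTimer::timeout, this, &LogFlusher::flush);
        m_timer.start(10 * 60 * 1000);      // wake up every 10 minutes
    }

public slots:
    void flush() {
        m_db.transaction();                 // copy and clear atomically
        QSqlQuery query(m_db);
        query.exec("INSERT INTO disk_logs.events SELECT * FROM mem_logs.events");
        query.exec("DELETE FROM mem_logs.events");
        m_db.commit();
    }

private:
    QSqlDatabase m_db;
    QTimer m_timer;
};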

Craig Trader
  • Because then I need to create the logs, remember which part of the log I've already saved, maintain the background thread, and, generally speaking, implement the synchronization myself. That's possible, but I want to avoid it. – Elazar Leibovich Jun 22 '10 at 20:31