
I am using SQLite with SQLAlchemy on an embedded Linux system (think Raspberry Pi) to do some data logging.

The DB schema covers configuration of sensors and ADC boards. It has a measurement table with columns for timestamp, value, adc_board_id (FK), and channel_id (FK).

Measurements are saved to the DB every minute.

However I want to update an LCD with live measurements (every second).

The LCD app is a separate process and gets data to display from the DB.

Is there a way to have a special table, just for live measurements, that is RAM-based only?

I want it accessible via the DB (like shared memory) but never to persist (i.e. never written to disk).

NOTE: I want most of the tables to persist on disk, but specify one (or more) special tables that never get written to disk.

Would that even work with separate processes accessing the DB?

The table will be written by only one process, but may be read by many processes, possibly via a View (once I learn how to do that with SQLAlchemy).
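For what it's worth, such a view can be created with plain SQL (SQLAlchemy has no first-class view construct, so raw DDL is the usual route). A rough sketch using the stdlib sqlite3 module, with illustrative table/column names; it relies on SQLite's documented behaviour that a bare column next to MAX() comes from the row holding the maximum:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (timestamp REAL, value REAL, channel_id INTEGER)")

# One row per channel, carrying the value from the newest measurement.
conn.execute("""CREATE VIEW latest AS
    SELECT channel_id, value, MAX(timestamp) AS timestamp
    FROM measurement GROUP BY channel_id""")

conn.execute("INSERT INTO measurement VALUES (1.0, 3.1, 1)")
conn.execute("INSERT INTO measurement VALUES (2.0, 3.2, 1)")
row = conn.execute("SELECT value FROM latest").fetchone()
print(row[0])  # prints 3.2 -- the most recent value for channel 1
```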

I did see some SQLite documentation that states:

The database in which the new table is created. Tables may be created in the main database, the temp database, or in any attached database.

Maybe using the temp database could do the trick?

Would it be accessible via other processes?

Alternatively, an attached database that lives on a ramdisk (/dev/shm/...)?
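A rough sketch of what that attached-database idea might look like (stdlib sqlite3; the paths are illustrative, with a temp dir standing in for flash and /dev/shm so the sketch runs anywhere):

```python
import sqlite3, os, tempfile

# On the device the main DB would sit on flash storage and live.db
# under /dev/shm (tmpfs); here a temp dir stands in for both.
base = tempfile.mkdtemp()
conn = sqlite3.connect(os.path.join(base, "main.db"))
conn.execute("ATTACH DATABASE ? AS live", (os.path.join(base, "live.db"),))

# Tables created in the attached schema go to the attached file only,
# never to the main DB file.
conn.execute("""CREATE TABLE live.measurement (
    timestamp REAL, value REAL, adc_board_id INTEGER, channel_id INTEGER)""")
conn.execute("INSERT INTO live.measurement VALUES (?, ?, ?, ?)",
             (1.0, 3.3, 1, 2))
conn.commit()
```

Other processes could then attach the same /dev/shm file and read from it.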

Are there any other techniques with DBs/SQLite/SQLAlchemy to achieve the same outcome?

BrendanSimon
  • A quick follow-up question to my answer below: Have you thought about using IPC? – exhuma Sep 11 '20 at 06:11
  • Thanks exhuma. Yes I did consider IPC and it's still an option. e.g. Using shared memory and pickle/json a dictionary of data for other processes to unpickle/unjson and use. I figured the DB option would still be a cleaner option if it was possible. – BrendanSimon Sep 11 '20 at 23:16
  • Personally, I would prefer IPC for this. It is simpler to implement, is a more "standard" approach and hence more maintainable, and has fewer "moving parts". Not to mention easier to set up, deploy and run. I would avoid pickle though; it is too implementation-dependent. I would choose one of JSON, MessagePack, protobuf or even X.690. The choice depends on simplicity/popularity vs payload-size/transcoding-performance. – exhuma Sep 12 '20 at 07:30
  • CBOR is a nice alternative too ;-) It's fast and pretty much a drop-in replacement for the json library. `cbor2` on PyPi. – BrendanSimon Sep 12 '20 at 11:35
  • CBOR does look interesting. Thanks for pointing it out to me :) – exhuma Sep 12 '20 at 12:03

1 Answer


Storing an SQLite DB in memory while sharing it with another process

You can do this with a "ramdisk". It works just like a normal filesystem, but uses RAM as the physical storage. You can create one with the tmpfs filesystem:

mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk

After that you can simply store your SQLite database in /mnt/ramdisk and access it from another application.
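As a sketch of the one-writer/one-reader setup on such a file (stdlib sqlite3; a temp dir stands in for /mnt/ramdisk so this runs anywhere):

```python
import sqlite3, os, tempfile

# On the Pi both processes would open /mnt/ramdisk/live.db;
# here a temp dir stands in for the ramdisk.
db = os.path.join(tempfile.mkdtemp(), "live.db")

writer = sqlite3.connect(db)          # e.g. the data-logger process
writer.execute("CREATE TABLE live (timestamp REAL, value REAL)")
writer.execute("INSERT INTO live VALUES (1.0, 3.3)")
writer.commit()

reader = sqlite3.connect(db)          # e.g. the LCD process
print(reader.execute("SELECT value FROM live").fetchone()[0])  # prints 3.3
```

With SQLAlchemy the equivalent engine URL would be `sqlite:////mnt/ramdisk/live.db` (four slashes: three for the URL scheme plus one for the absolute path).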

Can two processes access the same database?

Technically, yes. But with SQLite you should ensure that only one process has write access at a time. That seems feasible in your case.

exhuma
  • Thanks exhuma. I was aware of the ramdisk option from other SO responses, however I want most of the tables to persist (written to disk), but not some specified tables. I have updated the question with that extra constraint. – BrendanSimon Sep 11 '20 at 23:09
  • SQLite supports three different threading modes (single-thread, multi-thread, and serialized), with serialized being the default. I assume serialized will take care of multiple writes/reads across processes – yes/no? And since I only have one process doing the write it should be ok – yes/no? – BrendanSimon Sep 11 '20 at 23:19
  • @BrendanSimon re: "I want most of the tables to persist (written to disk), but not some specified tables" - Possibly related question [here](https://stackoverflow.com/q/6824717/2144390). – Gord Thompson Sep 11 '20 at 23:55