
I am designing a measurement instrument whose software is written in C++/Qt 5.12 on a custom-built embedded Linux system (Buildroot). The data are time series and fall into 2 categories:

  • actual physical data, 1..3 fields, sampling period 5 min
  • housekeeping parameters (temperatures, flow rates, etc.), 5..10 fields, sampling period 1..10 sec

I have been using CSV files so far, and they do the job. Although the data are not relational and the data acquisition rate is low, I am looking into SQLite because:

  • reduced risk of producing corrupted files in the event of a crash, thanks to transactions
  • more flexibility to alter the data format in the long run, e.g. add a column, with less impact on processing software
  • SQLite is supported by Buildroot
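To illustrate the first point, here is a minimal sketch of committing a batch of readings atomically. It uses Python's built-in sqlite3 module for brevity (the instrument code itself would issue the same SQL through the Qt SQL module or the C API); the table layout and row values are invented for the demo:

```python
import sqlite3

# ":memory:" keeps the demo self-contained; on the instrument this
# would be a file path (e.g. "measurements.db", a made-up name).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE physical("
            "timestamp INTEGER PRIMARY KEY, fld1 REAL, fld2 REAL, fld3 REAL)")

# One batch of readings, committed atomically: after a crash the
# database contains either all of these rows or none of them,
# never a half-written record (unlike an interrupted CSV append).
rows = [(1_700_000_000 + i * 300, 1.0, 2.0, 3.0) for i in range(3)]
with con:  # BEGIN ... COMMIT; rolled back automatically on error
    con.executemany("INSERT INTO physical VALUES (?, ?, ?, ?)", rows)

count = con.execute("SELECT count(*) FROM physical").fetchone()[0]
print(count)  # 3
```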

Questions:

  1. Does SQLite look like a smart choice over CSV in this case?
  2. The instrument will be running 24/7 for years, so I guess I'll have to split the database into chunks (e.g. monthly) to keep each file reasonably small and for archiving. I wonder how easy that would be. Can it be automated with a cron job?
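For the second point, the kind of monthly roll-over I have in mind could look roughly like the following sketch (Python's sqlite3 module for brevity; the table and archive file names are invented): attach a fresh archive file, copy rows older than a cut-off into it, then delete them from the live database.

```python
import os
import sqlite3
import tempfile

# Main database with some readings (":memory:" and small integer
# timestamps keep the demo short; all names are illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE physical(timestamp INTEGER PRIMARY KEY, fld1 REAL)")
con.executemany("INSERT INTO physical VALUES (?, ?)",
                [(i, float(i)) for i in range(10)])
con.commit()

# Roll over: move everything older than the cut-off into a monthly
# archive file, then remove it from the live database.
cutoff = 5
archive_path = os.path.join(tempfile.mkdtemp(), "archive-2021-04.db")
con.execute(f"ATTACH DATABASE '{archive_path}' AS archive")
con.execute("CREATE TABLE archive.physical(timestamp INTEGER PRIMARY KEY, fld1 REAL)")
con.execute("INSERT INTO archive.physical SELECT * FROM physical WHERE timestamp < ?",
            (cutoff,))
con.execute("DELETE FROM physical WHERE timestamp < ?", (cutoff,))
con.commit()

remaining = con.execute("SELECT count(*) FROM physical").fetchone()[0]
archived = con.execute("SELECT count(*) FROM archive.physical").fetchone()[0]
print(remaining, archived)  # 5 5
```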

Thanks.

dpeng
    One is a file format, the other a database (which can also read CSV files). You can't compare them. `Although the data are not relational`: in that case you can't use CSV at all. A CSV file is a very *simple* format that only works with a single "table" and a fixed number of fields. It's equivalent to a single table in a database where all fields are strings. – Panagiotis Kanavos Apr 22 '21 at 10:58
    I would not switch to SQLite; it is much more difficult to look at the file and to repair it manually if it becomes corrupted. If you do not have to run SQL queries, it appears you are just adding a complicated layer. – Marco Apr 22 '21 at 10:59
  • @Marco who's going to look at a file stored on a device? Besides, it's a **lot** easier to corrupt a text file than a database. The only way to modify a file is to either rewrite the entire file or append lines. A simple power failure at the wrong time will corrupt it in a way that can't be recovered without human intervention. – Panagiotis Kanavos Apr 22 '21 at 11:01
  • Both CSV and SQLite have their place. SQLite is widely used in data acquisition and all kinds of devices. CSV is used when you only need to append lines to a file. That requires very few resources. Reading the data requires reading the entire file. Editing isn't possible, you have to rewrite the entire file *unless* records have a fixed size. SQLite on the other hand requires more resources, but as you mentioned, it's far more reliable. You can delete data if you need to. – Panagiotis Kanavos Apr 22 '21 at 11:06
  • See [Manipulation performance of Sqlite vs CSV file](https://stackoverflow.com/questions/40695905/manipulation-performance-of-sqlite-vs-csv-file/40696615#40696615). tl;dr Use SQLite. – Schwern Apr 23 '21 at 02:48

1 Answer


Does SQLite look like a smart choice over CSV in this case?

I'd suggest yes, mainly because you would probably want to do something with the data other than spend the rest of your life looking through it.

Perhaps you want some sort of aggregated stats (a summary: averages, maximum values, minimum values, perhaps to compare periods). SQLite can make that pretty easy and pretty efficient.
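For instance, assuming housekeeping rows keyed by a Unix-epoch timestamp (the column name `temp` and the sample values below are invented), such a summary is a single GROUP BY; sketched here with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE housekeeping(timestamp INTEGER PRIMARY KEY, temp REAL)")
# 10 minutes of 1 s samples with a repeating 20..29 pattern
con.executemany("INSERT INTO housekeeping VALUES (?, ?)",
                [(i, 20.0 + (i % 10)) for i in range(600)])

# Min, max and average per 5-minute (300 s) window, in one query
summary = con.execute("""
    SELECT timestamp / 300 AS window, min(temp), max(temp), avg(temp)
    FROM housekeeping GROUP BY window ORDER BY window
""").fetchall()
print(summary)  # two windows, each min 20.0, max 29.0, avg 24.5
```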

The instrument will be running 24/7 for years, so I guess I'll have to split the database into chunks (e.g. monthly) to keep the file reasonably small and for archiving. I wonder how easy that would be. Can it be automated with a cron job ?

No need for cron: utilise the power of SQLite itself. A TRIGGER could be handy.

Here's an example that shows a little of what you could do.

As you have 2 distinct sets of readings, physical and housekeeping, the example has a table for each:

  • the physical table has 1 column for the timestamp of the reading and 4 columns for the readings.
  • the housekeeping table has 1 column for the timestamp and 10 reading columns.

The example automatically generates data, just to load something so results can be shown. It has a control table used to determine how much data is inserted; the table has 1 row with 1 value (although it could have more rows), and this value is extracted to determine how much data is added.

  • with the value as 1000, 1000 physical readings will be added at 5-minute intervals (about 3.5 days' worth of data).
  • with the value as 1000, 300,000 rows will be added to the housekeeping table, i.e. 300 housekeeping rows for every 5-minute physical interval.

The example demonstrates automated, TRIGGER-based tidying up. It doesn't back up the data, but it clears old data from both tables (just an example showing that you can do things automatically). The TRIGGER is named auto_tidyup.

To show that the TRIGGER is being activated, it additionally records the start and end of the TRIGGER's processing (its WHEN clause limits how often it tries to do something). This data is stored in another table, namely tidyup_log.

  • The TRIGGER's WHEN clause has been set so that it fires during testing (after testing this would be changed to a suitable schedule, such as the first day of the month).

So in summary: 4 tables (1 for testing purposes only) and 1 trigger.

Once the data is loaded, it is used by 3 queries to extract useful data (well, sort of).

The example SQL (note that perhaps the most complicated SQL is the part that loads the test data) :-

DROP TABLE IF EXISTS physical;
DROP TABLE IF EXISTS housekeeping;
DROP TRIGGER IF EXISTS auto_tidyup;
DROP TABLE IF EXISTS tidyup_log;
DROP TABLE IF EXISTS just_for_load;
CREATE TABLE IF NOT EXISTS physical(timestamp INTEGER PRIMARY KEY, fld1 REAL, fld2 REAL, fld3 REAL, fld4 REAL);
CREATE TABLE IF NOT EXISTS housekeeping(timestamp INTEGER PRIMARY KEY, prm1 REAL, prm2 REAL, prm3 REAL, prm4 REAL, prm5 REAL, prm6 REAL, prm7 REAL, prm8 REAL, prm9 REAL, prm10 REAL);
CREATE TABLE IF NOT EXISTS tidyup_log (timestamp INTEGER, action_performed TEXT);
CREATE TRIGGER IF NOT EXISTS auto_tidyup AFTER INSERT ON physical 
    WHEN CAST(strftime('%d','now') AS INTEGER) = 23 /* <<<<<<<<<< TESTING SO GET HITS >>>>>>>>>>*/
    /*WHEN CAST(strftime('%d','now') AS INTEGER) = 1 */ /* IF TODAY FIRST DAY OF MONTH */
    BEGIN
        INSERT INTO tidyup_log VALUES (strftime('%s','now'),'TIDY Started');
        DELETE FROM physical WHERE timestamp < new.timestamp - (60 * 60 * 24 * 365 /*approx a year */); 
        DELETE FROM housekeeping WHERE timestamp < new.timestamp - (60 * 60 * 24 * 365);
        INSERT INTO tidyup_log VALUES (strftime('%s','now'),'TIDY ENDED');
    END
;
/* ONLY FOR LOADING Test Data controls number of rows added */
CREATE TABLE IF NOT EXISTS just_for_load (base_count INTEGER);
INSERT INTO just_for_load VALUES(1000); /* Number of physical rows to add 5 minutes e.g. 1000 is close to 3.5 days*/
WITH RECURSIVE counter(i) AS 
    (SELECT 1 UNION ALL SELECT i+1 FROM counter WHERE i < (SELECT sum(base_count) FROM just_for_load))
    INSERT INTO physical SELECT strftime('%s','now','+'||(i * 5)||' minutes'), random(),random(),random(),random() FROM counter
;
WITH RECURSIVE counter(i) AS 
    (SELECT 1 UNION ALL SELECT i+1 FROM counter WHERE i < (SELECT (sum(base_count) * 300) FROM just_for_load))
    INSERT INTO housekeeping SELECT strftime('%s','now','+'||(i)||' second'), random(),random(),random(),random(), random(),random(),random(),random(), random(),random() FROM counter
;

/* <<<<<<<<<< DATA LOADED SO EXTRACT IT >>>>>>>>> */
SELECT datetime(timestamp,'unixepoch'), fld1,fld2,fld3,fld4 FROM physical;
/* First query to basically show the 5 minute intervals (and lots of random values)*/

/* This query gets the sum and average of the 10 readings over a 5 minute window */
SELECT 
    'From '||datetime(min(timestamp),'unixepoch')||' To '||datetime(max(timestamp),'unixepoch') AS Range,
        sum(prm1) AS sumP1, avg(prm1) AS avgP1,
        sum(prm2) AS sumP2, avg(prm2) AS avgP2,
        sum(prm3) AS sumP3, avg(prm3) AS avgP3,
        sum(prm4) AS sumP4, avg(prm4) AS avgP4,
        sum(prm5) AS sumP5, avg(prm5) AS avgP5,
        sum(prm6) AS sumP6, avg(prm6) AS avgP6,
        sum(prm7) AS sumP7, avg(prm7) AS avgP7,
        sum(prm8) AS sumP8, avg(prm8) AS avgP8,
        sum(prm9) AS sumP9, avg(prm9) AS avgP9,
        sum(prm10) AS sumP10, avg(prm10) AS avgP10
FROM housekeeping GROUP BY timestamp / 300
;
/* This query shows that the TRIGGER is being activated (even though it does no deletions) */
SELECT * FROM tidyup_log;

/* Tidy up the Testing environment */
DROP TABLE IF EXISTS physical;
DROP TABLE IF EXISTS housekeeping;
DROP TRIGGER IF EXISTS auto_tidyup;
DROP TABLE IF EXISTS tidyup_log;
DROP TABLE IF EXISTS just_for_load;
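If it helps, the trigger mechanics above can also be exercised from the host language. Here is a stripped-down sketch using Python's sqlite3 module (an assumption for the demo; the instrument's C++/Qt code would issue the same SQL), with the WHEN clause forced true so the log rows appear:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE physical(timestamp INTEGER PRIMARY KEY, fld1 REAL);
CREATE TABLE tidyup_log(timestamp INTEGER, action_performed TEXT);
CREATE TRIGGER auto_tidyup AFTER INSERT ON physical
    WHEN 1 = 1  -- always true for the demo; gate on day-of-month in production
    BEGIN
        INSERT INTO tidyup_log VALUES (strftime('%s','now'), 'TIDY Started');
        DELETE FROM physical WHERE timestamp < new.timestamp - (60 * 60 * 24 * 365);
        INSERT INTO tidyup_log VALUES (strftime('%s','now'), 'TIDY ENDED');
    END;
""")

# A single insert fires the trigger once: one Started and one ENDED log row
con.execute("INSERT INTO physical VALUES (?, ?)", (1_700_000_000, 1.23))
log = [r[0] for r in
       con.execute("SELECT action_performed FROM tidyup_log ORDER BY rowid")]
print(log)  # ['TIDY Started', 'TIDY ENDED']
```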

Results

  1. Extract from the physical table (showing the data at 5-minute intervals, aka data you probably don't want to look at)

(screenshot: raw physical rows at 5-minute intervals)

  2. Extract more useful data: averages and sums of each of the 10 readings every 5 minutes

(screenshot: aggregated sums and averages per 5-minute window)

  • 1001 rows, because the data doesn't start and end exactly on a 5-minute boundary
  3. The tidyup log (to show the TRIGGER is being activated)

(screenshot: the tidyup_log rows)

  • a start and an end row for each physical insert (noting that the WHEN criterion has been set to fire on every insert), hence 2000 rows

Lastly, just to show the 300,000-row insert, part of the message log :-

WITH RECURSIVE counter(i) AS 
    (SELECT 1 UNION ALL SELECT i+1 FROM counter WHERE i < (SELECT (sum(base_count) * 300) FROM just_for_load))
    INSERT INTO housekeeping SELECT strftime('%s','now','+'||(i)||' second'), random(),random(),random(),random(), random(),random(),random(),random(), random(),random()FROM counter
> Affected rows: 300000
> Time: 1.207s
MikeT
  • Nice answer. Could you clarify what tool you used to produce the *"Results"* section, please? And what actual command you used to make these reports. Many thanks. – Mark Setchell Apr 27 '21 at 12:03
    @MarkSetchell the tool used was Navicat Essentials for SQLite, and the Windows Snipping Tool was used to capture the screens presented after running the SQL, with the exception of the extract from the message log, which was simply copied and pasted from the Message tab. – MikeT Apr 27 '21 at 19:46
  • Ok, many thanks for taking the time to answer. – Mark Setchell Apr 27 '21 at 20:18