
I'm using an sqlite3 database to record data every second. The interface to it is provided by Flask-SQLAlchemy.

This can work fine for a couple of months, but eventually (as the .db file approaches 8 GB), an error prevents any more data from being written to the database:
Failed to commit: (sqlite3.OperationalError) disk I/O error

The journal file does not seem to be the issue here: if I restart the application and set PRAGMA journal_mode=TRUNCATE, the journal file is created, but the disk I/O error persists.
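For reference, this is roughly how I tested that outside the app, using the plain stdlib sqlite3 module rather than Flask-SQLAlchemy ("app.db" here is a placeholder for my real database path):

import sqlite3

conn = sqlite3.connect("app.db")
# The pragma returns the journal mode that was actually set.
mode = conn.execute("PRAGMA journal_mode=TRUNCATE").fetchone()[0]
print(mode)  # 'truncate' -- the journal file gets created,
             # but writes from the app still fail with the I/O error
conn.close()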

Here's the .dbinfo output (obtained from the sqlite3.exe shell):

database page size:  1024
write format:        1
read format:         1
reserved bytes:      0
file change counter: 5200490
database page count: 7927331
freelist page count: 0
schema cookie:       12
schema format:       4
default cache size:  0
autovacuum top root: 0
incremental vacuum:  0
text encoding:       1 (utf8)
user version:        0
application id:      0
software version:    3008011
number of tables:    6
number of indexes:   7
number of triggers:  0
number of views:     0
schema size:         5630
data version         2

However, this worked:

  1. I made a copy of the .db file (call the original app.db and the copy copy.db).
  2. I renamed app.db to orig.db.
  3. I renamed copy.db to app.db (effectively swapping them, so the copy became the live database).

When I started my application again, it was able to write to app.db once more! So a copy of the database was writable even though the original was not.

The drive is an SSD (Samsung 850 EVO mSATA). I wonder if that's got something to do with it? Does anyone have any ideas on how I can prevent this from happening again?

EDIT: I've used the sqlite3.exe CLI to execute an INSERT INTO statement manually, and this completed successfully (and wrote to the disk). However, when I re-ran my Flask-SQLAlchemy interface to write to it, it still raised the disk I/O error.
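For context, the write path that fails looks roughly like this (the model and column names are stand-ins, not my real schema):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app)

class Reading(db.Model):  # placeholder model
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.Float)

with app.app_context():
    db.session.add(Reading(value=1.0))
    db.session.commit()  # this is where the disk I/O error is raised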

John Go-Soco
  • 3.8.11 is really old. Upgrade the version of sqlite being used to the current version and see if the error still happens? Also if you can get the [extended error code](https://www.sqlite.org/rescode.html#extrc) that would be useful as it'll give a much more specific error. – Shawn Aug 16 '19 at 11:09
  • Probably wouldn't hurt to run a [PRAGMA integrity_check](https://www.sqlite.org/pragma.html#pragma_integrity_check) on the database too. – Shawn Aug 16 '19 at 11:16
  • @Shawn The integrity_check returns ok. Frustratingly, I can't get the extended error code simply because that isn't exposed by the sqlite3 python module (still an open case, I think: https://github.com/python/cpython/pull/1108), but I'll try to do a manual SQL insert using the CLI later. – John Go-Soco Aug 16 '19 at 11:27
  • I've updated the question regarding doing a manual SQL insert. Very very odd that the manual insert was a success, and also especially weird that making a copy of the file and using that copy also allowed writing to the db. – John Go-Soco Aug 16 '19 at 14:34
  • I've also now updated the sqlite3.dll version to 3.29 but the error persists. – John Go-Soco Aug 19 '19 at 09:09
  • I would personally check the SSD health status, e.g. via CrystalDiskInfo. – tukan Aug 19 '19 at 12:26

1 Answer

UPDATE:

A colleague pointed out that this might be related to another question: https://stackoverflow.com/a/49506243/3274353

I now strongly suspect this is a filesystem fragmentation issue: in my system, the database file is updated constantly, alongside several other files that are also being written to.

So, to reduce fragmentation, I'm now getting the database to pre-allocate some disk space up front, using the approach from the aforementioned answer: https://stackoverflow.com/a/49506243/3274353

Something like this (note that each zeroblob must stay below SQLite's maximum blob length, 1,000,000,000 bytes by default):

CREATE TABLE t(x);
INSERT INTO t VALUES(zeroblob(500*1024*1024));  -- 500 MB
DROP TABLE t;

To know whether this needs to be done, I check the freelist_count pragma, which returns the number of unused pages in the database file (see the sketch below):

PRAGMA freelist_count;
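Here's a sketch of how I drive this from Python; the helper name and the thresholds are my own choices (the page size comes from the .dbinfo output above), so adjust to taste:

import sqlite3

PAGE_SIZE = 1024  # "database page size" from .dbinfo
# Keep at least ~100 MB worth of pages on the freelist (arbitrary threshold).
MIN_FREE_PAGES = (100 * 1024 * 1024) // PAGE_SIZE

def ensure_preallocated(db_path, chunk_bytes=500 * 1024 * 1024):
    """Top up the freelist with roughly chunk_bytes of pages when it runs low."""
    conn = sqlite3.connect(db_path)
    try:
        free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
        if free_pages < MIN_FREE_PAGES:
            # Inserting and dropping a big zeroblob grows the file once;
            # with auto_vacuum off, the freed pages stay on the freelist
            # and are reused by subsequent inserts.
            conn.execute("CREATE TABLE t(x)")
            conn.execute("INSERT INTO t VALUES (zeroblob(?))", (chunk_bytes,))
            conn.execute("DROP TABLE t")
            conn.commit()
    finally:
        conn.close()

ensure_preallocated("app.db")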

John Go-Soco