I'm using an sqlite3 database to record data every second. The interface to it is provided by Flask-SQLAlchemy.
This can work fine for a couple of months, but eventually (as the .db file approaches 8 GB), an error prevents any more data from being written to the database:
Failed to commit: (sqlite3.OperationalError) disk I/O error
The journal file does not seem to be the issue here: if I restart the application with PRAGMA journal_mode=TRUNCATE, the journal file is created, but the disk I/O error persists.
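For reference, this is roughly how the pragma can be applied to every connection the engine opens (a minimal sketch, not my full app; the engine URL and listener name are illustrative):

```python
import sqlite3  # DB-API driver SQLAlchemy uses for sqlite:// URLs

from sqlalchemy import create_engine, event

# Illustrative path; in the real app this comes from the Flask config.
engine = create_engine("sqlite:///app.db")

@event.listens_for(engine, "connect")
def set_truncate_journal(dbapi_connection, connection_record):
    # Runs once for each fresh DB-API connection the pool creates,
    # so every connection uses the TRUNCATE journal mode.
    dbapi_connection.execute("PRAGMA journal_mode=TRUNCATE")
```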
Here's the .dbinfo (obtained from sqlite3.exe):
database page size: 1024
write format: 1
read format: 1
reserved bytes: 0
file change counter: 5200490
database page count: 7927331
freelist page count: 0
schema cookie: 12
schema format: 4
default cache size: 0
autovacuum top root: 0
incremental vacuum: 0
text encoding: 1 (utf8)
user version: 0
application id: 0
software version: 3008011
number of tables: 6
number of indexes: 7
number of triggers: 0
number of views: 0
schema size: 5630
data version: 2
However this worked:
- I made a copy of the .db file (call them app.db and copy.db).
- I renamed app.db to orig.db.
- I renamed copy.db to app.db (so, effectively, the copy became the app's database).
When I started my application again, it was able to write to app.db once more. In other words, writes succeeded against a copy of the very same database file.
The drive is an SSD (Samsung 850 EVO mSATA). I wonder if that's got something to do with it? Does anyone have any ideas on how I can prevent it from happening again?
EDIT: I've used the sqlite3.exe CLI to execute an INSERT INTO command manually, and this actually completed successfully (and wrote to the disk). However, when I re-ran my Flask-SQLAlchemy interface to write to it, it still came up with the disk I/O error.
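The manual check was equivalent to doing this directly through Python's sqlite3 module, bypassing SQLAlchemy entirely (table and column names here are placeholders, not my actual schema):

```python
import sqlite3

# Same write path as the sqlite3.exe CLI test: plain DB-API, no ORM.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('2024-01-01T00:00:00', 1.0)")
conn.commit()   # this succeeded, just as the CLI insert did
conn.close()
```

So the raw write path works; only the Flask-SQLAlchemy connection hits the disk I/O error.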