I am running a Python script that inserts data into a SQLite database every minute; there are millions of rows in the database.
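In case it's relevant, the insert loop looks roughly like this (a simplified sketch; the table, column names and dummy data source are placeholders, not my actual code):

```python
import random
import sqlite3
import time

DB_PATH = "data.db"


def insert_reading(value):
    # Open the database, insert one row, commit and close each minute.
    conn = sqlite3.connect(DB_PATH)
    try:
        conn.execute(
            "INSERT INTO readings (recorded_at, value) VALUES (?, ?)",
            (int(time.time()), value),
        )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    # Create the table on first run (recorded_at has a UNIQUE constraint).
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(recorded_at INTEGER UNIQUE, value REAL)"
    )
    conn.commit()
    conn.close()

    while True:
        insert_reading(random.random())  # placeholder for the real data source
        time.sleep(60)
```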
Is it possible that once the database grows to a certain size, it becomes corrupt (or inefficient) such that no more data can be inserted?
For example, when I FTP'd into my remote server today, I noticed that the database was 590,052,352 bytes in size. However, when I refreshed a few minutes later, the database hadn't grown in size (which it should have).
When looking at the 'last modified' timestamp of the database, I realised that no data had been inserted for more than 40 hours. Consequently, I had a look at the error logs, but there was nothing in there to indicate any issues.
I then renamed the database from data.db to data-1521564403.db and created a fresh database named data.db. After doing this, the Python script was able to insert data into the new database once again, without any issues.
I noticed a similar issue last week. In that instance, the database reached a size of 632 MB, after which errors started appearing in the error log saying that duplicate data was being inserted into a column with a UNIQUE constraint. However, I know this could not have been the case, because the Python script checks that no duplicate data is inserted (see the sketch below). I followed the same procedure of renaming the database and creating a new one, and the issue was resolved; data could be inserted once again.
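For reference, the duplicate check works roughly like this (again a simplified sketch with placeholder names): the script looks the value up first and only inserts it when it isn't already there, so the UNIQUE constraint on recorded_at should never be hit.

```python
import sqlite3


def insert_if_new(conn, recorded_at, value):
    # Only insert when no row with this recorded_at exists yet,
    # so the UNIQUE constraint should never be violated.
    exists = conn.execute(
        "SELECT 1 FROM readings WHERE recorded_at = ?", (recorded_at,)
    ).fetchone()
    if exists is None:
        conn.execute(
            "INSERT INTO readings (recorded_at, value) VALUES (?, ?)",
            (recorded_at, value),
        )
        conn.commit()


if __name__ == "__main__":
    # Quick demo against an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE readings (recorded_at INTEGER UNIQUE, value REAL)"
    )
    insert_if_new(conn, 1521564403, 42.0)
    insert_if_new(conn, 1521564403, 42.0)  # silently skipped, no IntegrityError
    print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # -> 1
```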
So my question is, do SQLite databases stop working as they should (or become inefficient) after they reach a certain size or contain a certain number of rows?
Thanks!