
I have a small database on an embedded system with some tables that I want to limit to 1000 rows each. To do this, I have set up a trigger that is executed after every new row insertion. The trigger's code is:

CREATE TRIGGER after_insertion_trigger
AFTER INSERT ON "table"  -- "table" quoted because TABLE is a reserved keyword
BEGIN
    -- keep the first 1000 rows of the given ordering, delete the rest
    DELETE FROM "table"
    WHERE uuid IN (
        SELECT uuid FROM "table" ORDER BY sent ASC, timestamp DESC LIMIT -1 OFFSET 1000
    );
END;

The problem is that insertions into a full table take more than 3 minutes each when the trigger is in place, instead of the 0.4 seconds it takes to run an equivalent plain transaction of INSERT and DELETE statements.
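For reference, the plain transaction that completes in about 0.4 seconds looks roughly like this (a sketch: the column list and values in the INSERT are assumptions, since the table schema is not shown here):

BEGIN TRANSACTION;
-- hypothetical INSERT; the real column list is not shown above
INSERT INTO "table" (uuid, sent, timestamp)
    VALUES ('new-uuid', 0, 1325199999);
-- same pruning DELETE the trigger performs
DELETE FROM "table"
WHERE uuid IN (
    SELECT uuid FROM "table" ORDER BY sent ASC, timestamp DESC LIMIT -1 OFFSET 1000
);
COMMIT;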

The time needed to complete an insertion with the trigger also seems to increase as the number of rows in the database grows.

I have found many different threads on Stack Overflow and other sites about triggers, but none of them helped me solve the problem. I am using SQLite 3.7.9.

Does anybody know why I am getting such poor performance? I don't need great performance, because insertions will not occur very often, but 3 minutes for an insertion is unacceptable.

I think this might help you: http://stackoverflow.com/questions/2035670/limit-a-sqlite-tables-maximum-number-of-rows – Sergio Rosas Dec 29 '11 at 22:17

1 Answer


Finally, I found what the problem was.

The trigger was terribly slow because the uuid field was not indexed (the database schema was not optimized). After making uuid the table's primary key, insertions now take around 0.7 seconds.
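For anyone hitting the same issue, the fix amounts to something like this (a sketch: only the uuid column is known here, the remaining column definitions are assumptions):

-- declaring uuid as the primary key gives it an automatic unique index
CREATE TABLE "table" (
    uuid      TEXT PRIMARY KEY,
    sent      INTEGER,
    timestamp INTEGER
    -- remaining columns omitted
);

-- alternative with the same effect on the lookup: add a unique index
-- to the existing table without recreating it
CREATE UNIQUE INDEX idx_table_uuid ON "table"(uuid);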

This shows how important indexes are inside triggers, but I still don't understand why the unindexed performance was quite so poor. I think I should take a look at SQLite's implementation of triggers to find out.
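One way to check that the index is really being used, short of reading SQLite's source, is to run EXPLAIN QUERY PLAN on the trigger's DELETE (the exact output wording varies between SQLite versions):

EXPLAIN QUERY PLAN
DELETE FROM "table"
WHERE uuid IN (
    SELECT uuid FROM "table" ORDER BY sent ASC, timestamp DESC LIMIT -1 OFFSET 1000
);
-- with uuid indexed, the plan reports something like
--   SEARCH TABLE table USING INDEX sqlite_autoindex_table_1 (uuid=?)
-- without the index it falls back to
--   SCAN TABLE table
-- whose cost grows with the row count each time the trigger fires.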