I have a table that is referenced by more than 1,800 other tables, and the number of referencing tables is likely to grow over time. The table currently has around 180,000 records. Deleting 13 rows takes about 12 seconds, and deleting 700 rows takes around 12 minutes.
Database Design:
We have a data logging application with any number of forms through which operators can enter data for thousands of parameters. The database design is inherited from an existing visualization application, which had a design requirement that each parameter's data be stored in a separate table. The data is collected against events that are created at a defined frequency (hourly, daily, monthly, and so on). So we now have one table per parameter, each with a foreign key column for the event as well as for the user; a rough sketch of the layout is shown below.
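This is only a minimal sketch, not the actual DDL: tbl_schedule and event_id_fk are taken from the plan further down, tbl_103_186 is inferred from the trigger constraint names in that plan, and every other table, column, and index name is a placeholder.

    -- The heavily referenced table (~180,000 rows); deletes from this table are slow.
    CREATE TABLE tbl_schedule (
        schedule_id bigint PRIMARY KEY,
        event_id_fk bigint NOT NULL   -- ties the row to an event created hourly/daily/monthly
    );

    -- Placeholder users table.
    CREATE TABLE tbl_user (
        user_id bigint PRIMARY KEY
    );

    -- One table per parameter (more than 1,800 of these at present), each with a
    -- foreign key to the schedule/event row and to the user who entered the value:
    CREATE TABLE tbl_103_186 (
        reading        numeric,
        schedule_id_fk bigint REFERENCES tbl_schedule (schedule_id),
        user_id_fk     bigint REFERENCES tbl_user (user_id)
    );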
In the future I can see the same problem happening when users are deleted as well.
As suggested in postgresql-and-primary-key-foreign-key-indexing, I have tried indexing all of the foreign key columns, but it made no difference to the delete performance. The indexes were created along the lines shown below.
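A minimal sketch of the indexing, reusing the placeholder names from the layout above (the real index and column names differ):

    -- One index per foreign key column on each referencing table.
    CREATE INDEX idx_tbl_103_186_schedule ON tbl_103_186 (schedule_id_fk);
    CREATE INDEX idx_tbl_103_186_user     ON tbl_103_186 (user_id_fk);
    -- ...repeated for every one of the ~1,800 parameter tables.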
I have posted other details regarding my problem here.
Please suggest how to optimize the performance of the delete. Should I drop all of the foreign key constraints?
edit: EXPLAIN ANALYZE on the delete returns something like this:

    Delete on tbl_schedule  (cost=0.42..41.85 rows=13 width=6) (actual time=4.017..4.017 rows=0 loops=1)
      ->  Index Scan using unique_key on tbl_schedule  (cost=0.42..41.85 rows=13 width=6) (actual time=0.025..0.047 rows=13 loops=1)
            Index Cond: (event_id_fk = 259)
    Planning Time: 2.022 ms
    Trigger for constraint tbl_103_186_fkey2: time=0.580 calls=13
    Trigger for constraint tbl_103_187_fkey2: time=0.388 calls=13
    ...

and so on, with one trigger line for each of the ~1,800 referencing tables.
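For reference, the statement behind this plan is roughly the following; the exact delete is issued by the application, so treat this form as an assumption (the value 259 is taken from the index condition above):

    EXPLAIN ANALYZE
    DELETE FROM tbl_schedule
    WHERE event_id_fk = 259;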
Thanks in advance.