
I have a table that is referenced by more than 1,800 other tables, and the number of references is likely to grow over time. The table currently holds around 180,000 records. Deleting just 13 rows takes 12 seconds, and deleting 700 rows takes around 12 minutes.

Database Design:

We have a data logging application that can have any number of forms through which operators enter data for thousands of parameters. The database design is inherited from an existing visualization application, which had a design requirement that each parameter's data be stored in a separate table. Data is collected against events that are created at a defined frequency (hourly, daily, monthly, and so on). So we now have one table per parameter, each with a foreign key column for the event as well as one for the user.
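
For illustration, each per-parameter table looks roughly like this (a minimal sketch; every name except tbl_schedule is hypothetical, with the table name modeled on the constraint names in the plan further down):

    -- Minimal sketch of one per-parameter table. All names except
    -- tbl_schedule are hypothetical; tbl_103_186 is modeled on the
    -- constraint names visible in the EXPLAIN output below.
    CREATE TABLE tbl_103_186 (
        id              bigserial PRIMARY KEY,
        schedule_id_fk  bigint NOT NULL REFERENCES tbl_schedule, -- event/schedule FK
        user_id_fk      bigint NOT NULL REFERENCES tbl_users,    -- user FK
        value           numeric                                  -- logged parameter value
    );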

In the future I can see the same problem occurring on user deletion as well.

I have tried indexing all the foreign key columns that reference this table, as suggested in postgresql-and-primary-key-foreign-key-indexing, but there is no improvement in performance.
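
For reference, a catalog query along these lines can verify that every foreign key pointing at tbl_schedule has a supporting index on the referencing side (a sketch; it assumes single-column foreign keys and only checks the leading index column):

    -- Sketch: find FK constraints that reference tbl_schedule but whose
    -- referencing column does not lead any index on the child table.
    -- Assumes single-column foreign keys.
    SELECT c.conrelid::regclass AS referencing_table,
           a.attname            AS fk_column,
           format('CREATE INDEX ON %s (%I);',
                  c.conrelid::regclass, a.attname) AS suggested_index
    FROM pg_constraint c
    JOIN pg_attribute a
      ON a.attrelid = c.conrelid
     AND a.attnum   = c.conkey[1]
    WHERE c.contype = 'f'
      AND c.confrelid = 'tbl_schedule'::regclass
      AND NOT EXISTS (
            SELECT 1
            FROM pg_index i
            WHERE i.indrelid = c.conrelid
              AND i.indkey[0] = a.attnum
          );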

I have posted other details regarding my problem here.

Please suggest a way to optimize the performance of these deletes. Should I drop all the constraints?
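
For context, dropping and later restoring a single constraint would look roughly like this (a sketch only; the constraint name is taken from the plan below, while the table and column names are assumptions):

    -- Sketch only: drop one FK before a bulk delete, then re-add it as
    -- NOT VALID and validate in one pass instead of row by row.
    -- tbl_103_186 and schedule_id_fk are assumed names; the constraint
    -- name comes from the trigger lines in the plan below.
    ALTER TABLE tbl_103_186 DROP CONSTRAINT tbl_103_186_fkey2;

    -- ... run the bulk DELETE on tbl_schedule here ...

    ALTER TABLE tbl_103_186
      ADD CONSTRAINT tbl_103_186_fkey2
      FOREIGN KEY (schedule_id_fk) REFERENCES tbl_schedule
      NOT VALID;
    -- VALIDATE will fail if the delete left orphaned rows behind.
    ALTER TABLE tbl_103_186 VALIDATE CONSTRAINT tbl_103_186_fkey2;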

Edit: EXPLAIN ANALYZE returns something like this:

"Delete on tbl_schedule  (cost=0.42..41.85 rows=13 width=6) (actual time=4.017..4.017 rows=0 loops=1)"
"  ->  Index Scan using unique_key on tbl_schedule  (cost=0.42..41.85 rows=13 width=6) (actual time=0.025..0.047 rows=13 loops=1)"
"        Index Cond: (event_id_fk = 259)"
"Planning Time: 2.022 ms"
"Trigger for constraint tbl_103_186_fkey2: time=0.580 calls=13"
"Trigger for constraint tbl_103_187_fkey2: time=0.388 calls=13"
...and so on, one "Trigger for constraint" line per referencing table, up to all ~1,800 of them. Each trigger is cheap on its own, but the checks fire for every deleted row on every constraint, and the times add up.
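
Note that EXPLAIN ANALYZE actually executes the DELETE; as Yavuz Selim points out in the comments, it is best wrapped in a transaction that is rolled back (values taken from the plan above):

    -- EXPLAIN ANALYZE runs the statement for real, so roll back
    -- afterwards to keep the rows.
    BEGIN;
    EXPLAIN ANALYZE DELETE FROM tbl_schedule WHERE event_id_fk = 259;
    ROLLBACK;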

Thanks in advance.

Ehsan Waris
  • I see a potentially bigger problem here, namely that there are 1,800 other tables which might have some connection to the table in question. Optimizations aside, if this table has massive use, then _any_ operation may not perform well. Maybe you can edit your question and tell us how you ended up with this design. – Tim Biegeleisen Jan 28 '20 at 05:54
  • Can you provide the execution plan? explain analyze would be nice. But BE CAREFUL! analyze actually runs the query, so be sure you are in a transaction: BEGIN; explain analyze ...; ROLLBACK; – Yavuz Selim Jan 28 '20 at 06:27
  • @TimBiegeleisen Only deletes and (horribile dictu) key updates would be affected. – Laurenz Albe Jan 28 '20 at 07:02
  • @TimBiegeleisen I have posted some details regarding the design at: https://stackoverflow.com/questions/59928694/delete-query-in-postgresql-taking-too-long?noredirect=1#comment105980792_59928694 If you need more info, let me know. – Ehsan Waris Jan 28 '20 at 07:37
  • I agree with Tim, a single table referenced by that many tables might indicate a design that could be improved. @EhsanWaris: your link doesn't explain why `schedule` needs to be _referenced_ by so many other tables. –  Jan 28 '20 at 09:07
  • Totally unrelated to your question, but: https://dba.stackexchange.com/questions/154251/ –  Jan 28 '20 at 09:09
  • Let's assume the design decision is not in my control at the moment. Can we do anything to solve this problem? – Ehsan Waris Jan 28 '20 at 09:15
  • @TimBiegeleisen I have updated some details of the design; hope it helps. – Ehsan Waris Jan 28 '20 at 10:16

0 Answers