
I'm using PostgreSQL 9.6 (Ubuntu 16.04) and I have a small table T of about 10k rows, where each row is updated about twice per minute at peak times (something like UPDATE T SET c1 = ?, c2 = ? WHERE id = ?). Furthermore, this is the only update operation performed on this table, and insertions and deletions are infrequent.

However, I've noticed that SELECT queries on T are a bit slow and, after some research, I've found that "an update in PostgreSQL is actually a transaction of DELETE+INSERT (due to MVCC)". Moreover, I've found similar issues like this one and this one, but regarding UPDATE queries.

My question is: do frequent updates to a row slow down SELECT queries? If so, what is the proper way to handle this?

  • This depends on what the selects are. Are the updated columns in indexes and used as conditions on selects? What does the query plan say? – Sami Kuhmonen Jan 04 '18 at 12:01
  • 1
    you can use VACUUM after a while to maintain performance – lat long Jan 04 '18 at 12:10
  • @SamiKuhmonen: the updated columns are not in indexes and are not used as conditions in selects, though one of the queries orders by one of these columns. I suppose frequent updates on non-indexed columns are better than on indexed ones, right? Regarding the query plan, I'll update the question with the result of `EXPLAIN ANALYZE` – Diogo M. Santana Jan 04 '18 at 12:30
  • Much depends on the complete table definition, transaction handling, the exact queries you typically run and your autovacuum settings. [postgresql-performance] questions need more details. See instructions in the tag info. – Erwin Brandstetter Jan 04 '18 at 12:30

2 Answers


Yes, as you said, frequent updates/deletes can be a reason for slow queries. Any deleted row (whether from an actual DELETE or from an UPDATE) is just marked as dead and stays in the data page until the space is reused by another insert. To avoid this situation, you should run proper maintenance routines like VACUUM on the table. Another simple solution would be:

CREATE TABLE similar_table (LIKE original_table INCLUDING ALL);

INSERT INTO similar_table SELECT * FROM original_table;

DROP TABLE original_table;

ALTER TABLE similar_table RENAME TO original_table;

For small tables, this rebuild can be used as an alternative to VACUUM.
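For reference, the standard maintenance commands mentioned above would look like this (using a placeholder table name t):

```sql
-- Reclaim dead row versions so their space can be reused;
-- does not take an exclusive lock.
VACUUM (VERBOSE, ANALYZE) t;

-- Rewrite the whole table compactly, similar to the manual
-- rebuild above; takes an ACCESS EXCLUSIVE lock while running.
VACUUM FULL t;
```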

You should also take a look at the query plan. Poorly written queries can also make the selects slow.


If a row is deleted or updated, the old versions are cleaned up automatically by the autovacuum process and the space is re-used. Moreover, if you only update non-indexed columns, PostgreSQL can often store the new row version on the same page without touching the indexes at all (a "HOT" – Heap Only Tuple – update), so the "delete/insert" cost is largely avoided.
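You can check how many of your updates are actually HOT by querying the statistics views (this sketch assumes your table is named t):

```sql
-- n_tup_hot_upd close to n_tup_upd means most updates avoid
-- index maintenance entirely.
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 't';
```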

It might be a good idea to create the table with a smaller fillfactor to leave free space on each database block for storing the new row versions (you might want to try something like 60% or 70%).
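A sketch of setting the fillfactor, with illustrative table and column names:

```sql
-- Reserve ~30% free space per page so updated row versions
-- can stay on the same page (helps HOT updates).
CREATE TABLE t (
    id integer PRIMARY KEY,
    c1 integer,
    c2 integer
) WITH (fillfactor = 70);

-- Or change it on an existing table; the new setting only
-- applies to pages written afterwards (or after a rewrite).
ALTER TABLE t SET (fillfactor = 70);
```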

"Frequent" updates are typically only a problem if they are so frequent that auto-vacuum can't keep up or if you have so many concurrent and open transactions that auto-vacuum can't free any space. Very often this can be mitigated by making auto-vacuum more aggressive.
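Per-table autovacuum settings can be tightened without touching the global configuration; the values below are a hedged starting point, not a recommendation:

```sql
-- Trigger autovacuum after ~1% of rows are dead instead of the
-- default 20%, so dead row versions are reclaimed sooner.
ALTER TABLE t SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold = 100
);
```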

A query doing a single row lookup through an indexed column is very unlikely to be affected by frequent updates anyway. If you do see slowdowns, then you might want to reindex the table on a regular basis.
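If index bloat does turn out to be the issue, the rebuild is straightforward (t is a placeholder name):

```sql
-- Rebuild all indexes on the table. Note that plain REINDEX takes
-- an exclusive lock on the index; REINDEX CONCURRENTLY is not
-- available until PostgreSQL 12.
REINDEX TABLE t;
```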