
One of my queries is causing my Postgres instance to freeze, and it also results in some odd behaviour, such as increased read/write IOPS and the database eating all the space on the device. Here are some graphs that demonstrate this.

Before deleting the query:

[Graph: Free Storage]

After deleting the query:

[Graph: Free Storage]

Any idea why this is happening?

Sudhanshu Mishra
  • "Any idea why is this happening?" No because this question is unclear and incomplete.. Share the table structure ( https://stackoverflow.com/questions/2593803/how-to-generate-the-create-table-sql-statement-for-an-existing-table-in-postgr ) and some example data on http://www.sqlfiddle.com also show us the query.. – Raymond Nijland May 21 '18 at 12:08
  • 2
    Queries use temp files if they need to sort data or do something else with it and don't fit into memory limit set with `work_mem`. I saw badly done queries that would take 200-500GB of storage and still increasing. Sometimes this can't be avoided, but can't really say with no query or `explain` provided. – Łukasz Kamiński May 21 '18 at 12:13
  • Could you please add more details about the query? – backslash112 Jan 08 '19 at 16:51

2 Answers


In my experience this happens when:

  • There are lots of dead tuples in the tables of the DB.
  • The query execution is using disk space (temporary files are generated during execution because `work_mem` is too low).
  • You have lots of orphan files (less common).

If you want to read more, AWS has a knowledge-center article on this error for RDS: https://aws.amazon.com/premiumsupport/knowledge-center/diskfull-error-rds-postgresql/
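A minimal diagnostic sketch for the first two points, using only the standard PostgreSQL statistics views (nothing here is specific to this question's schema):

```sql
-- Tables with the most dead tuples (candidates for VACUUM):
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Temporary files written per database since the statistics were last reset;
-- large values point at queries spilling past work_mem:
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;
```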

calhappy

There could be many causes:

  1. Firstly, it depends on the size of your database. Could you provide some additional information?
  2. What does your query do?
  3. What is the size of your connection pool?
  4. Do you use streaming replication?

It seems to me that this could be an indexing problem. Try checking the indexes on the table(s) affected by the query; see the sketch below. The problem could also be a very large database that requires a lot of RAM to process.
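For example, this sketch lists a table's indexes together with how often each has been scanned and how large it is; `orders` is a hypothetical name, substitute one of the affected tables:

```sql
-- Index usage and size for one table ('orders' is a placeholder name):
SELECT indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'orders';
```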

Don't forget to check the joins included in the query. A badly formed join can degenerate into an unwanted cross join; the `EXPLAIN` sketch below shows one way to spot this.
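As an illustration, running the statement under `EXPLAIN (ANALYZE, BUFFERS)` surfaces both sorts that spill to disk and joins that blow up. The two-table query here is made up; substitute the real one:

```sql
-- Replace the SELECT below with the actual problem query.
-- In the output, "Sort Method: external merge  Disk: ..." means the sort
-- spilled to temp files (work_mem too low); join nodes whose actual row
-- counts explode suggest an accidental cross join.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
ORDER BY o.created_at;
```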

Number16BusShelter
  • The database size is roughly 6.5 GB. I'll update the question with the query in a while. – Sudhanshu Mishra May 21 '18 at 11:40
  • I think this is a big problem and such questions are worth discussing. Given the complexity of the question, I think it's worth opening a discussion on GitHub. I've faced problems like this on production servers and they caused me a lot of trouble. So why not? :) – Number16BusShelter May 21 '18 at 12:18