I have a PL/pgSQL function which moves data from a staging table to our target table. The process runs every night. Sometimes, due to a server restart or maintenance issues, we have to execute the process manually.

The problem I am facing: whenever we start the process manually after 7 AM, it takes almost 2 hours to complete (reading from the staging table and inserting into the target table). But whenever it runs on schedule, i.e., before 7 AM, it takes 22-25 minutes on average.

What could be the issue? If required, I can share my function snippet here.

1 Answer

The typical reason would be general concurrent activity in the database, which competes for the same resources as your function and can cause lock contention. Check your DB log for activity starting around 7 AM.
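If the log shows nothing obvious, a snapshot of `pg_stat_activity` taken while the manual run is slow can reveal what else is going on. A minimal sketch, assuming PostgreSQL 9.6 or later (where the `wait_event` columns exist; older versions have a boolean `waiting` column instead):

```sql
-- What else is running right now, and what is it waiting on?
SELECT pid, usename, state, wait_event_type, wait_event,
       now() - xact_start AS xact_age,
       left(query, 60)    AS query_head
FROM   pg_stat_activity
WHERE  state <> 'idle'
ORDER  BY xact_start;
```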

A function always runs as a single transaction. Locks are acquired along the way and only released at the end of the transaction. This makes long-running functions particularly vulnerable to lock contention.
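To check whether lock contention is actually the problem while the function runs, you can ask which sessions are blocked and by whom. Another minimal sketch, assuming PostgreSQL 9.6 or later, where `pg_blocking_pids()` is available:

```sql
-- Blocked sessions and the sessions blocking them.
SELECT blocked.pid               AS blocked_pid,
       left(blocked.query, 60)  AS blocked_query,
       blocking.pid              AS blocking_pid,
       left(blocking.query, 60) AS blocking_query
FROM   pg_stat_activity blocked
JOIN   pg_stat_activity blocking
       ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));
```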

You may be able to optimize general performance, as well as behavior towards concurrent transactions, to make it run faster. Or, more radically: if at all possible, split your big function into separate parts that you call in separate transactions. Locks are then released at each commit instead of being held for the whole run.

The same technique applies to huge updates: split them into batches and commit in between. There are additional things to consider when packing multiple big operations into a single function, since everything runs inside one transaction; a sketch of the batched approach follows.
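A minimal sketch of the batched approach, assuming PostgreSQL 11 or later (where transaction control with `COMMIT` is allowed inside a procedure). The table names `staging` and `target` and the column `id` are placeholders for your actual schema:

```sql
-- Hypothetical sketch: move rows from staging to target in batches,
-- committing after each batch so locks are released early.
CREATE OR REPLACE PROCEDURE move_staging_batches(_batch_size int DEFAULT 10000)
LANGUAGE plpgsql AS
$$
DECLARE
   _rows int;
BEGIN
   LOOP
      WITH del AS (
         DELETE FROM staging
         WHERE  id IN (SELECT id FROM staging LIMIT _batch_size)
         RETURNING *
      )
      INSERT INTO target
      SELECT * FROM del;

      GET DIAGNOSTICS _rows = ROW_COUNT;
      COMMIT;  -- ends the current transaction and releases its locks
      EXIT WHEN _rows < _batch_size;
   END LOOP;
END
$$;

CALL move_staging_batches();
```

On versions before 11, functions cannot commit, so the same effect requires driving the batches from the client, e.g. a script that issues one function call per transaction.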

  • Can you tell what the idle status of a query indicates in Postgres? What I have realized is that there are some other applications accessing the table used by the above-mentioned function. Their queries remain in 'idle' status even after completion. How would this affect performance? – Yousuf Sultan Jul 15 '15 at 04:34
  • @YousufSultan: The `idle` state is not necessarily suspicious - just a session that's waiting for input. `idle in transaction` and `idle in transaction (aborted)` would indicate problems. See http://stackoverflow.com/a/28596354/939860 – Erwin Brandstetter Jul 15 '15 at 13:13
  • Thanks... The problem got resolved by closing the sessions of the other batch jobs that were accessing the table. They were opening their sessions, but the sessions were not getting closed. – Yousuf Sultan Jul 16 '15 at 06:48
  • @YousufSultan. As long as they close their *transaction*, an open session should not be a problem at all. – Erwin Brandstetter Jul 16 '15 at 11:44