
I have a function that needs to perform a long update on multiple large tables. During the update, 2-3 tables at a time need to be locked in EXCLUSIVE mode.

Since not all the tables need to be locked at the same time, ideally I'd like to LOCK only the tables I'm updating at that moment, and then release the locks once I'm done with them.

E.g.:

-- Lock first pair of tables
LOCK TABLE tbl1_a IN EXCLUSIVE MODE;
LOCK TABLE tbl1_b IN EXCLUSIVE MODE;

-- Perform the update on tbl1_a and tbl1_b

-- Release the locks on tbl1_a and tbl1_b
--  HOW???

-- Proceed to the next pair of tables
LOCK TABLE tbl2_a IN EXCLUSIVE MODE;
LOCK TABLE tbl2_b IN EXCLUSIVE MODE;

Unfortunately, there is no equivalent of an UNLOCK statement in plpgsql. The normal way to release a lock is to COMMIT the transaction, but that is not possible inside a function.

Is there any solution for this? Some way to explicitly release the lock before the function is done? Or to run some kind of sub-transaction (perhaps by running each update in a separate function)?

UPDATE

I accepted that there is no solution. I'll write each update as a separate function and coordinate from outside the db. Thanks, everyone.
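
For example, a driver script run from the client (update_pair_1 / update_pair_2 are hypothetical wrappers around the lock-and-update steps above), with one transaction per step so each COMMIT releases that step's locks:

BEGIN;
SELECT update_pair_1();   -- LOCKs and updates tbl1_a / tbl1_b
COMMIT;                   -- releases those locks

BEGIN;
SELECT update_pair_2();   -- LOCKs and updates tbl2_a / tbl2_b
COMMIT;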

panta82

3 Answers


In Postgres 11 or later, consider a PROCEDURE, which allows transaction control.
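
A minimal sketch of that route, assuming Postgres 11+ and the table names from the question (update_in_steps and the val column are just placeholders for the real work):

CREATE PROCEDURE update_in_steps()
LANGUAGE plpgsql AS
$$
BEGIN
   LOCK TABLE tbl1_a, tbl1_b IN EXCLUSIVE MODE;
   UPDATE tbl1_a SET val = val + 1;   -- placeholder update
   UPDATE tbl1_b SET val = val + 1;
   COMMIT;   -- ends the transaction and releases both locks; a new transaction starts automatically

   LOCK TABLE tbl2_a, tbl2_b IN EXCLUSIVE MODE;
   UPDATE tbl2_a SET val = val + 1;
   UPDATE tbl2_b SET val = val + 1;
   COMMIT;
END
$$;

CALL update_in_steps();   -- CALL must not run inside an explicit transaction block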


With functions, there is no way. Functions in Postgres are atomic (always inside a transaction) and locks are released at the end of a transaction.

You might be able to work around this with advisory locks. But those are not the same thing. All competing transactions have to play along. Concurrent access that is not aware of advisory locks will spoil the party.

There is a related code example on dba.SE.
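
Independent of that, a minimal sketch of the advisory-lock idea, assuming every competing session agrees to use the same arbitrary lock keys (1 and 2 here) and that the UPDATEs are placeholders:

SELECT pg_advisory_lock(1);       -- "locks" the first pair of tables by convention
UPDATE tbl1_a SET val = val + 1;
UPDATE tbl1_b SET val = val + 1;
SELECT pg_advisory_unlock(1);     -- session-level advisory locks can be released mid-transaction

SELECT pg_advisory_lock(2);       -- move on to the second pair
UPDATE tbl2_a SET val = val + 1;
UPDATE tbl2_b SET val = val + 1;
SELECT pg_advisory_unlock(2);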

Or you might get somewhere by "cheating" with autonomous transactions via dblink.
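
To illustrate the dblink trick: the statements run on a separate connection and commit there on their own, so the locks are gone as soon as the call returns. The connection string is a placeholder, the updates are placeholders, and the dblink extension must be installed:

CREATE EXTENSION IF NOT EXISTS dblink;

SELECT dblink_exec(
   'dbname=mydb',   -- placeholder connection string back into the same database
   'BEGIN;
    LOCK TABLE tbl1_a, tbl1_b IN EXCLUSIVE MODE;
    UPDATE tbl1_a SET val = val + 1;
    UPDATE tbl1_b SET val = val + 1;
    COMMIT;');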

Or you re-assess your problem and split it up into a couple of separate transactions.

Erwin Brandstetter
  • As functions are atomic, if we lock a table in a function and then call another function inside it, there is no need to lock the same table again, because it's already locked by the first (outer) function within its own transaction, right? If we lock the same table in the second function, that LOCK will effectively be "omitted" because the table is already locked, and it will work as if we hadn't locked it again in the second function, correct? – Ignacio Sep 01 '20 at 19:49
  • @Ignacio: That's correct. Locks are accumulated along the way within a transaction and released at the end. There is no need to lock a table a second time (with the same lock strength). – Erwin Brandstetter Sep 02 '20 at 00:24
  • Thank you very much, Erwin! Your contributions to PostgreSQL-related topics are great! – Ignacio Sep 02 '20 at 17:18

In pg11 you now have PROCEDUREs, which let you release locks via COMMIT. I just converted a bunch of parallel-executed functions running ALTER TABLE ... ADD FOREIGN KEY ... that had lots of deadlock problems, and it worked nicely.

https://www.postgresql.org/docs/current/sql-createprocedure.html
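
A rough sketch of that pattern (table and constraint names are made up); each COMMIT releases the locks taken by the previous ALTER TABLE before the next one starts:

CREATE PROCEDURE add_fks()
LANGUAGE plpgsql AS
$$
BEGIN
   ALTER TABLE child_a ADD CONSTRAINT child_a_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent (id);
   COMMIT;   -- releases the locks from the first ALTER TABLE

   ALTER TABLE child_b ADD CONSTRAINT child_b_parent_fk
      FOREIGN KEY (parent_id) REFERENCES parent (id);
   COMMIT;
END
$$;

CALL add_fks();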

Jan Katins

Not possible. From the documentation: "Once acquired, a lock is normally held till end of transaction. But if a lock is acquired after establishing a savepoint, the lock is released immediately if the savepoint is rolled back to. This is consistent with the principle that ROLLBACK cancels all effects of the commands since the savepoint. The same holds for locks acquired within a PL/pgSQL exception block: an error escape from the block releases locks acquired within it."

http://www.postgresql.org/docs/9.3/static/explicit-locking.html
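
A small illustration of the last sentence of that quote, reusing a table name from the question. Note that the rollback also discards any work done inside the block, so this is not a way to keep your updates and drop the lock:

DO $$
BEGIN
   BEGIN
      LOCK TABLE tbl1_a IN EXCLUSIVE MODE;
      RAISE EXCEPTION 'force an error to escape this block';
   EXCEPTION WHEN others THEN
      -- the block was rolled back to its implicit savepoint,
      -- so the EXCLUSIVE lock on tbl1_a is released again here
      RAISE NOTICE 'lock on tbl1_a has been released';
   END;
END
$$;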

user_0