We are in the process of migrating an existing Java application from Oracle DB to PostgreSQL, and recently we noticed the following "unexpected" behaviour in PostgreSQL:
session#1:
db=> begin;
BEGIN
db=*> select r_object_id from dm_sysobject_s where r_object_id='08000000800027d6' for update;
r_object_id
------------------
08000000800027d6
session#2:
db=> begin;
BEGIN
db=*> select r_object_id from dm_sysobject_s where r_object_id='08000000800027d6' for update nowait;
ERROR: could not obtain lock on row in relation "dm_sysobject_s"
db=!> select r_object_id from dm_sysobject_s where r_object_id='08000000800027d6' for update nowait;
ERROR: current transaction is aborted, commands ignored until end of transaction block
In Oracle, meanwhile, everything works as expected:
session#1:
SQL> set autocommit off;
SQL> select r_object_id from dm_sysobject_s where r_object_id='0800012d80000122' for update;
R_OBJECT_ID
----------------
0800012d80000122
session#2:
SQL> set autocommit off;
SQL> select r_object_id from dm_sysobject_s where r_object_id='0800012d80000122' for update nowait;
select r_object_id from dm_sysobject_s where r_object_id='0800012d80000122' for update nowait
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
SQL> select r_object_id from dm_sysobject_s where r_object_id='0800012d80000122' for update nowait;
select r_object_id from dm_sysobject_s where r_object_id='0800012d80000122' for update nowait
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
SQL> commit;
I have already read the PSQLException: current transaction is aborted, commands ignored until end of transaction block topic, and I understand that the only way to preserve the previous Oracle behaviour (i.e. keeping the transaction active after an unsuccessful attempt to lock a row) at the DB layer is to use savepoints. OK, so far so good, and the code below does what is expected, at least when the Supplier<T> supplier
performs DB operations only (I do understand the risk of getting discrepancies between the DB and the persistence context when I perform more sophisticated operations backed by a savepoint):
@Override
public <T> T withSavepoint(SessionImplementor session, Supplier<T> supplier) {
    return session.doReturningWork(connection -> {
        DatabaseMetaData metaData = connection.getMetaData();
        if (!metaData.supportsSavepoints()) {
            // No savepoint support: run unprotected.
            return supplier.get();
        }
        boolean success = false;
        Savepoint savepoint = null;
        try {
            savepoint = connection.setSavepoint();
            T result = supplier.get();
            success = true;
            return result;
        } finally {
            if (savepoint != null) {
                if (!success) {
                    // Undo only the failed work; the enclosing transaction stays active.
                    connection.rollback(savepoint);
                }
                connection.releaseSavepoint(savepoint);
            }
        }
    });
}
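For completeness, this is roughly how I intend to call the helper; a minimal sketch, where SavepointHelper, tryLock and the assumption that Hibernate surfaces SQLSTATE 55P03 (lock_not_available) as a PersistenceException are illustrative, not my actual code:

import javax.persistence.PersistenceException;
import org.hibernate.engine.spi.SessionImplementor;

public class LockAttempt {

    private final SavepointHelper helper; // hypothetical class holding withSavepoint(...)

    public LockAttempt(SavepointHelper helper) {
        this.helper = helper;
    }

    // Tries to lock the row; returns false instead of leaving the transaction aborted.
    public boolean tryLock(SessionImplementor session, String objectId) {
        try {
            helper.withSavepoint(session, () -> session
                    .createNativeQuery("select r_object_id from dm_sysobject_s"
                            + " where r_object_id = :id for update nowait")
                    .setParameter("id", objectId)
                    .getSingleResult());
            return true;
        } catch (PersistenceException e) {
            // The failed statement was already rolled back to the savepoint,
            // so the surrounding transaction remains usable for a retry.
            return false;
        }
    }
}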
After some research, I discovered that the implementation of savepoints in PostgreSQL (subtransactions) may cause severe performance issues, for example:
- Why we spent the last month eliminating PostgreSQL subtransactions
- PostgreSQL Subtransactions and performance
- PostgreSQL Subtransactions Considered Harmful
However, none of those blog posts actually explains which savepoint patterns are safe and which aren't, so my question is the following:
is it safe or not to use the following savepoint pattern in PostgreSQL:
savepoint s1;
select id from tbl where id=? for update nowait;
rollback to/release s1;
I do see that it is not possible to avoid XID growth (each savepoint starts a subtransaction, which gets its own XID once the row lock is taken); however, I'm not sure about its performance impact. What about other pitfalls?