I am accessing a PostgreSQL table using serializable transaction isolation. I am doing something like this (with an existing psycopg2 connection conn and a cursor on that connection, cur):
while True:
    try:
        cur.execute(query)
        break
    except TransactionRollbackError:
        [sleep a little]
        continue
    except Exception:
        [handle error here]
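For reference, each process sets up its connection with serializable isolation along these lines (a minimal sketch, not my actual code: the DSN, table name, and query are placeholders):

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_SERIALIZABLE, TransactionRollbackError

# Placeholder DSN; each worker process opens its own connection like this.
conn = psycopg2.connect("dbname=mydb user=myuser")
conn.set_session(isolation_level=ISOLATION_LEVEL_SERIALIZABLE)
cur = conn.cursor()

# Placeholder statement standing in for the real select/increment/upsert work.
query = "UPDATE counters SET value = value + 1 WHERE name = 'example'"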
The point of this is to retry in case of serialization contention. Now, this works fine much of the time. But often, after it has spent one iteration in the TransactionRollbackError handler, I get this error:

current transaction is aborted, commands ignored until end of transaction block

Apparently, spinning this way to avoid serialization contention is not appropriate? Should I be doing this a different way?
Some notes: I am accessing the table from several processes that are all the same and doing the same things: selecting, incrementing, and updating/inserting into the table. Each of these processes has its own connection conn; they are not sharing a connection.
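To illustrate the workload, each process does something like the following per item (a simplified sketch: the counters table and its columns are made up for illustration, and each execute call actually goes through the retry loop shown above):

# Made-up schema for illustration: counters(name text primary key, value int).
cur.execute("SELECT value FROM counters WHERE name = %s", (name,))
row = cur.fetchone()
if row is None:
    cur.execute("INSERT INTO counters (name, value) VALUES (%s, 1)", (name,))
else:
    cur.execute("UPDATE counters SET value = %s WHERE name = %s", (row[0] + 1, name))
conn.commit()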
Another note: it seems that after going through the TransactionRollbackError handler once, on the next spin of the while loop it ends up in the generic Exception handler instead.
Still another note: The number of processes running at the same time has a direct effect on the frequency of errors, in that more processes tend to produce more errors. So there is some kind of contention going on. I am under the impression that using serializable transaction isolation with retries (as in my demo code) would fix this.