I have the following table:
DROP TABLE IF EXISTS event;
CREATE TABLE event(
    kind VARCHAR NOT NULL,
    num  INTEGER NOT NULL
);
ALTER TABLE event ADD PRIMARY KEY (kind, num);
The idea is that I want to use the num column to maintain separate increment counters for each kind of event. So num is like a dedicated auto-increment sequence for each different kind.
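For example (hypothetical rows inserted by hand, purely to illustrate the intended contents):

INSERT INTO event(kind, num) VALUES ('A', 0), ('A', 1), ('B', 0), ('A', 2);
-- kind 'A' has used 0, 1, 2 so far; kind 'B' has used only 0.
-- The next event of kind 'A' should therefore get num = 3, the next of kind 'B' num = 1:
SELECT COALESCE(MAX(num), -1) + 1 AS next_num FROM event WHERE kind = 'B';  -- returns 1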
Assuming multiple clients/threads writing events (potentially of the same kind) into that table concurrently, is there any difference in terms of the required transaction isolation level between (a) executing the following block:
BEGIN TRANSACTION;
DO
$do$
DECLARE
    nextNum INTEGER;
BEGIN
    SELECT COALESCE(MAX(num), -1) + 1 INTO nextNum FROM event WHERE kind = 'A';
    INSERT INTO event(kind, num) VALUES ('A', nextNum);
END;
$do$;
COMMIT;
... and (b) combining the SELECT and INSERT into a single statement:
INSERT INTO event(kind, num)
(SELECT 'A', COALESCE(MAX(num),-1)+1 FROM event WHERE kind='A');
From some tests I ran, it seems that in both cases I need the serializable transaction isolation level. What's more, even with the serializable transaction isolation level, my code has to be prepared to retry due to the following error in highly concurrent situations:
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.
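So on that error the client has to roll back and re-run the whole transaction from scratch. Roughly, the statements issued on one retry look like this (a sketch assuming the single-statement variant (b) and an explicitly requested isolation level; the retry loop itself lives in the application code, keyed on SQLSTATE 40001):

ROLLBACK;                                        -- abandon the failed attempt
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- start over at the same isolation level
INSERT INTO event(kind, num)
(SELECT 'A', COALESCE(MAX(num),-1)+1 FROM event WHERE kind='A');
COMMIT;                                          -- may fail again and need another retry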
In other words, merging the SELECT into the INSERT does not seem to confer any benefit in terms of atomicity, nor does it allow a lower/more lenient transaction isolation level to be used. Am I missing anything?
(This question is broadly related in that it asks for a PostgreSQL pattern to facilitate the generation of multiple sequences inside a table. So, just to be clear, I am not asking for the right pattern to do that sort of thing; I just want to understand whether the block of two statements is in any way different from a single merged INSERT/SELECT statement.)