
I use libpqxx to connect to PostgreSQL, and everything was fine until I ran serializable transactions inserting the same row of one table.

table:

CREATE TABLE t1(id integer primary key);

postgres 9.4.4_x64

pqxx::connection c1(conn_str);
pqxx::connection c2(conn_str);

pqxx::transaction<pqxx::isolation_level::serializable> t1(c1);
t1.exec("INSERT INTO t1 (id) VALUES (25)");

pqxx::transaction<pqxx::isolation_level::serializable> t2(c2);
t2.exec("INSERT INTO t1 (id) VALUES (25)"); //hang here

t2.commit();
t1.commit();

My program hangs forever, inside the PQexec function. Why? I thought PostgreSQL would roll back one of the transactions, but no - it just hangs.

UPDATE: same result with pure libpq:

c1 = PQconnectdb(conninfo);
c2 = PQconnectdb(conninfo);

res1 = PQexec(c1, "BEGIN");
PQclear(res1);

res1 = PQexec(c1, "INSERT INTO t1 (id) VALUES (104)");
PQclear(res1);

res2 = PQexec(c2, "BEGIN");
PQclear(res2);

res2 = PQexec(c2, "INSERT INTO t1 (id) VALUES (104)"); // hangs here, same as with libpqxx
PQclear(res2);

res2 = PQexec(c2, "END");
PQclear(res2);

res1 = PQexec(c1, "END");
PQclear(res1);

postgresql 9.1 - same hang

3 Answers


The hang has nothing to do with the serializable isolation level.

I'm no libpqxx expert, but your example appears to be running both transactions in a single thread. That's your problem.

t2.exec("INSERT INTO t1 (id) VALUES (25)");

The above statement has to wait for t1 to commit or roll back before it can complete, but t1.commit() never gets a chance to execute. Deadlock! This is absolutely normal, and will happen regardless of your chosen isolation level. It is simply a consequence of trying to run statements from 2 concurrent transactions in the same thread of execution, which is not a good idea.

Try running both transactions on different threads, and your hang will go away.
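
For illustration, here is a minimal sketch of the threaded version (not from the original post; it assumes the same conn_str and table as in the question, and one of the two inserts is expected to fail once the other commits):

#include <pqxx/pqxx>
#include <iostream>
#include <string>
#include <thread>

// Hypothetical helper for this sketch: each transaction runs on its own
// connection *and* its own thread, so the first commit can execute while
// the other insert is blocked.
void insert_row(const std::string &conn_str)
{
    try {
        pqxx::connection c(conn_str);
        pqxx::transaction<pqxx::isolation_level::serializable> tx(c);
        tx.exec("INSERT INTO t1 (id) VALUES (25)");
        tx.commit();                    // whichever thread gets here first wins
    } catch (const pqxx::sql_error &e) {
        std::cerr << e.what() << '\n';  // the other thread gets an error (see the next answer)
    }
}

int main()
{
    const std::string conn_str = "...";  // same connection string as in the question
    std::thread a(insert_row, conn_str);
    std::thread b(insert_row, conn_str);
    a.join();
    b.join();
}

With one thread per connection, the blocked insert is released as soon as the first transaction commits, and no hang occurs.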

sstan
  • The single thread might explain how two transactions are enough for a deadlock. – Erwin Brandstetter Aug 28 '15 at 17:23
  • One thread, but different connections. Different connections are like different programs. – immortaldragon Aug 29 '15 at 07:20
  • No. Different connections in a single thread is ***not*** like different programs. To truly simulate a normal multi-user scenario, you need 1 thread per connection. Doing it that way ensures that `t1.commit();` eventually executes, which unblocks `t2.exec("INSERT INTO t1 (id) VALUES (25)");`. But running both transactions in a single thread means that `t1.commit();` never gets a chance to run, which is needed to unblock `t2`. Result: deadlock caused by the application. The database has no fault here, and is working exactly as expected. – sstan Aug 30 '15 at 04:10
  • Note that even if your own program has multiple threads, the postgres client library you're using could be single threaded and serialize all queries. In addition, if you're using something like pgbouncer with pool_mode=statement, similar issues may appear. Use pgbouncer with pool_mode=session and it should fix issues like this. – Mikko Rantalainen Apr 21 '20 at 08:16

If only the two transactions are involved, you should get a unique violation error for transaction t2 - exactly the same as with the default READ COMMITTED isolation level:

ERROR:  duplicate key value violates unique constraint "t1_pkey"
DETAIL:  Key (id)=(25) already exists.

t1 tried the insert first and wins, regardless of which transaction tries to commit first. All following transactions trying to insert the same key wait for the first one. Again, this is valid for READ COMMITTED and SERIALIZABLE alike.

A possible explanation would be that a third transaction is involved, which tried to insert the same key first and is still open. Or several such transactions, artefacts of your tests.

All transactions wait for the first one that tried to insert the same key. If that one commits, all other get a unique violation. If it rolls back, the next in line gets its chance.

To check, look at pg_stat_activity (while connected to the same database):

SELECT * FROM pg_stat_activity;

More specifically:

SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';

Then commit / roll back the idle transaction. Or brute-force: terminate that connection.
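
As a sketch (not part of the original answer), the same check can be run from libpqxx; pg_terminate_backend() needs sufficient privileges, and the column names assume PostgreSQL 9.2 or later:

#include <pqxx/pqxx>
#include <iostream>
#include <string>

int main()
{
    const std::string conn_str = "...";  // same connection string as in the question
    pqxx::connection c(conn_str);
    pqxx::work w(c);

    // List sessions that hold an open transaction but are not executing anything.
    pqxx::result r = w.exec(
        "SELECT pid, xact_start, query "
        "FROM pg_stat_activity WHERE state = 'idle in transaction'");

    for (const auto &row : r)
        std::cout << row["pid"].as<int>() << " | "
                  << row["xact_start"].c_str() << " | "
                  << row["query"].c_str() << '\n';

    // Brute-force option from the answer, for a pid found above:
    // w.exec("SELECT pg_terminate_backend(12345)");  // 12345 is a placeholder pid
}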

Erwin Brandstetter
  • 1
    pg_stat_activity - display this 2 thransaction (backend_start, xact_start, query_start, state_change, waiting, backend_xid, backend_xmin): t1 ("2015-08-29 10:48:28.636+03", "2015-08-29 10:48:28.683+03", "2015-08-29 10:48:28.683+03", "2015-08-29 10:48:28.683+03", f, "idle in transaction", 757, _), t2 ("2015-08-29 10:48:28.667+03", "2015-08-29 10:48:28.683+03", "2015-08-29 10:48:28.683+03", "2015-08-29 10:48:28.683+03", t, "active", 758, 757) – immortaldragon Aug 29 '15 at 07:58

from the postgres team:

"In concurrent programming, a deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does." https://en.wikipedia.org/wiki/Deadlock

By that definition, it is not a deadlock in the first place. "c1" is not waiting for "c2" to finish and can happily go about its business with id=104. However, your application never gives c1 the next command because it is stubbornly waiting for "c2" to perform its insert - which it cannot do, due to the lock held by "c1".

Lock testing can be done within a single thread but deadlock testing implies that both connections need to simultaneously be attempting to execute a command - which a single program cannot accomplish without threading.

David J.

  • So it's not a deadlock within Postgres (which would be resolved automatically by terminating one (or more) of the competing transactions), but it is still a deadlock in your application. – Erwin Brandstetter Sep 02 '15 at 14:33