
Currently I am trying to learn more about databases. Right now I'm on DB2, with LOCKTIMEOUT set to 30 seconds to test it. I ran several thousand transactions simultaneously to stress-test the server and recorded the time taken for each stored procedure (SP) call: the timer starts right after the DECLARE section in the SP, and the end time is recorded just before the SP ends. I found that some transactions took much longer than 30 seconds (close to 70 seconds at some points). Did I do something wrong (I only set the LOCKTIMEOUT parameter in the db cfg), or is this just the way it works?

begin
  -- record the start time
  -- the actual transaction
  -- record the end time, and return the elapsed time as a result set
end

Let's say threads 1, 2, and 3 enter at the same time: thread 1 holds the lock for 29 seconds, then thread 2 holds it for another 29 seconds. Will thread 3 get killed for waiting too long? And if it doesn't get killed and completes its task in 10 seconds, does the measured transaction time become 68 seconds, or just 10 seconds?
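Here is the arithmetic I have in mind, sketched in Python. The timings come from the scenario above; the note about the wait exceeding LOCKTIMEOUT is my assumption about how DB2 counts a continuous lock wait, which is exactly what I'm unsure of:

```python
LOCKTIMEOUT = 30          # seconds, as set in the db cfg
hold_1, hold_2 = 29, 29   # how long threads 1 and 2 each hold the lock
work_3 = 10               # thread 3's own work once it finally gets the lock

wait_3 = hold_1 + hold_2  # thread 3 waits the whole time: 58 s
# The SP timer starts before the lock wait, so it would measure wait + work:
measured_3 = wait_3 + work_3
print(measured_3)         # 68 -- not 10

# My assumption: a single continuous wait of 58 s exceeds LOCKTIMEOUT,
# so thread 3 would be rolled back after ~30 s instead of finishing.
print(wait_3 > LOCKTIMEOUT)  # True
```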

Can anyone clear up my confusion?

Fuad Hanif
  • platform and version? – Charles Jun 02 '15 at 15:38
  • I'm using DB2 10.5 ESE running on CentOS 6.5 – Fuad Hanif Jun 02 '15 at 16:14
  • It's going to depend on your statements and what transaction setting you're using. If you update the entire table, you will lock everything else out. If you update only one row by searching via primary key, you'll most likely only lock that one row, and updates from other transactions will interleave. Note that, unlike most application languages, a transaction block doesn't stop other threads from running that part of a procedure; it just keeps certain pieces of data in sync. – Clockwork-Muse Jun 03 '15 at 14:09

1 Answer


It turned out my understanding of the lock timeout was wrong. A transaction running for longer than the lock timeout is possible, because LOCKTIMEOUT kills (rolls back) the transaction that has been waiting for a lock longer than the time in this parameter, not the one that is holding the lock.

Fuad Hanif