
We work with DB2 on z/OS, effectively at version 8 (so, in particular, no CUR_COMMIT).

In our application (Java-based, though this should not be relevant) there is a method which runs in a transaction and deletes multiple records from a table called, say, MY_TABLE, based on the value of a certain column which we will call SPECIAL_COLUMN, by executing the statement

DELETE FROM MY_TABLE WHERE SPECIAL_COLUMN=?

Apart from this statement, some other SQL statements are executed which I omit because, for the moment, I think they are not relevant to the problem I describe.
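For concreteness, here is a minimal sketch of how such a method might execute the statement, assuming plain JDBC (class and method names are hypothetical; the real application may go through a framework, which should not matter for the locking behaviour):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class MyTableDao {
    // Runs inside a larger transaction (autoCommit is off); the other,
    // omitted SQL statements share the same unit of work, and the
    // commit happens later, at the end of the transaction.
    void deleteBySpecialColumn(Connection conn, int value) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM MY_TABLE WHERE SPECIAL_COLUMN = ?")) {
            ps.setInt(1, value);
            ps.executeUpdate(); // may delete many rows, touching many pages
        }
    }
}
```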

When the method runs concurrently, we sometimes see the exception

nested exception is com.ibm.db2.jcc.am.SqlException: 
UNSUCCESSFUL EXECUTION CAUSED BY DEADLOCK OR TIMEOUT. REASON CODE 00C90088, TYPE OF RESOURCE 00000302, AND RESOURCE NAME ... SQLCODE=-913, SQLSTATE=57033, DRIVER=3.63.131 

thrown during the execution of the DELETE FROM MY_TABLE WHERE SPECIAL_COLUMN=? statement. According to http://www.idug.org/p/fo/et/thread=20542 this seems to be related to locks placed on "pages".

My questions are the following:

  1. Can two DELETE statements, executed concurrently for the same value of SPECIAL_COLUMN (to which multiple rows correspond), in fact cause such a deadlock? The scenario I have in mind is something like the following: the first statement "puts a lock" on the 1st page, the second statement "puts a lock" on the 2nd page, and then the first statement waits for the lock on the 2nd page while the second statement waits for the lock on the 1st page. Or is the placing of such locks "atomic", meaning that if the first statement has started to place locks, the second will wait?

  2. The same question for different values of SPECIAL_COLUMN (this seems more likely).

If such scenarios are possible and might be the reason for the observed deadlock (otherwise we will have to examine the so far "unsuspicious" remaining SQL), what might be a reasonable solution? I have thought of synchronizing the Java code, but I think it is not a good idea; I have also thought of issuing SELECT ... FOR UPDATE on the rows to be deleted before the delete, but since additional locks would be involved, I am quite in doubt about that as well.
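To make the lock-ordering scenario behind these questions concrete, here is a minimal repro sketch, assuming plain JDBC, a placeholder connection URL, and two existing SPECIAL_COLUMN values; it mirrors the two-transaction experiment described in the comments below (each transaction deletes the same two values, but in opposite order, and holds its locks until commit):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

class DeadlockRepro {
    // Deletes the rows for `first`, waits, then deletes the rows for
    // `second`, holding all acquired locks until the final commit.
    static void deleteInOrder(String url, int first, int second) throws Exception {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "DELETE FROM MY_TABLE WHERE SPECIAL_COLUMN = ?")) {
                ps.setInt(1, first);
                ps.executeUpdate();  // locks pages/rows for `first`
                Thread.sleep(1000);  // widen the window for the inversion
                ps.setInt(1, second);
                ps.executeUpdate();  // waits on locks held by the other transaction
            }
            conn.commit();
        }
    }

    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://host:446/MYDB"; // placeholder
        Thread t1 = new Thread(() -> { try { deleteInOrder(url, 1, 2); } catch (Exception e) { e.printStackTrace(); } });
        Thread t2 = new Thread(() -> { try { deleteInOrder(url, 2, 1); } catch (Exception e) { e.printStackTrace(); } });
        t1.start(); t2.start();
        t1.join(); t2.join(); // one of the two should fail with SQLCODE -911 or -913
    }
}
```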

EDIT:

A link to a report on a similar problem: http://www.dbforums.com/showthread.php?575408-db2-OS390-TABLE-LOCK-DURING-DELETE

  • Yes both are possible, so? – mustaccio Mar 29 '17 at 11:59
  • @mustaccio thank you, I actually asked (at the end of my question) what might be a reasonable solution for this problem. Could you suggest a possible path? – John Donn Mar 29 '17 at 18:07
  • 1
  • If you delete uniquely identified records one by one, in the presence of a proper index, while committing frequently, you are unlikely to experience deadlocks. Any other solution will be subject to lock conflicts and/or race conditions to various extents. A more detailed answer is not possible without intimate knowledge of your database and applications. – mustaccio Mar 29 '17 at 18:35
  • Thank you very much, I will do it this way (remembering of course to delete the rows in a well-defined order, e.g. ordered by the primary key). – John Donn Mar 29 '17 at 20:23
  • The order of deletion is not important at all. I assume you are aware of the performance implications of deleting rows one by one, though. There's always a trade-off. – mustaccio Mar 29 '17 at 20:26
  • I am not sure I understand you. Take two concurrent transactions: the first executes **delete from my_table where special_column=1; delete from my_table where special_column=2;** and the second executes **delete from my_table where special_column=2; delete from my_table where special_column=1;**. There **will** be a deadlock if the records exist and if transaction 2 deletes the record with special_column=2 before the first transaction does. I have just done the experiment with two instances of DbVisualizer simulating two concurrent transactions on DB2 Express (on Windows). – John Donn Mar 29 '17 at 20:38
  • I mentioned "commit frequently" for a reason; ideally each delete should be in its own transaction for this to work. I also mentioned "uniquely identified" records; clearly your `special_column` does _not_ identify records uniquely. – mustaccio Mar 29 '17 at 20:43
  • You are right about special_column; it should have been the primary key instead. What I meant is that if you select the records to delete by doing **select id from my_table where special_column=? order by id with ur** (UR being OK for my use case), then in each transaction you can loop over this sequence, deleting the records one by one, **presumably** without running into deadlocks (but perhaps taking locks on indices into consideration changes this conclusion?). In our case such a sequence of ids has a length of about 10, so it looks, at least to me, like "committing frequently". – John Donn Mar 29 '17 at 21:14
  • After several trials, I think I get your point - at least for the instance of DB2 on which the problem I describe arises. In my case, deleting "adjacent" (in some sense which is not entirely clear to me either), but different, records from two different transactions, even using **where primary_key=?**, will cause the second transaction to block until the first transaction commits. So the cure of "delete records one by one, ideally each deletion made in its own transaction" seems necessary, at least with our DB2 on z/OS instance, where it looks like locks of some type other than row-level locks are used; see the sketch after these comments. – John Donn Mar 31 '17 at 10:56
  • Is there any solution to this, or a best approach anyone has followed to avoid such a deadlock? – rinilnath Feb 26 '20 at 11:09
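For reference, here is a minimal sketch of the approach that emerged from the comments above, assuming plain JDBC and a primary key column named ID (a hypothetical name): read the ids with UR, then delete each uniquely identified row, committing after every delete so that the locks are released immediately:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class OneByOneDeleter {
    void delete(Connection conn, int specialValue) throws SQLException {
        conn.setAutoCommit(false);
        // Read the ids with UR (uncommitted read); this read takes no row locks.
        List<Integer> ids = new ArrayList<>();
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT ID FROM MY_TABLE WHERE SPECIAL_COLUMN = ? WITH UR")) {
            sel.setInt(1, specialValue);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) ids.add(rs.getInt(1));
            }
        }
        conn.commit(); // end the read-only unit of work
        // Delete each uniquely identified row in its own transaction.
        try (PreparedStatement del = conn.prepareStatement(
                "DELETE FROM MY_TABLE WHERE ID = ?")) {
            for (int id : ids) {
                del.setInt(1, id);
                del.executeUpdate();
                conn.commit(); // "commit frequently": one delete per transaction
            }
        }
    }
}
```

As noted in the comments, the trade-off is performance: one commit per row instead of a single set-oriented DELETE.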

0 Answers