
As stated here: http://dev.mysql.com/doc/refman/5.0/en/innodb-deadlocks.html:

... Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.

Also noted here https://stackoverflow.com/a/2596101/922584:

If you are using InnoDB or any row-level transactional RDBMS, then it is possible that any write transaction can cause a deadlock, even in perfectly normal situations.

From the Hibernate documentation it seems to me that it is not prepared to handle deadlocks in any way. It seems these transactions simply end up with a TransactionRollbackException and that is it. To be precise, I am using @Transactional annotations.

If this were true, all those critical systems would never be able to use Hibernate. How do all those bank/mobile operator systems deal with this?


2 Answers


By wrapping pretty much everything in transactions and taking care that the code is free of deadlock-prone constructs (really understanding the data model, writing out process flows), they prevent deadlocks. InnoDB tends to deadlock quite easily with its current way of page locking.

Also, I think they do not use InnoDB for any serious code, since it deadlocks even on page locks. The larger commercial databases like Oracle and MS SQL Server handle this with page locks and row locks in a more granular fashion. Oracle, for example, can also be tuned better, for instance by controlling the page fill and empty percentages, which reduces the chance of page locks blocking transactions (not deadlocking, just blocking for a while).

As for Hibernate or any other code: once you run into a deadlock situation, the code usually needs to be improved. The idea is that you are in control of restarting the transaction (which can in some cases give you a deadlock loop: deadlock, restart, deadlock again, etc.). This kind of code can be trivial:

  • A deadlock exception occurs;
  • Catch the deadlock exception only in the initial calling function;
  • Call the initial calling function again with the same parameters (a sketch of this pattern follows below).
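
A minimal generic sketch of this pattern (an addition, not from the answer itself): it assumes Spring's DataAccessException hierarchy, where a MySQL/InnoDB deadlock typically surfaces as DeadlockLoserDataAccessException; the helper name retryOnDeadlock and the attempt limit are illustrative.

import java.util.function.Supplier;

import org.springframework.dao.DeadlockLoserDataAccessException;

public final class DeadlockRetry {

    private static final int MAX_ATTEMPTS = 3; // illustrative bound so the loop cannot run forever

    // Re-issues the whole unit of work when the DBMS reports a deadlock.
    // The Supplier must start (and commit) its own transaction on each call.
    public static <T> T retryOnDeadlock(Supplier<T> transactionalWork) {
        for (int attempt = 1; ; attempt++) {
            try {
                return transactionalWork.get();
            } catch (DeadlockLoserDataAccessException e) {
                if (attempt >= MAX_ATTEMPTS) {
                    throw e; // give up and let the initial caller decide
                }
            }
        }
    }
}

Used, for example, as retryOnDeadlock(() -> orderService.save(order)) from the initial calling function (orderService is an illustrative name), so the deeper layers stay free of retry logic.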

InnoDB deadlocks under high transaction volume even on single-record transactions. This should not happen and is a MySQL (and derivatives) issue. If you are fighting this technical issue, it is a lottery: you can improve your chances by relaxing the ACID compliance or by changing the technical/logical table layout, for example by partitioning. Partitioning will reduce the time spent by the DBMS on updating the indexes, thus shortening the transaction time and reducing the chance of hitting a deadlock due to record lock escalation. Reducing the page size has a similar effect. However, this only reduces the chance (so you see fewer deadlocks, but they still occur).

There are two possible solutions to avoid the deadlock completely:

  • Rewrite the code so that a singleton handles the table in which the deadlock occurs (a sketch follows this list). The code has to be partitioned so that the code executing the action on this table is in a single transaction and all the other code is in other transactions. This breaks any transactional design, and if another issue occurs it is a nightmare to roll back the transaction (which is now at least two or maybe even three transactions).
  • Switch the DBMS engine: if you use Hibernate this is pretty straightforward, except that you have to learn to handle the internals of the new DBMS.
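
A rough sketch of the first option (an addition, not the answer's code): all writes to the deadlock-prone table are funneled through one single-threaded singleton, so no two transactions contend on that table concurrently. The class name HotTableWriter is illustrative, and each submitted task is expected to run its own short transaction.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class HotTableWriter {

    private static final HotTableWriter INSTANCE = new HotTableWriter();

    // One worker thread serializes every write against the hot table.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    private HotTableWriter() { }

    public static HotTableWriter getInstance() {
        return INSTANCE;
    }

    // The task must open and commit its own transaction; everything else
    // the caller does stays in separate transactions, as described above.
    public Future<?> submitWrite(Runnable writeInOwnTransaction) {
        return writer.submit(writeInOwnTransaction);
    }
}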

Handling the deadlocks with Hibernate/Spring/JPA @Transactional:

Use a singleton design, preferably with a one-directional flow:

Function A calls functions B, C, D; however, B, C and D do not return any data to A except a confirmation or a key (if possible).

Hibernate rolls back the changes in its cache in case of an exception, so the main worry is in the calling functions. As long as they are stateless, the most important thing is that the transaction is started in function A:

Some user calls the controller with an action. This action calls function A:

public String someControllerAction(...) {
  try {
     // ... some work, should be minimal, else we have to undo it in the catch
     saveMyData(...);
  } catch (TransactionException exp) {
     // ... undo some work
     /* This is a loop, so it can get stuck. You can keep a loop
        counter or another check to prevent getting stuck forever.
        You can also throw this back to the user with a "please retry"
        button. */
     return someControllerAction(...);
  }
  ...
}

// Starting the transaction here so that it can be restarted. Note that with
// Spring's proxy-based AOP the method must be public and called through the
// proxy (i.e., from another bean); on a private, self-invoked method
// @Transactional is silently ignored.
@Transactional
public void saveMyData(...) throws TransactionException {
   try {
     // ... some work
   } catch (TransactionException exp) {
     // ... some rollback work if required
     throw exp;
   }
}

The exception thrown by the data access layer has to be caught outside the @Transactional method, so that the whole transaction is rolled back before it is restarted.
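
A minimal sketch of that separation (an addition, not from the answer itself): with Spring's default proxy-based @Transactional, a call from within the same class bypasses the proxy, so the retrying caller and the transactional method are split into two beans here. The class names OrderService and OrderController, the method names, and the retry limit of 3 are all illustrative.

// OrderService.java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    // Public and invoked through the Spring proxy, so every attempt
    // runs in (and rolls back) its own fresh transaction.
    @Transactional
    public void saveOrder(long orderId) {
        // ... persist via the Hibernate Session / JPA EntityManager
    }
}

// OrderController.java
import org.springframework.stereotype.Controller;
import org.springframework.transaction.TransactionException;

@Controller
public class OrderController {

    private final OrderService orderService;

    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    public String save(long orderId) {
        for (int attempt = 1; attempt <= 3; attempt++) { // bounded, so no endless loop
            try {
                orderService.saveOrder(orderId);
                return "ok";
            } catch (TransactionException exp) {
                // the transaction is already rolled back; loop and re-issue it
            }
        }
        return "please-retry"; // give up and hand the decision back to the user
    }
}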

Norbert
  • Thanks for the explanation. As for Hibernate: when using the `@Transactional` annotation, it helps to create nicely readable code and avoids writing duplicate code for creating transactions. I can imagine this restarting of transactions could be part of this annotation. Is it feasible? It just surprises me that this is not generally mentioned nor discussed; I run into these deadlocks quite often even with very simple transactions. – Vojtěch Jan 01 '16 at 11:36
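
(A side note on the comment's question, not part of the original thread: annotation-driven retries are feasible with the separate Spring Retry project. A minimal sketch, assuming spring-retry is on the classpath and @EnableRetry is declared on a configuration class; the service name and parameters are illustrative.)

import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PaymentService {

    // The retry aspect must wrap the transaction aspect so that every
    // attempt runs in a fresh transaction.
    @Retryable(value = DeadlockLoserDataAccessException.class,
               maxAttempts = 3,
               backoff = @Backoff(delay = 50))
    @Transactional
    public void debitAccount(long accountId, long amountCents) {
        // ... the deadlock-prone write
    }
}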

My company has used Hibernate extensively in high-transaction, high-concurrency systems, and I can say that not only does it not handle deadlocks, there is also no way to write code that excludes them. Deadlocks WILL happen in any serious, complex software system, regardless of the database engine or anything else. You can certainly reduce the chances, but they will still happen, and your code must be ready.

The biggest problem with restarting a transaction in Hibernate is that after a deadlock the POJOs end up in an indeterminate dirty state (some POJOs were saved to the DB, others not). To retry the transaction you must rebuild all objects from the database, which greatly complicates the code: basically, it means you cannot work with POJOs as with true POJOs. Instead, the objects become one-time, special-use objects needed to interact with the DB (which I think goes against the original concept of JPA).

So, to make a long story short, we ended up modifying Hibernate and adding a special mechanism to "reset" its cache after a failed transaction, so that the transaction can be retried. This allowed us, among other things, to retry transactions inside a method (instead of forcing a DB transaction to span an entire Java method call). In short, I agree with the answer above that Hibernate (or, I guess, any JPA implementation) is not well suited to mission-critical, high-performance applications.
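
The answer's "reset" mechanism is a custom modification of Hibernate, but a rough approximation with stock Hibernate APIs (an assumption, not the answer's actual code) is to roll back and re-run the work against a brand-new Session, so the entities are rebuilt from the database on each attempt:

import java.util.function.Consumer;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public final class RetryWithCacheReset {

    // Runs the unit of work in its own transaction; on failure it rolls back
    // and retries with a fresh Session, so no POJO is left in the
    // indeterminate dirty state described above.
    public static void runWithRetry(SessionFactory factory,
                                    Consumer<Session> work,
                                    int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            Session session = factory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                work.accept(session); // must (re)load its entities from this session
                tx.commit();
                return;
            } catch (RuntimeException e) {
                tx.rollback(); // discard the failed attempt entirely
                if (attempt >= maxAttempts) {
                    throw e;
                }
            } finally {
                session.close();
            }
        }
    }
}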