They prevent deadlocks by wrapping pretty much everything in transactions and by taking care that the code leaves no room for deadlocks in the first place (really understanding the data model, writing out the process flows). InnoDB deadlocks rather easily with its current way of page locking.
Also, I think they do not use InnoDB for any serious code, since it deadlocks even on page locks. The larger commercial databases, such as Oracle and MS SQL Server, handle this with more granular page and row locks. Oracle, for example, can also be tuned by controlling the page fill and empty percentages, which reduces the chance of page locks blocking transactions (not deadlocking, just blocking for a while).
As for Hibernate, or any other code: once you run into a deadlock situation, the code usually needs to be improved. The idea is that you are in control of restarting the transaction (which in some cases gives you a deadlock loop: deadlock, restart, deadlock again, and so on). This kind of code can be trivial:
- A deadlock exception occurs;
- Catch the deadlock exception only in the initial calling function;
- Call the initial calling function again with the same parameters.
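The three steps above can be sketched in plain Java. This is a minimal, self-contained illustration; `DeadlockException`, `saveMyData`, and `callerAction` are hypothetical names, and the simulated "deadlocks twice, then succeeds" behavior stands in for the real database:

```java
// Hypothetical exception standing in for the one your data access layer throws.
class DeadlockException extends RuntimeException {}

public class RetryDemo {
    static int attempts = 0;

    // Simulated transactional work: deadlocks twice, then succeeds.
    static String saveMyData(String payload) {
        attempts++;
        if (attempts < 3) throw new DeadlockException();
        return "saved:" + payload;
    }

    // The initial calling function catches the deadlock exception and calls
    // itself again with the same parameters. The bounded retry count keeps
    // a deadlock loop from running forever.
    static String callerAction(String payload, int retriesLeft) {
        try {
            return saveMyData(payload);
        } catch (DeadlockException e) {
            if (retriesLeft == 0) throw e; // give up, e.g. show a retry button
            return callerAction(payload, retriesLeft - 1);
        }
    }

    public static void main(String[] args) {
        System.out.println(callerAction("x", 5)); // prints saved:x
    }
}
```

The important property is that the retry happens in the initial caller, outside the transactional work, so each attempt runs as a fresh transaction.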
InnoDB deadlocks under high transaction volume even on single-record transactions. This should not happen and is a MySQL (and derivatives) issue. If you are fighting this technical issue, it is a lottery: you can improve your odds by relaxing ACID compliance or by changing the technical/logical table layout, for example through partitioning. Partitioning reduces the time the DBMS spends updating the indexes, which shortens the transaction and thus reduces the chance of hitting a deadlock through record lock escalation. Reducing the page size has a similar effect. However, this only reduces the chance: you will see fewer deadlocks, but they still occur.
There are two possible solutions to avoid the deadlock completely:
- Rewrite the code so that a singleton handles the table in which the deadlock occurs. The code has to be partitioned so that everything executing an action on this table runs in a single transaction, while all other work runs in separate transactions. This breaks any transactional design, and if another problem occurs it is a nightmare to roll back, since you now have at least two or maybe even three transactions.
- Switch the DBMS engine: if you use Hibernate this is fairly straightforward, except that you have to learn to handle the internals of the new DBMS.
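The first option above (a singleton handling the hot table) can be sketched as a single-threaded worker through which all writes to that table are funneled, so they run strictly one after another and can no longer deadlock each other. `TableWriter` and `write` are illustrative names, not from any framework:

```java
import java.util.concurrent.*;

// Sketch of the singleton approach: every write to the deadlock-prone table
// is submitted to one single-threaded executor, serializing all access.
public class TableWriter {
    private static final TableWriter INSTANCE = new TableWriter();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    private TableWriter() {}

    public static TableWriter getInstance() { return INSTANCE; }

    // Each unit of work runs in its own short transaction on the worker
    // thread; the caller blocks until it completes or rethrows its failure.
    public <T> T write(Callable<T> txWork) throws Exception {
        Future<T> f = worker.submit(txWork);
        return f.get();
    }

    public void shutdown() { worker.shutdown(); }

    public static void main(String[] args) throws Exception {
        TableWriter w = TableWriter.getInstance();
        String r = w.write(() -> "row inserted");
        System.out.println(r); // prints row inserted
        w.shutdown();
    }
}
```

Note the cost mentioned above: the work inside `write` is now its own transaction, separate from whatever transaction the caller is in, so a failure elsewhere cannot roll it back.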
Handling the deadlocks with Hibernate/Spring/JPA @Transactional:
Use a singleton design, preferably with a one-directional flow:
Function A calls functions B, C, and D, but B, C, and D return no data to A except a confirmation or a key (if possible).
Hibernate rolls back the changes in its cache when an exception occurs, so the main worry is in the calling functions. As long as they are stateless, the most important thing is that the transaction is started in function A:
A user calls the controller with an action, and this action calls function A:
public String someControllerAction(...) {
    try {
        // ... some work; keep it minimal, or it has to be undone in the catch
        saveMyData(...);
    } catch (TransactionException exp) {
        // ... undo some work
        /* This is a loop, so it can get stuck. Keep a loop counter or
           another check to prevent getting stuck forever. You can also
           throw this back to the user with a **please retry button** */
        return someControllerAction(...);
    }
    return ...;
}
// Starting the transaction here so that it can be restarted.
// Note: with Spring's proxy-based AOP, @Transactional only takes effect on
// public methods called from another bean, so this method belongs in a
// separate service class rather than being private.
@Transactional
public void saveMyData(...) throws TransactionException {
    try {
        // ... some work
    } catch (TransactionException exp) {
        // ... some rollback work if required
        throw new TransactionException();
    }
}
The exception thrown by the data access layer has to be caught outside the @Transactional method, so that all the changes are rolled back before the transaction is restarted.