
While testing JPA optimistic locking, I have a simple entity and a corresponding service/controller. The locking property looks like this:

@Version
@Column(name = "opt_lock", nullable = false)
private short optLock;

Testing scenario: I have one preexisting record. The entity contains an ID, optLock, and two data fields, valueA and valueB, both set to 1. The .save method call has a breakpoint on it and is wrapped in a try-catch catching all RuntimeExceptions; so is the whole transaction, which is initiated via TransactionTemplate.

On the controller I call a PUT method to update valueA and valueB to 2 and 5 respectively. The entity is read by ID, only valueA and valueB are updated, and the lock field is not touched. The request hangs on the .save breakpoint, which is configured to block a single thread only.

I repeat the same action, but with values 200 and 1. This thread is also blocked.

I unpause the second update and verify that the data were correctly updated to 200 and 1: the issued statement sends the correct values, and the optLock value used in the UPDATE statement is the one that existed in the DB for the preexisting record. Correct.

I unpause the thread related to the first update and see an attempt to update the record with the correct values 2 and 5, using the correct optLock, i.e. the one that existed in the DB before this action. At this moment, that ID-optLock combination no longer exists in the DB, so the record cannot be updated. And it wasn't. Correct!
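For clarity, the staleness check described above can be sketched in plain Java (no JPA; the class name and the in-memory "table" below are hypothetical stand-ins). Hibernate issues `UPDATE ... SET ..., opt_lock = opt_lock + 1 WHERE id = ? AND opt_lock = ?` and decides from the affected-row count whether the entity was stale:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the versioned-UPDATE check. A row is id -> {optLock, valueA, valueB}.
public class VersionedUpdateSketch {

    static final Map<Long, int[]> table = new HashMap<>();

    /** Emulates: UPDATE t SET valueA=?, valueB=?, opt_lock=opt_lock+1
     *  WHERE id=? AND opt_lock=?  -- returns the affected-row count. */
    static int update(long id, int valueA, int valueB, int expectedLock) {
        int[] row = table.get(id);
        if (row == null || row[0] != expectedLock) {
            return 0; // no row matched id + opt_lock: the update was stale
        }
        row[0] = expectedLock + 1; // bump the version
        row[1] = valueA;
        row[2] = valueB;
        return 1;
    }

    public static void main(String[] args) {
        table.put(1L, new int[]{0, 1, 1}); // preexisting record, opt_lock = 0

        // both transactions read opt_lock = 0; the second one commits first
        int second = update(1L, 200, 1, 0); // matches, bumps opt_lock to 1
        int first  = update(1L, 2, 5, 0);   // matches nothing: stale

        System.out.println(second + " " + first); // prints "1 0"
        if (first == 0) {
            // an affected-row count of 0 is exactly what should make
            // Hibernate raise OptimisticLockException
            System.out.println("stale");
        }
    }
}
```

The question is why Hibernate, seeing that 0-row result, stays silent.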

But ... no exception. Why?

In an attempt to debug it, I removed net.ttddyy.datasource-proxy from the project to check whether it was swallowing the exception (it isn't), I checked whether the scenario starts to produce lost updates if I remove the @Version annotation (it does), and I stepped through the Hibernate code, but I did not find any problem at all.

Any suggestions as to what could cause the missing exception?

UPDATE: I replaced the Spring machinery with a plain EntityManager, and the behavior is the same. I also ran it outside IntelliJ IDEA (which, as it turns out, can sometimes do very surprising things with your DB calls), and the behavior is still the same. More surprisingly (to me), if I introduce flush & clear calls and put a breakpoint on them, the thread suspended on the line after flush and clear somehow holds a lock on the persistence context (or something similar), and the second thread simply waits until the first one is done. So this cannot be used to simulate the lock exception, and I believe it also cannot cause the problem in real life if access to the persistence context is synchronized (perhaps in the default setting, which would be my current setting).

Martin Mucha
  • Are you committing the transactions in your tests? The error will only happen when you commit the 2nd transaction, in other words, when the threads exit the outermost `@Transactional` annotation. If you want, take a look at this test of mine that throws an OptimisticLockException: [https://github.com/augusto/jpa-workshop/blob/master/src/test/java/com/ig/training/hibernate/domainmodel/versioning/ConflictTest.java] – Augusto Sep 08 '20 at 20:58
  • Yes, I do commit the transactions. I'm "testing" this behavior in normal production code; no mocking or anything test-related is involved. Data are persisted into an Oracle DB after these actions. As said, if I just remove the `@Version` annotation, both updates overwrite each other and reach the DB with "lost update" behavior. With `@Version` it seems to work: the commit that would overwrite data, inducing lost-update behavior, is simply ignored. So from the DB-content perspective it seems to work, and the correct data are in the DB. Just no exception. – Martin Mucha Sep 09 '20 at 07:26
  • I've created a minimal working example, and it works: the exception is thrown there. I just used busy waiting instead of breakpoints. I ported that verification code back to the original app, and it does not work there. Same dependencies, same structure ... just different DBs and a LOT more code and dependencies. So since the code is OK, as verified by the minimal working example, I need to take the updated original app and start removing code & dependencies to find out what is doing this. – Martin Mucha Sep 09 '20 at 14:56
  • See my answer below; it's an Oracle driver bug. – Martin Mucha Sep 10 '20 at 13:06

1 Answer


Problem cause: In our case, the problem was caused by batch updates, configured as:

spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true

combined with a buggy Oracle driver:

<dependency>
  <groupId>com.oracle</groupId>
  <artifactId>ojdbc6</artifactId>
  <version>11.2.0.3</version>
</dependency>

which we kept using because some of our production systems still run on WebLogic, getting dependencies right for WebLogic is not always easy, and we opted not to touch things that work.

Because of a bug in this driver, the combination of JPA optimistic locking and batch updates still protects the data (the stale update is rejected) but does not throw an exception. If we comment out ...batch_size and ...batch_versioned_data, it seems to work. A colleague even saw it working with just ...batch_versioned_data commented out.
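Our understanding of the mechanism (hedged; the class and method names in this sketch are hypothetical): with JDBC batching, Hibernate can only detect a stale versioned UPDATE from the per-statement row counts in the `int[]` returned by `executeBatch()`. A driver that reports `Statement.SUCCESS_NO_INFO` (-2, "executed, row count unknown") instead of real counts makes the 0-row stale UPDATE indistinguishable from a successful one:

```java
import java.sql.Statement;

// Sketch of why a driver returning SUCCESS_NO_INFO from executeBatch()
// hides optimistic-lock failures when versioned data is batched.
public class BatchCountSketch {

    /** True if the batch result carries a usable affected-row count. */
    static boolean rowCountKnown(int updateCount) {
        return updateCount != Statement.SUCCESS_NO_INFO; // SUCCESS_NO_INFO == -2
    }

    public static void main(String[] args) {
        // A well-behaved driver reports 0 affected rows for the stale UPDATE,
        // which lets Hibernate raise OptimisticLockException.
        System.out.println(rowCountKnown(0)); // prints "true"

        // The buggy driver reports SUCCESS_NO_INFO for every batched statement,
        // so the stale UPDATE looks no different from a successful one.
        System.out.println(rowCountKnown(Statement.SUCCESS_NO_INFO)); // prints "false"
    }
}
```

This would also explain why disabling ...batch_versioned_data alone can be enough: versioned entities then fall back to individual statements, whose row counts are reliable.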

Links:

It does not seem to be an Oracle-specific bug; other DBs have it as well: Hibernate optimistic locking different behavior between Postgres and MariaDb

Optimistic locking batch update

Hibernate saves stale data with hibernate.jdbc.batch_versioned_data

Solution:

As @Vlad Mihalcea suggests in one of the links above, upgrading the Oracle driver to the following seems to help:

<dependency>
  <groupId>com.oracle.ojdbc</groupId>
  <artifactId>ojdbc8</artifactId>
  <version>19.3.0.0</version>
</dependency>
Martin Mucha