
I am new to databases and transactions. I come across a lot of different information and am currently trying to organize my thoughts. For context: I'm trying to test the current isolation level with SQL Server, but I'm not succeeding.

For this I use Spring transactions together with EclipseLink. I found some information about Spring transactions (https://www.marcobehler.com/guides/spring-transaction-management-transactional-in-depth). However, EclipseLink also has the concept of a unit of work, and now I don't know which mechanism is actually being used (https://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Unit_of_Work_Architecture).

What I am trying to test: I have an entity User (id, firstname, lastname). There is one entry in the table: id = 1, firstname = 'foo', lastname = 'bar'.

I have a service with a transactional method. As far as I know, the default isolation level for SQL Server is READ_COMMITTED.

@Transactional
public User updateUser() {
    User updateUser = new User(1, "new firstname", "new lastname");
    User updatedUser = userRepository.save(updateUser); // Spring Data delegates to em.merge here
    return updatedUser;
}
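
For reference, Spring's `@Transactional` also lets me state the isolation level explicitly instead of relying on the database default. A minimal sketch of that, assuming READ_COMMITTED (the SQL Server default mentioned above); the service class shown here is just an illustration of the annotation attribute, not code from my project:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Same update as above, but with the isolation level spelled out
    // instead of relying on the database default.
    @Transactional(isolation = Isolation.READ_COMMITTED)
    public User updateUser() {
        User updateUser = new User(1, "new firstname", "new lastname");
        return userRepository.save(updateUser);
    }
}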

So far so good. Here is what I do not understand now: I set a breakpoint on the return statement. At the same time I opened a second SQL client and executed the following SQL.

SET IMPLICIT_TRANSACTIONS ON
UPDATE
    [User]
SET
    lastname = 'complete new'
WHERE
    id = 1
COMMIT TRAN

What I would expect is that the SQL statement would wait for the Spring transaction to complete. BUT this is currently not the case: the SQL statement simply goes through.

At that point lastname in the table is "complete new". Then I resume from the breakpoint, and afterwards lastname is "new lastname". I cannot understand this behavior. Is this normal, or is it because of EclipseLink's unit of work?

  • Add an `em.flush()` to your repository `save` to see anything. The way it is now, the database doesn't have a chance to acquire a write lock since JPA delays SQL until the transaction commit. – Alexey Veleshko Dec 17 '21 at 22:04
  • Mhm, after the transaction Spring will execute a commit. This is a Spring Data repository, why should I need a flush? – JavaN00b Dec 18 '21 at 08:23
  • 1
    EclipseLink buffers SQL statements that need to be executed for as long as possible in order to reduce the lock time in the RDBMS. In your particular case the JDBC driver will receive the UPDATE statement when Spring Data JPA commits the transaction. You can verify it by enabling SQL logging in EclispeLink: https://stackoverflow.com/q/2374395/17695211 – Alexey Veleshko Dec 18 '21 at 10:07
  • After enabling SQL logging you'll see that there won't be any SQL debug output in the console at your breakpoint. It will appear **after** the return. If you really want to see the locking effect, you need to write the repository without Spring Data JPA, with a `@PersistenceContext`-injected `EntityManager`, and call `EntityManager.flush`, which will flush the SQL statement buffer of EclipseLink to the JDBC driver **before the breakpoint**. – Alexey Veleshko Dec 18 '21 at 10:11
  • Alternatively (and if you feel adventurous), you may try looking for a place in the EclipseLink source code where it executes the corresponding `PreparedStatement`, and set the breakpoint right after it. – Alexey Veleshko Dec 18 '21 at 10:18
  • That's exactly what I saw in the log, which is how I came across the unit of work. In concrete terms, does that mean that Spring Transaction does not actually open a transaction on the database at doBegin, but only when the statements are executed? Does that mean that the transaction handling between Hibernate and EclipseLink is different? – JavaN00b Dec 18 '21 at 15:49
  • Spring starts the transaction before your method and commits it (or rolls it back if there were exceptions) after your method. There is nothing wrong here, and it's the same in Hibernate. However, a started transaction doesn't block execution by itself. It is the SQL statements executed inside the transaction that lock **the corresponding rows** of the table **until the end of the transaction**. At the time of your breakpoint no SQL has reached the database yet, therefore no locks are acquired and your other transaction will go through swiftly. – Alexey Veleshko Dec 18 '21 at 21:14
  • I'm also not sure that two UPDATE statements will block anything. The second transaction should probably be a SELECT query. But then, if your database has MVCC enabled, it still won't block. Nowadays databases are very good at not blocking when it's not necessary! – Alexey Veleshko Dec 18 '21 at 21:21
  • EclipseLink's UnitOfWork predates JPA, and the JPA EntityManager interface just wraps one. They are just sessions/contexts meant to encapsulate work done in a transaction, and delay modifications, if possible, until the very end when they can group, order and even batch SQL statements more efficiently, and keep DB locking to a minimum. EclipseLink doesn't even try to get connections until it absolutely needs one, allowing connections to be reused and better shared, reducing the resources required, though containers usually get and tie up a connection anyway. – Chris Dec 20 '21 at 16:17
  • I have now found the right places to set breakpoints, and the write actions work as expected. If you want to update the same row, there is a lock on it and the second transaction has to wait. The transaction IDs can also be selected from the corresponding database table. However, how should it behave for a reading and a writing transaction? Transaction A reads a row and acquires a read lock. Transaction B changes the same row. How should the behavior be with READ COMMITTED transactions? It is not clear to me whether read locks are very short-lived or held for the whole transaction. – JavaN00b Jan 22 '22 at 21:13

1 Answer


As commented (and also 2, 3) by Alexey Veleshko:

EclipseLink buffers SQL statements that need to be executed for as long as possible in order to reduce the lock time in the RDBMS. In your particular case the JDBC driver will receive the UPDATE statement when Spring Data JPA commits the transaction. You can verify it by enabling SQL logging in EclipseLink: stackoverflow.com/q/2374395/17695211
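
A minimal sketch of enabling that logging through persistence-unit properties (assuming a plain `javax.persistence` bootstrap; the persistence unit name `userPU` is just a placeholder):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class EclipseLinkSqlLogging {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // Log every SQL statement EclipseLink hands to the JDBC driver
        props.put("eclipselink.logging.level.sql", "FINE");
        // Also log the bound parameter values
        props.put("eclipselink.logging.parameters", "true");

        // "userPU" is a hypothetical persistence unit name
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("userPU", props);
        // ... run the update scenario and watch the console for the UPDATE statement ...
        emf.close();
    }
}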

After enabling SQL logging you'll see that there won't be any SQL debug output in the console at your breakpoint. It will appear after the return. If you really want to see the locking effect, you need to write the repository without Spring Data JPA, with a @PersistenceContext-injected EntityManager, and call EntityManager.flush, which will flush the SQL statement buffer of EclipseLink to the JDBC driver before the breakpoint.
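
A minimal sketch of that approach (class and method names are illustrative, not from the question; the point is only that EntityManager.flush() pushes the buffered UPDATE to the driver while the transaction is still open, so a breakpoint after the flush is reached while the row lock is already held):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

// Hand-written repository used instead of the Spring Data interface.
@Repository
public class UserJpaRepository {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public User updateUser(User user) {
        User merged = em.merge(user);
        // Force EclipseLink to send the buffered UPDATE to the JDBC driver now,
        // instead of waiting for the commit at the end of the transaction.
        em.flush();
        // A breakpoint here is reached while the UPDATE's row lock is held,
        // so a concurrent UPDATE of the same row in another session will block.
        return merged;
    }
}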

Alternatively (and if you feel adventurous), you may try looking for a place in the EclipseLink source code where it executes the corresponding PreparedStatement, and set the breakpoint right after it.

  • Then I will test the whole thing in the new year with a SELECT in a read transaction and execute an em.flush. After the flush I will add the breakpoint. Maybe it will be clearer when I test a dirty-read case. – JavaN00b Dec 20 '21 at 10:07
  • UnitOfWorkImpl's commitToDatabase method gets called on pre-commit JTA events as well as on flush, so it might be a good spot to start with a breakpoint. You likely don't need the actual SQL statement execution - any point after that but before Spring commits the transaction will do (so after the writeAllObjectsWithChangeSet call). – Chris Dec 20 '21 at 16:04
  • Thanks Chris, I will try that. I would like to test the effects and differences if my read transactions were running under READ_COMMITTED_SNAPSHOT. As far as I know, the default for SQL Server is READ_COMMITTED. However, I haven't been able to notice any real difference so far. – JavaN00b Dec 21 '21 at 09:58