
I am trying to understand isolation levels and the various issues they involve, i.e. dirty read, non-repeatable read, phantom read, and lost update. I was reading about non-repeatable read.

I had also read about lost update.

What I am confused about is that both of these look very similar to me, i.e. in a non-repeatable read (NRR), Tx B updates the row between two reads of the same row by Tx A, so Tx A gets different results.

In the case of a lost update, Tx B overwrites changes committed by Tx A. So really, both of these seem quite similar and related to me. Is that correct?

My understanding is that if we use 'optimistic locking' it will prevent the issue of 'lost update' (based on some very good answers here).

My confusion: would using 'optimistic locking' also imply/mean that we eliminate the issue of 'non-repeatable read'? All of these questions pertain to a Java J2EE application with an Oracle database.

NOTE: to avoid distractions, I am not looking for details pertaining to dirty reads and phantom reads; my focus presently is entirely on non-repeatable reads and lost updates.

akila

2 Answers


Non-repeatable reads, lost updates, phantom reads, and dirty reads are all about transaction isolation levels rather than pessimistic/optimistic locking. I believe Oracle's default isolation level is read committed, meaning that only dirty reads are prevented.

Non-repeatable reads and lost updates are indeed somewhat related, as they may or may not occur at the same isolation level. Neither can be avoided by locking alone unless you set the correct isolation level, but you can use versioning (a column value that is checked against and incremented on every update) to at least detect the issue and take the necessary action, as sketched below.
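For illustration, here is a minimal sketch of that version-column approach using plain JDBC. The ACCOUNTS table and its BALANCE and VERSION columns are assumptions made up for the example, not anything from the question:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class VersionedUpdate {

    // Returns true if the update succeeded, false if another transaction
    // incremented the version since we read the row (lost update detected).
    static boolean updateBalance(Connection conn, long id,
                                 long newBalance, long expectedVersion)
            throws SQLException {
        String sql = "UPDATE accounts "
                   + "SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            // 0 rows updated means the version check failed: someone else
            // changed the row, so we detect the conflict instead of
            // silently overwriting their committed change.
            return ps.executeUpdate() == 1;
        }
    }
}
```

Note that this only detects the conflict; deciding what to do next (retry, re-read, notify the user) is up to the application.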

vlumi
  • I disagree: with the read committed isolation level (which is the default), we can avoid lost updates by using optimistic locking. However, I am still looking for an answer on non-repeatable reads; I need clarity on whether optimistic locking also deals with non-repeatable reads. – akila Nov 13 '19 at 03:19

The purpose of repeatable reads is to provide read-consistent data:

  • within a query, all the results should reflect the state of the data at a specific point in time.
  • within a transaction, the same query should return the same results even if it is repeated.

In Oracle, queries are read-consistent as of the moment the query started. If data changes during the query, the query reads the version of the data that existed at the start of the query. That version is available in the "UNDO".

Bottom line: Oracle by default has an isolation level of READ COMMITTED, which guarantees read-consistent data within a query, but not within a transaction.
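As a side note, a session that does need read-consistent data across several queries can request the SERIALIZABLE isolation level. Here is a minimal JDBC sketch under that assumption:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationDemo {

    static void runWithTransactionConsistency(Connection conn)
            throws SQLException {
        conn.setAutoCommit(false);
        // With SERIALIZABLE, every query in this transaction sees the data
        // as of the moment the transaction began, so repeating the same
        // query returns the same results.
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // ... run the queries that must see one consistent snapshot ...
        conn.commit();
    }
}
```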

You talk about Tx A and Tx B. In Oracle, a session that does not change any data does not have a transaction.

Assume the default isolation level of READ COMMITTED. Assume the J2EE application uses a connection pool and is stateless.

  • app thread A connects to session X and reads a row.
  • app thread B connects to session Y and updates the row with commit.
  • app thread A connects to session Z and reads the same row, seeing a different result.

Notice that there is nothing any database can do here. Even if all the sessions had the SERIALIZABLE isolation level, session Z has no idea what is going on in session X. Besides, thread A cannot leave a transaction hanging in session X when it disconnects.

To your question, notice that app thread A never changed any data. The human user behind app thread A queried the same data twice and saw two different results; that is all.

Now let's do an update:

  • app thread A connects to session X and reads a row.
  • app thread B connects to session Y and updates the row with commit.
  • app thread A connects to session Z and updates the same row with commit.

Here the same row had three different values, not two. The human user behind thread A saw the first value and changed it to the third value without ever seeing the second value! That is what we mean by a "lost update".

The idea behind optimistic locking is to notify the human user that, between the time they queried the data and the time they asked to update it, someone else changed the data first. They should look at the most recent values before confirming the update.
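As a hedged sketch of that flow in JDBC (the ACCOUNTS table and its columns are assumptions for illustration, and auto-commit is assumed to be off):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OptimisticUpdate {

    static void updateOrNotify(Connection conn, long id,
                               long newBalance, long expectedVersion)
            throws SQLException {
        String update = "UPDATE accounts "
                      + "SET balance = ?, version = version + 1 "
                      + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(update)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            if (ps.executeUpdate() == 1) {
                conn.commit();  // our version check passed, no lost update
                return;
            }
        }
        // Someone else changed the row first: re-read it so the user can
        // see the "second value" before confirming or abandoning the update.
        String query = "SELECT balance, version FROM accounts WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(query)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.printf(
                        "Row changed: current balance=%d (version %d)."
                        + " Please review before retrying.%n",
                        rs.getLong("balance"), rs.getLong("version"));
                }
            }
        }
    }
}
```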

To simplify:

  • "non-repeatable reads" happen if you query, then I update, then you query.
  • "lost updates" happen if you query, then I update, then you update. Notice that if you query the data again, you need to see the new value in order to decide what to do next.

Suggested reading: https://blogs.oracle.com/oraclemagazine/on-transaction-isolation-levels

Best regards, Stew Ashton

Stew Ashton
  • Thank you for the detailed explanation. After reading your answer, it leads me to a follow-up question (deviating from my question above): is NRR really a problem? If yes, why? Is there a real-world example/scenario to relate to where we can say NRR is a problem? Thank you – akila Nov 14 '19 at 10:57
  • To say "NRR is a problem" is another way of saying: "we need repeatable reads". I think it is more useful to say "we need read consistency" for the duration of a query or a transaction. Oracle provides read consistency that covers both NRR and phantom reads. The question really is, do we want read consistency that covers several queries within a transaction? I would say database or schema exports are a scenario where we need all the data to be consistent. – Stew Ashton Nov 14 '19 at 23:25