
I have a server function which is called by many clients, often at exactly the same time. The server function does the following:

  • get param1 from client
  • create object x (new objectx(param1))
  • check if object x exists in db (jpa select query)
  • if not exists add object x (jpa store entity)
  • add y (jpa store entity)

This goes wrong when two or more clients run the function at the same time: multiple x objects get added to the database.
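In code, the pattern looks roughly like this (a sketch only; `ObjectX`, `ObjectY`, and the query are placeholder reconstructions, not code from the actual application):

```java
import java.util.List;
import javax.persistence.EntityManager;

public class RequestHandler {

    public void handleRequest(EntityManager em, String param1) {
        List<ObjectX> existing = em
                .createQuery("SELECT o FROM ObjectX o WHERE o.param1 = :p", ObjectX.class)
                .setParameter("p", param1)
                .getResultList();

        if (existing.isEmpty()) {            // two threads can both see "empty" here...
            em.persist(new ObjectX(param1)); // ...and both insert, producing duplicates
        }
        em.persist(new ObjectY(param1));     // "add y"
    }
}
```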

I simply solved this by creating a singleton manager class with a synchronized method which does the above.

This works nicely, because now the function can only be executed by one client at a time. (It would break with two servers, but that isn't the case yet.)
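Roughly, the workaround looks like this (again a sketch; the class and method names are illustrative):

```java
import javax.persistence.EntityManager;

public class ObjectXManager {

    private static final ObjectXManager INSTANCE = new ObjectXManager();

    private ObjectXManager() {}

    public static ObjectXManager getInstance() {
        return INSTANCE;
    }

    // Only one thread at a time can execute this, so the check-then-insert
    // can no longer interleave, but only within this single JVM.
    public synchronized void createIfAbsent(EntityManager em, String param1) {
        // same select / persist logic as in the sketch above
    }
}
```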

But I was wondering: is there a better way to solve this problem with JPA?

TinusSky
  • 1,657
  • 5
  • 24
  • 32

3 Answers


Yes, this whole operation should be executed inside a transaction, such as provided by JTA or Spring's @Transactional. If the transaction is isolated at the proper level (I think REPEATABLE_READ for this case), the underlying persistence system will ensure that colliding writes don't occur, either by blocking one transaction until the other is complete (essentially what synchronized does in Java), or by stopping and rolling back the second transaction when it detects the conflict (which you can then retry).
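A minimal sketch of what this could look like with Spring (the service name is mine; the entity names are the placeholders from the question):

```java
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ObjectXService {

    @PersistenceContext
    private EntityManager em;

    // SERIALIZABLE is the strictest level and rules out the duplicate
    // insert; a weaker level may suffice (see the comments below).
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void createIfAbsent(String param1) {
        boolean exists = !em
                .createQuery("SELECT o FROM ObjectX o WHERE o.param1 = :p", ObjectX.class)
                .setParameter("p", param1)
                .getResultList()
                .isEmpty();
        if (!exists) {
            em.persist(new ObjectX(param1));
        }
        em.persist(new ObjectY(param1));
    }
}
```

Under SERIALIZABLE, the database either blocks the second transaction until the first commits or aborts it with a serialization failure, which the caller can catch and retry.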

chrylis -cautiouslyoptimistic-
  • 75,269
  • 21
  • 115
  • 152
  • 1
  • @AndreiI That's exactly what the different transaction-isolation levels are for. `READ_COMMITTED` won't prevent the scenario he's describing, but `SERIALIZABLE` absolutely will, and I *think* (but am not sure) that `REPEATABLE_READ` will as well. Additionally, given the scenario he's described, it sounds like `param1` is a primary key or at least unique, in which case even `READ_COMMITTED` will be good enough because the database will complain on the overlapping insert. – chrylis -cautiouslyoptimistic- Feb 13 '14 at 14:27
  • `SERIALIZABLE` will help, but he definitely has no constraint in the DB, as he says there that he gets (without synchronization) two entries in the DB. – V G Feb 13 '14 at 14:33
  • @AndreiI I didn't read that to mean that that's actually occurring, but you're definitely right that if this is a non-key column with no database constraint, that's the first problem to fix right there. – chrylis -cautiouslyoptimistic- Feb 13 '14 at 14:35
  • Yes, and you are right that with `READ_COMMITTED` + the constraint it would work without problems. – V G Feb 13 '14 at 14:37
  • 1
  • I am not entirely sure, however; I think `REPEATABLE_READ` is not sufficient, as it does not protect against phantom reads. From the following post, I would think that we are dealing with such a phantom read: https://stackoverflow.com/questions/11043712/what-is-the-difference-between-non-repeatable-read-and-phantom-read#:~:text=Non%2Drepeatable%20reads%20are%20when,or%20DELETES%20from%20another%20transaction. – Martin Feb 11 '21 at 07:52

The only foolproof solution I can think of is to create database-level constraints. This is because:

  1. The more you try to enforce data-consistency checks in Java, the more you will strangle the code. For example, the set of actions that can change the data in a particular table may be fixed at the beginning, but it will grow as the application expands, and it gets more and more cumbersome to keep track of all the checks in the individual action methods. And every time a new action is added that requires a change to the way the consistency check is made, you have to verify that the change is cascaded properly to all levels.
  2. Using synchronized will hurt application performance, as users have to wait in line while the action is performed by another user, which I am pretty sure will not be an acceptable situation in any case.
  3. It is almost impossible to keep this check at the application level if there are multiple instances of the application running, either on the same server or on multiple servers, or, in many situations, when two entirely different applications update the same database.

What you need is to create constraints on column values: for simple data validation, for example NOT NULL or UNIQUE; for complex situations, what I have done is create a trigger on the database table which checks the values being inserted or updated and, if a condition is violated, raises SIGNAL SQLSTATE with a message giving the reason for the failure. You can handle the resulting exception in your Java code to show an appropriate message to the user. For this you will need a little knowledge of PL/SQL, and if you don't have it, it is not that difficult to learn and implement. Have a look at this question here with a somewhat similar context.
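For the simple unique-value case in the question, the constraint can be declared on the JPA entity itself. A sketch, assuming `param1` is the column that must be unique:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class ObjectX {

    @Id
    @GeneratedValue
    private Long id;

    // Emits a UNIQUE constraint in the generated DDL, so the database
    // itself rejects the second concurrent insert of the same value.
    @Column(unique = true, nullable = false)
    private String param1;

    protected ObjectX() {} // no-arg constructor required by JPA

    public ObjectX(String param1) {
        this.param1 = param1;
    }
}
```

The losing insert then fails with a constraint-violation exception (Hibernate, for example, wraps it in `org.hibernate.exception.ConstraintViolationException`), which you can catch and translate into a user-facing message or a retry.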

Now, there are certain situations where you cannot directly create database-level constraints because the database does not support them, e.g. almost all NoSQL databases, or MySQL tables using the MyISAM engine. In those situations, all you can do is use some form of write/commit locking in common application code shared between the multiple instances of the application. I have not tried to figure out a proper solution for these situations myself, as they rarely occur.

Yogesh
  • 4,546
  • 2
  • 32
  • 41

Someone here answered a similar question (Double Checked Locking in Singleton); the solution is the double-checked-locking singleton. A small tweak I would make: instead of creating the object x up front, I would search for a row with value == param1 (if feasible) in the database.
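Adapted to this question, the double-checked pattern might look like the sketch below (names are placeholders; note that, like the synchronized manager in the question, this only serializes threads within a single JVM):

```java
import javax.persistence.EntityManager;

public class ObjectXCreator {

    private final Object lock = new Object();

    public void createIfAbsent(EntityManager em, String param1) {
        if (findByParam1(em, param1) == null) {         // first, cheap check
            synchronized (lock) {
                if (findByParam1(em, param1) == null) { // re-check under the lock
                    em.persist(new ObjectX(param1));
                }
            }
        }
    }

    private ObjectX findByParam1(EntityManager em, String param1) {
        return em.createQuery("SELECT o FROM ObjectX o WHERE o.param1 = :p", ObjectX.class)
                 .setParameter("p", param1)
                 .getResultStream()
                 .findFirst()
                 .orElse(null);
    }
}
```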

Smutje
  • 17,733
  • 4
  • 24
  • 41