This post is about a persistence issue with JPA. The JPA provider is Oracle TopLink as shipped with WebLogic 12c, which is built on EclipseLink.
The user makes 'n' interactions/transactions and the app writes each transaction to the DB. Under heavy load, the app hits duplicate key exceptions while writing the transactions.
The first transaction is written successfully to the DB, but a subsequent transaction is sometimes rejected with a duplicate key exception.
As I said, the app uses JPA 2.0, in which the shared cache is enabled by default, and I suspect this has something to do with the shared cache.
I say this because the same app works fine on WebLogic 10, which uses JPA 1.0 and has no concept of a shared cache.
Now back to the issue. Each entity that takes part in the insert transaction is uniquely identified by an embedded primary key class with overridden equals()/hashCode() (please see below for the class definitions).
@Entity
public class CallerEntity implements Serializable {

    @EmbeddedId
    private CallerEntityPK pk;

    //@Column attributes
}
@Embeddable
public class CallerEntityPK implements Serializable {

    @Column(name="SESSION_ID")
    private String sessionId; // FIRST_USER_SESSION, SECOND_USER_SESSION

    @Column(name="TRANSACTION_NBR")
    private String transNo; // 01, 02, etc.

    // Getters and setters

    @Override
    public boolean equals(Object o) {
        if (o == this) {
            return true;
        }
        if (!(o instanceof CallerEntityPK)) {
            return false;
        }
        CallerEntityPK other = (CallerEntityPK) o;
        return this.sessionId.equals(other.sessionId)
            && this.transNo.equals(other.transNo);
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int hash = 17;
        hash = hash * prime + this.sessionId.hashCode();
        hash = hash * prime + this.transNo.hashCode();
        return hash;
    }
}
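To sanity-check my own hypothesis, the equals()/hashCode() pair can be exercised outside the container. Below is a standalone sketch of that check (the JPA annotations are stripped and the nested class name is mine, so it compiles on its own); it shows that two keys differing only in transNo never compare equal and coexist in a hash-based collection:

```java
import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;

// Plain copy of the PK logic, stripped of JPA annotations so it runs standalone.
public class CallerEntityPKTest {

    static final class PK implements Serializable {
        final String sessionId;
        final String transNo;

        PK(String sessionId, String transNo) {
            this.sessionId = sessionId;
            this.transNo = transNo;
        }

        @Override
        public boolean equals(Object o) {
            if (o == this) return true;
            if (!(o instanceof PK)) return false;
            PK other = (PK) o;
            return sessionId.equals(other.sessionId)
                && transNo.equals(other.transNo);
        }

        @Override
        public int hashCode() {
            final int prime = 31;
            int hash = 17;
            hash = hash * prime + sessionId.hashCode();
            hash = hash * prime + transNo.hashCode();
            return hash;
        }
    }

    public static void main(String[] args) {
        PK first  = new PK("FIRST_USER_SESSION", "01");
        PK second = new PK("FIRST_USER_SESSION", "02");

        // Keys with different transaction numbers must never compare equal.
        System.out.println(first.equals(second));   // false

        Set<PK> cache = new HashSet<>();
        cache.add(first);
        System.out.println(cache.contains(second)); // false
        System.out.println(cache.add(second));      // true: both keys coexist
    }
}
```

If this test passes, the equals()/hashCode() implementation itself is not confusing the two keys, which would point the suspicion elsewhere (for example, at how the cache key is built or at entity state leaking between transactions).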
The primary key is the combination of the session id (FIRST_USER_SESSION) and the transaction number (01 for the first insert, 02 for the second insert, and so on). For example:

1st transaction pk: FIRST_USER_SESSION + 01
2nd transaction pk: FIRST_USER_SESSION + 02
Before the first insert transaction is written (the entity with pk FIRST_USER_SESSION + 01), the L2 cache is checked; since the entity is not in the cache, it is persisted successfully to the DB.
After the first transaction is written, the L2 cache is updated (the entity with the FIRST_USER_SESSION + 01 key is cached).
Now for the second insert transaction (the entity with key FIRST_USER_SESSION + 02), the L2 cache is checked before persisting, and my guess is that the entity for the second transaction is considered identical to the one already in the L2 cache. Even though the pk is different (FIRST_USER_SESSION + 02), I think the framework identifies it as a duplicate object (based on the overridden equals() and hashCode()).
As a result, the same duplicate object is attempted for insert and the duplicate key exception is thrown.
Question 1) Is my understanding correct? The reason I'm asking is that every entity has a unique key and this happens only under high volume; maybe some other transactions (entities) are producing the same hash code, making the objects appear identical.
Question 2)
If this is the case, can I make the entity use an isolated cache, always refresh, and expire instantly (as seen in the code below)?
I just want the cache disabled for this entity. Please let me know your comments.
@Entity
@Table(name="T_CALLER_TRANS")
@Cache(isolation=CacheIsolationType.ISOLATED, expiry=0, alwaysRefresh=true)
public class CallerEntity implements Serializable {
}
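For comparison, besides the EclipseLink-specific @Cache annotation above, JPA 2.0 itself offers a portable way to opt a single entity out of the shared cache: the standard @Cacheable annotation combined with `<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>` in persistence.xml. A sketch of the annotation side (not a drop-in file, fields omitted):

```java
import java.io.Serializable;
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "T_CALLER_TRANS")
@Cacheable(false) // JPA 2.0 portable opt-out from the shared (L2) cache
public class CallerEntity implements Serializable {
    // fields as before
}
```

With ENABLE_SELECTIVE, only entities explicitly marked @Cacheable(true) are cached, so this entity would bypass the shared cache entirely without any provider-specific annotation.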
Question 3) After I make this change I need to load test the application. Users interact with the app via MQ and HTTP. I need to put enough messages on the MQ
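For the MQ leg of that load test, a small standalone JMS driver can pump a burst of messages onto the input queue. This is only a sketch under assumptions: the JNDI names (`jms/ConnectionFactory`, `jms/CallerRequestQueue`) and the payload format are invented placeholders, and it needs the WebLogic client jars plus a running server, so it is not runnable as-is:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Floods the app's input queue with test messages. JNDI names and the
// payload format below are assumptions -- substitute the real WebLogic JMS
// resources and the message body the app expects.
public class MqLoadTest {

    public static void main(String[] args) throws Exception {
        int messageCount = 1000; // size the burst to reproduce the heavy-load window
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/CallerRequestQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            for (int i = 1; i <= messageCount; i++) {
                // Each message should drive one insert transaction in the app.
                TextMessage msg = session.createTextMessage(
                        "FIRST_USER_SESSION:" + String.format("%02d", i));
                producer.send(msg);
            }
        } finally {
            conn.close();
        }
    }
}
```

Running several instances of this driver in parallel (each with its own session id in the payload) would approximate the concurrent-user pattern under which the duplicate key exception appears.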