I have the following @Transactional method that's managed by a HibernateTransactionManager:

@Transactional(rollbackFor = NoActiveTraceException.class)
public void insertTraceEvent(TraceEvent traceEvent) throws NoActiveTraceException {
    Trace trace = traceDao.findActiveForNumber(traceEvent.getSourceParty());
    if (trace == null) {
        throw new NoActiveTraceException(traceEvent.getSourceParty()); // custom
    } else {
        traceEvent.setTrace(trace);
        trace.getTraceEvents().add(traceEvent); // returns set to which I can add
        // there is a Cascade.All on the @OneToMany set

        traceDao.create(traceEvent);
    }
}

// ...in traceDao
@Override
public Long create(TraceEvent traceEvent) {
    getCurrentSession().persist(traceEvent);
    return traceEvent.getTraceEventId();
}

Trace and TraceEvent are entities managed by Hibernate.

When I tested this code during development, I never had any problems. In production though, from time to time, it'll throw

org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session: [myobject#2664] 
at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:180) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:136) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:208) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:151) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:78) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:843) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:818) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:822) ~[hibernate-core-4.1.9.Final.jar:4.1.9.Final]
at com.package.model.dao.impl.TraceDaoHibernateImpl.create(TraceDaoHibernateImpl.java:116) ~[TraceDaoHibernateImpl.class:na]

when trying to persist it. I can't think of any difference between prod and dev that would cause this. This procedure is independent of everything else (and runs within a transaction).

I understand that once I call .add(traceEvent), it dirties the PersistentSet (and might persist the traceEvent when the session flushes). And then when I call create(traceEvent), it might try to persist it as well, thereby adding the same object to the Hibernate Session twice.

Why is the exception occurring and why is it happening at random intervals like this?

Edit: Entities

@Entity
@Table(name = "TRACE_EVENT")
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class TraceEvent {
    @Id
    @GenericGenerator(name = "generator", strategy = "increment")
    @GeneratedValue(generator = "generator")
    @Column(name = "trace_event_id")
    private Long traceEventId;

    @Column(nullable = false)
    private Calendar transmissionDate;

    @Column(nullable = false)
    private String sourceParty; 

    @Column
    private String destinationParty;

    @ManyToOne(optional = false)
    @JoinColumn(name = "trace_id")
    private Trace trace;

And

@Entity
@Table(name = "TRACE")
public class Trace {
    @Id
    @GenericGenerator(name = "generator", strategy = "increment")
    @GeneratedValue(generator = "generator")
    @Column(name = "trace_id")
    private Long traceId;

    @Column(nullable = false)
    private String number;

    @Column(nullable = false)
    private Calendar startDate;

    @Column
    private Calendar endDate;   

    @Column(nullable = false)
    private Boolean isActive = false; // default value of false

    // USED TO BE CASCADE.ALL
    @OneToMany(fetch = FetchType.LAZY, cascade = {CascadeType.REMOVE}, mappedBy = "trace")
    private Set<TraceEvent> traceEvents = new HashSet<>();  

    @ManyToOne
    private Account account;
Sotirios Delimanolis
  • Have you checked [Hibernate Error: org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session](http://stackoverflow.com/questions/1074081/hibernate-error-org-hibernate-nonuniqueobjectexception-a-different-object-with)? – c.s. Jul 19 '13 at 19:02
  • @c.s Only looked at the top answers. This probably has to do with some Cascading but I still don't understand why it occurs sporadically. – Sotirios Delimanolis Jul 19 '13 at 19:08
  • Could you post more details about the entities? The hbm file or annotated class would be useful. – Joe Jul 19 '13 at 20:11
  • @Joe Yup, posted the entities. – Sotirios Delimanolis Jul 19 '13 at 20:43
  • What's the underlying SQL implementation and is it in a cluster? I get the feeling this actually isn't a Cascading issue but is a flushing issue. The generator strategy probably shouldn't be increment but should be either table or native. – Joe Jul 22 '13 at 14:41
  • @Joe It's a MySQL database and there is no cluster. – Sotirios Delimanolis Jul 22 '13 at 14:44
  • I'd switch the generator strategy to native or identity and the issue should go away. – Joe Jul 22 '13 at 16:44
  • @Joe Do you have an explanation for that? – Sotirios Delimanolis Jul 22 '13 at 17:35
  • From the reference docs: increment "generates identifiers of type long, short or int that are unique only when no other process is inserting data into the same table." If Cascading was the real cause, the exception wouldn't be random and should be able to be re-created. Since the error is random, it would lead me to think you're running into two processes that just happen to collide. – Joe Jul 22 '13 at 20:39
  • There are possibly many Threads adding rows, but certainly not many processes. – Sotirios Delimanolis Jul 22 '13 at 20:42
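
For reference, the change Joe proposes in the comments above would look roughly like the sketch below. This is only an illustration of moving away from the increment generator, assuming a MySQL AUTO_INCREMENT column; it is not code from the original post.

import javax.persistence.*;

@Entity
@Table(name = "TRACE_EVENT")
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class TraceEvent {

    // Hypothetical change: let the database generate the key instead of the
    // "increment" generator, which is only unique when a single process
    // inserts into the table.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // "native" also resolves to identity on MySQL
    @Column(name = "trace_event_id")
    private Long traceEventId;

    // ... remaining fields and mappings unchanged from the post
}

The same change would apply to Trace's traceId.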

1 Answer

It's been some time since I've last worked with Hibernate, and the problem is of a difficult nature, so I will just describe various options:

  • Since your Trace object has a Cascade.ALL option, shouldn't it be enough to just add the TraceEvent to the set and call session.save(trace)? As it is now, my impression is that when the session closes it will try to save the Trace object. Alternatively, avoid modifying the set and just save the TraceEvent. You don't return the Trace to be used anywhere, so there is no point in modifying it; the next time it is used, it will probably be loaded from the database. All of the above behaviour might depend on your configuration, but it might be worth taking a look (the first sketch after this list illustrates the second option).

  • Do you use some sort of Hibernate cache? Could it be that the Trace object you load is not the most recent version?

  • How is the TraceEvent created? Could it be that this object is detached (loaded in a previous request and re-used now)? It seems rather unlikely, but please check.

  • This seems like a function that could be called from many places (i.e. to record a user's action). Is it possible that one of those other places is using the same session and possibly modifying it?

  • Assuming that it is called from many places, were you able to link the error in your logs to some specific business functionality (e.g. when the user clicks a particular link/button)?

  • If all else fails, why don't you try to add an EventListener or an Interceptor on the persist event the next time you update your production environment (I understand that you cannot replicate this error easily)? It would be a simple one that catches NonUniqueObjectException, prints additional log info, and then just re-throws (the second sketch after this list is one way to get the same information).
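
To make the "just save the TraceEvent" alternative from the first bullet concrete, here is a minimal sketch (an illustration, not the poster's code; the method names are taken from the question): set only the owning @ManyToOne side and persist the new event, leaving the Trace's collection untouched so nothing else gets cascaded or dirtied.

@Transactional(rollbackFor = NoActiveTraceException.class)
public void insertTraceEvent(TraceEvent traceEvent) throws NoActiveTraceException {
    Trace trace = traceDao.findActiveForNumber(traceEvent.getSourceParty());
    if (trace == null) {
        throw new NoActiveTraceException(traceEvent.getSourceParty());
    }
    // The @ManyToOne side owns the foreign key, so setting it is enough for
    // the new row to reference the Trace; the in-memory Set is not modified.
    traceEvent.setTrace(trace);
    traceDao.create(traceEvent);
}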
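
For the last bullet, a registered listener is one route; a simpler variant of the same idea, sketched below with an assumed SLF4J logger field, is to wrap the persist() call in the DAO, log some context when NonUniqueObjectException is thrown, and re-throw so behaviour is otherwise unchanged.

@Override
public Long create(TraceEvent traceEvent) {
    try {
        getCurrentSession().persist(traceEvent);
    } catch (org.hibernate.NonUniqueObjectException e) {
        // "logger" is assumed; record whatever helps correlate the failure,
        // e.g. the event, its id, and whether this exact instance is already
        // registered with the session.
        logger.error("NonUniqueObjectException persisting {} (id={}), session.contains(event)={}",
                traceEvent, traceEvent.getTraceEventId(),
                getCurrentSession().contains(traceEvent), e);
        throw e;
    }
    return traceEvent.getTraceEventId();
}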

c.s.
  • I'm going to try your first and last points. For the others: I don't use a Hibernate cache. The TraceEvent is created POJO style. It all happens within that `@Transactional` block that happens in a scheduled executor. I've also changed `Cascade.All` to `Cascade.Remove`. – Sotirios Delimanolis Jul 19 '13 at 20:47