23

I've got these two classes:

MyItem Object:

@Entity
public class MyItem implements Serializable {

    @Id
    private Integer id;
    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Component defaultComponent;
    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Component masterComponent;

    //default constructor, getter, setter, equals and hashCode
}

Component Object:

@Entity
public class Component implements Serializable {

    @Id
    private String name;

    //again, default constructor, getter, setter, equals and hashCode
}

And I'm trying to persist them with the following code:

public class Test {

    public static void main(String[] args) {
        Component c1 = new Component();
        c1.setName("comp");
        Component c2 = new Component();
        c2.setName("comp");
        System.out.println(c1.equals(c2)); //TRUE

        MyItem item = new MyItem();
        item.setId(5);
        item.setDefaultComponent(c1);
        item.setMasterComponent(c2);

        ItemDAO itemDAO = new ItemDAO();
        itemDAO.merge(item);
    }
}

While this works fine with Hibernate 3.6, Hibernate 4.1.3 throws

Exception in thread "main" java.lang.IllegalStateException: An entity copy was already assigned to a different entity.
        at org.hibernate.event.internal.EventCache.put(EventCache.java:184)
        at org.hibernate.event.internal.DefaultMergeEventListener.entityIsDetached(DefaultMergeEventListener.java:285)
        at org.hibernate.event.internal.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:151)
        at org.hibernate.internal.SessionImpl.fireMerge(SessionImpl.java:914)
        at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:896)
        at org.hibernate.engine.spi.CascadingAction$6.cascade(CascadingAction.java:288)
        at org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:380)
        at org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:323)
        at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:208)
        at org.hibernate.engine.internal.Cascade.cascade(Cascade.java:165)
        at org.hibernate.event.internal.AbstractSaveEventListener.cascadeBeforeSave(AbstractSaveEventListener.java:423)
        at org.hibernate.event.internal.DefaultMergeEventListener.entityIsTransient(DefaultMergeEventListener.java:213)
        at org.hibernate.event.internal.DefaultMergeEventListener.entityIsDetached(DefaultMergeEventListener.java:282)
        at org.hibernate.event.internal.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:151)
        at org.hibernate.event.internal.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:76)
        at org.hibernate.internal.SessionImpl.fireMerge(SessionImpl.java:904)
        at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:888)
        at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:892)
        at org.hibernate.ejb.AbstractEntityManagerImpl.merge(AbstractEntityManagerImpl.java:874)
        at sandbox.h4bug.Test$GenericDAO.merge(Test.java:79)
        at sandbox.h4bug.Test.main(Test.java:25)

The database backend is H2 (but the same happens with HSQLDB or Derby). What am I doing wrong?

Clayton Louden

9 Answers

27

I had the same problem, and this is what I found:

The merge method traverses the graph of the object you want to store, and for each object in that graph it loads the corresponding entity from the database, so it ends up with a pair of (persistent entity, detached entity) for each object in the graph, where the detached entity is the one being stored and the persistent entity is the one loaded from the database. (In the method, as well as in the error message, the persistent entity is referred to as the 'copy'.) These pairs are then put into two maps: one with the persistent entity as key and the detached entity as value, and one with the detached entity as key and the persistent entity as value.

For each such pair of entities, it checks these maps to see whether the persistent entity maps to the same detached entity as before (i.e. whether it has already been visited), and vice versa. The problem occurs when you get a pair where a get with the persistent entity returns a value, but a get from the other map with the detached entity returns null, which means you have already linked the persistent entity to a detached entity with a different hashcode (basically the object identity, if you have not overridden the hashCode method).

TL;DR: you have multiple objects with different object identities/hashcodes but the same persistence identifier (and thus referencing the same persistent entity). This is apparently no longer allowed in newer versions of Hibernate 4 (4.1.3.Final and upwards, from what I could tell).
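
For the code in the question, a minimal way to satisfy that requirement is to reuse one instance per database identity. A sketch, assuming the MyItem, Component and ItemDAO classes from the question:

    // One Java object per database identity inside the merge graph.
    Component comp = new Component();
    comp.setName("comp");

    MyItem item = new MyItem();
    item.setId(5);
    item.setDefaultComponent(comp); // same instance...
    item.setMasterComponent(comp);  // ...referenced twice, so merge() only sees one copy

    ItemDAO itemDAO = new ItemDAO();
    itemDAO.merge(item);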

The error message is not very good, IMO; what it really should say is something like:

A persistent entity has already been assigned to a different detached entity

or

Multiple detached objects corresponding to the same persistent entity

Tobb
  • Problem still exists with version "4.1.10 Final". Reverting back to version "4.1.2 Final" works though. – Zaki May 29 '13 at 11:22
  • Ok, I can't really remember which versions I tried besides 4.1.10.Final and 4.1.1.Final (which is the one I ended up reverting to), but if 4.1.2.Final works I'll update my answer to reflect this. – Tobb May 29 '13 at 11:32
  • Great description of the error. Any thoughts on what we can do about it? – Ben Oct 03 '13 at 13:15
  • In my case the problem was that I had an object graph that was detached from the EntityManager, serialized, then deserialized and merged into the EntityManager again. But serialization/deserialization turned what used to be references to the same object into references to different objects. Not sure if you have the same scenario, but the solution could be to make sure that each persistent object (identified by database id) is the same object (the same object id). In my case the domain was too complex for such a solution, so downgrading the Hibernate version became the safest solution. – Tobb Oct 03 '13 at 17:28
  • If you are not serializing/deserializing, then the exception might be an indication that the code is not doing as it should. So find out which persistent object is being represented by multiple (java) objects, and why. If it is for an acceptable reason, and it can't be circumvented in an easy manner, use downgrading as a last resort. – Tobb Oct 03 '13 at 17:31
  • Is this still valid? I'm using Hibernate 4.1.9.Final and the two maps mentioned above are `IdentityHashMap`s, so hashCode/equals on the entity classes are not considered when inserting/retrieving from these maps. When `put(K,V)` is called, the hash is computed by calling `System.identityHashCode(obj)`, which bypasses any overridden `hashCode()` implementation, so effectively it doesn't matter how hashCode/equals has been implemented. – Paul Jan 27 '16 at 10:03
  • I'm not sure, but from what I understand of your description it sounds like it's even "more valid", meaning that you can't circumvent the error by adding your own equals/hashCode. Now there needs to be a 1-to-1 correspondence between object identities and database ids in the object graph for merge not to throw such an exception. – Tobb Jan 27 '16 at 10:51
5

Same here, check your equals() method. Most probably it is badly implemented.

Edit: I have verified that a merge operation will not work if you don't implement your Entity's equals() and hashCode() methods correctly.

You should follow these guidelines for implementing equals() and hashCode():

http://docs.jboss.org/hibernate/orm/4.1/manual/en-US/html/ch04.html#persistent-classes-equalshashcode

"It is recommended that you implement equals() and hashCode() using Business key equality. Business key equality means that the equals() method compares only the properties that form the business key. It is a key that would identify our instance in the real world (a natural candidate key)"

That means: You should NOT use your Id as part of your equals() implementation!
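
As a rough sketch of what business key equality looks like (a hypothetical Book entity, not one of the classes from the question; the surrogate id stays out of equals()/hashCode(), the natural key goes in):

    @Entity
    public class Book implements Serializable {

        @Id
        @GeneratedValue
        private Long id;        // surrogate key: NOT used in equals()/hashCode()

        private String isbn;    // business key: identifies the book in the real world

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Book)) return false;
            Book other = (Book) o;
            return isbn != null && isbn.equals(other.isbn);
        }

        @Override
        public int hashCode() {
            return isbn == null ? 0 : isbn.hashCode();
        }
    }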

4

Is your relationship between item and component unidirectional or bidirectional? If it's bidirectional, make sure you don't have CascadeType.MERGE going back up to Item.

Basically, the newer version of Hibernate keeps an entity map containing everything that needs to be merged as a result of the call to merge(). It merges each entry and then moves on to the next one, but it keeps the entries in the map, and it throws the error you quote above ("An entity copy was already assigned to a different entity") when it encounters an entity that has already been dealt with. We found in our app that once we located and removed these "upward" merges in the object graph, i.e. on the bidirectional links, the merge call worked again.
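
For example, if Component had a back reference to MyItem (a hypothetical bidirectional version of the question's mapping), the inverse side should not cascade MERGE:

    @Entity
    public class Component implements Serializable {

        @Id
        private String name;

        // inverse side: no CascadeType.MERGE here, so merging a Component
        // does not cascade back "upwards" into the MyItem graph
        @OneToMany(mappedBy = "defaultComponent")
        private Set<MyItem> items;
    }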

adam
2

Had the same exception (Hibernate 4.3.0.CR2) trying to save an object that had two copies of a child object. It got fixed by changing the entity from:

@OneToOne(cascade = CascadeType.MERGE)
private User reporter;
@OneToOne(cascade = CascadeType.MERGE)
private User assignedto;

to just:

@OneToOne
private User reporter;
@OneToOne
private User assignedto;

I don't know the reason though.

Supun Sameera
0

Try adding the @GeneratedValue annotation under @Id in the Component class; otherwise two different instances might get the same id and collide.

It seems that you are giving them the same ID:

    Component c1 = new Component();
    c1.setName("comp");
    Component c2 = new Component();
    c2.setName("comp");

That just might solve your problem.
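
That would mean switching Component over to a generated surrogate key, roughly like this (a sketch; note that the id type changes from the question's String name, so it is not a drop-in fix):

    @Entity
    public class Component implements Serializable {

        @Id
        @GeneratedValue
        private Long id;     // database-generated, so two instances can never collide on an explicit id

        private String name;

        // default constructor, getters, setters, equals and hashCode
    }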

Ido.Co
  • Unfortunately, both Ids are not database generated, but set explicitly. – Clayton Louden May 11 '12 at 12:45
  • Oops, I meant the Component class. You are giving them the same id? – Ido.Co May 11 '12 at 12:46
  • Yes, that's the idea behind it. Both Components get the same id and are equal (see the equals statement above). So cascade should take care of this, right? You could even try to use the same reference (e.g. c1) for both variables (defaultComponent and masterComponent), since they are equal anyway. – Clayton Louden May 11 '12 at 12:48
  • Why do you try to assign two different instances that represent the same DB entity to one class? It would have been better to use the same instance. I think you're kind of abusing the DB id field. – Ido.Co May 11 '12 at 12:50
  • @RichardPena, if you want these two instances to represent the same DB entity, why not just use one instance? – Ido.Co May 11 '12 at 12:56
0

If name is the Id, why are you creating two objects with the same id? You can use the c1 object in all the code.

If that's only an example and you create the c2 object in another part of the code, then you shouldn't create a new object but load it from the database:

c2 = itemDao.find("comp", Component.class); //or something like this AFTER the c1 has been persisted
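
With a plain EntityManager that could look roughly like this (a sketch; em is assumed to be an open EntityManager and the "comp" row is assumed to exist already):

    // Look the existing component up instead of instantiating a second copy.
    Component comp = em.find(Component.class, "comp");

    MyItem item = new MyItem();
    item.setId(5);
    item.setDefaultComponent(comp);
    item.setMasterComponent(comp); // same managed instance for both references

    em.merge(item);
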
Frank Orellana
0

According to the logic in EventCache, all entities in the object graph should be unique. So the best solution (or is it a workaround?) is to remove the cascade from MyItem to Component, and to merge Component separately if it's really needed; I would bet that in 95% of cases Component shouldn't be merged at all according to the business logic.
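
A sketch of that approach against the classes from the question (em is an assumed EntityManager; merge the Component on its own only if it really changed):

    @Entity
    public class MyItem implements Serializable {

        @Id
        private Integer id;

        @ManyToOne                  // cascade removed
        private Component defaultComponent;

        @ManyToOne                  // cascade removed
        private Component masterComponent;

        // default constructor, getters, setters, equals and hashCode
    }

and then:

    Component comp = em.merge(c1); // merge the component once, on its own
    item.setDefaultComponent(comp);
    item.setMasterComponent(comp);
    em.merge(item);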

On the other hand, I'm really interested to know the real reasoning behind that restriction.

Andrej Urvantsev
0

If you are using JBoss EAP 6, change it to JBoss 7.1.1; this is a bug in JBoss EAP 6: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.3/html/6.3.0_Release_Notes/ar01s07s03.html

Ekici
0

I had the same problem and just solved it. While the above answers may solve the problem, I disagree with a few of them, especially with altering the implemented equals() and hashCode() methods. However, I feel my answer reinforces @Tobb's and @Supun's answers.

On my one side (parent side) I had

 @OneToMany(mappedBy = "authorID", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
 private Collection books;

And on my many side (child side)

 @ManyToOne(cascade =CascadeType.ALL)
 private AuthorID authorID;

After reading the excellent top answer provided by @Tobb and a little bit of thinking, I realized the annotations didn't make sense. The way I understand it (in my case), I was merge()-ing the Author object and merge()-ing the Book object. But because the book collection is a component of the Author object, it was trying to save it twice. My solution was to change the cascade types to:

  @OneToMany(mappedBy = "authorID", cascade =CascadeType.PERSIST, fetch=FetchType.EAGER)
  private Collection bookCollection;

and

 @ManyToOne(cascade =CascadeType.MERGE)
 private AuthorID authorID;

To make a long story short: persist the parent object and merge the child object.

Hope this helps/makes sense.