When I retrieve data from the DB, the result set contains duplicates: exactly the same objects (the IDs are also the same). I'd like to know the reason and how to deal with it. As I mention in the title, the DB tables contain only unique rows.
2 Answers
I guess your kittens collection is mapped as a collection property that gets joined in from another table. If so, add this annotation above your collection field:
@Fetch(FetchMode.SUBSELECT)
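As a minimal sketch of where the annotation goes (the Cat/Kitten entity names and the mappedBy field are assumptions, not from the question):

```java
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Fetch;
import org.hibernate.annotations.FetchMode;

@Entity
public class Cat {
    @Id
    private Long id;

    // SUBSELECT loads the kittens for all cats already in the session
    // with one extra query, instead of joining them into the root query
    // (which is what duplicates the root rows).
    @OneToMany(mappedBy = "cat")  // assumed owning-side field name
    @Fetch(FetchMode.SUBSELECT)
    private Set<Kitten> kittens;
}
```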

This happens when you create joins on related collections. For example, if we follow the doc (17.4. Associations):
List cats = sess.createCriteria(Cat.class)
.add( Restrictions.like("name", "F%") )
.createCriteria("kittens")
.add( Restrictions.like("name", "F%") )
.list();
we join each cat with its kittens. If more than one kitten per cat has a name starting with F, we get a result like:
Cat 1 - Kitten 1
Cat 1 - Kitten 2
Cat 2 - Kitten 3
Cat 2 - Kitten 4
The ways to avoid this are:
- do not join the collection; use batch loading instead
- use a ResultTransformer (e.g. DISTINCT_ROOT_ENTITY)
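What the DISTINCT_ROOT_ENTITY transformer does is deduplicate the root entities in memory, preserving their order. A plain-Java sketch of that behavior (the list contents stand in for the joined result above; this is an illustration, not Hibernate's actual code):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DistinctRootDemo {
    // LinkedHashSet drops duplicates while keeping first-seen order,
    // which is the same contract as DISTINCT_ROOT_ENTITY applied to
    // the root entities of a join result.
    static List<String> dedupe(List<String> joinedRoots) {
        return List.copyOf(new LinkedHashSet<>(joinedRoots));
    }

    public static void main(String[] args) {
        // Each root entity appears once per matching kitten row.
        List<String> joined = Arrays.asList("Cat 1", "Cat 1", "Cat 2", "Cat 2");
        System.out.println(dedupe(joined)); // [Cat 1, Cat 2]
    }
}
```

Note that the deduplication happens after the rows come back from the database, so it does not fix paging: LIMIT still applies to the duplicated rows.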
Also check the documentation on batch fetching. A short quote:
Using batch fetching, Hibernate can load several uninitialized proxies if one proxy is accessed. Batch fetching is an optimization of the lazy select fetching strategy. There are two ways you can configure batch fetching: on the class level and the collection level.
Batch fetching for classes/entities is easier to understand. Consider the following example: at runtime you have 25 Cat instances loaded in a Session, and each Cat has a reference to its owner, a Person. The Person class is mapped with a proxy, lazy="true". If you now iterate through all cats and call getOwner() on each, Hibernate will, by default, execute 25 SELECT statements to retrieve the proxied owners. You can tune this behavior by specifying a batch-size in the mapping of Person:
<class name="Person" batch-size="10">...</class>
With a batch size, our collections are loaded lazily (not as part of the main, root query), but they do not trigger the 1 + N issue, because they are loaded in batches. I would vote for this solution. The root query can then easily be used for paging (no duplicates).
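The XML mapping quoted above has an annotation equivalent; a sketch, assuming annotation-based mappings (the entity shape is illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.BatchSize;

// Annotation equivalent of <class name="Person" batch-size="10">:
// when one Person proxy is initialized, up to 10 pending Person
// proxies in the session are fetched in the same SELECT.
@Entity
@BatchSize(size = 10)
public class Person {
    @Id
    private Long id;
}
```

The same @BatchSize annotation can be placed on a collection field to batch-load collections instead of entities.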