17

From the documentation:

If we have a case where we need to insert 100,000 rows/objects:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

for ( int i=0; i<100000; i++ ) {
    Customer customer = new Customer(.....);
    session.save(customer);
    if ( i % 20 == 0 ) { //20, same as the JDBC batch size
        //flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}

tx.commit();
session.close();
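As the comment in the code notes, the flush interval is meant to match the JDBC batch size, which has to be configured separately. A minimal sketch of one way to do that (hibernate.jdbc.batch_size is a standard Hibernate property; the programmatic Configuration style shown here is just one option, and the value 20 simply mirrors the loop above):

    // Sketch: configure the JDBC batch size to match the flush interval above.
    Configuration configuration = new Configuration().configure();
    configuration.setProperty("hibernate.jdbc.batch_size", "20");
    SessionFactory sessionFactory = configuration.buildSessionFactory();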

Why should we use that approach? What benefit does it bring compared to the StatelessSession one:

    StatelessSession session = sessionFactory.openStatelessSession();
    Transaction tx = session.beginTransaction();

    for ( int i=0; i<100000; i++ ) {
      Customer customer = new Customer(.....);
      session.insert(customer);
    }    

    tx.commit();
    session.close();

I mean, this ("alternative") last example does not use memory, no need to synchronize, clean out of the cache, then this supposed to be best practice for cases like this? Why to use previous one then?

ses
  • Your two examples do completely different things - the first inserts a load of new objects, the second queries some existing objects up and then updates them. It would illustrate your question more clearly if they did the same things. – Tom Anderson Jan 05 '13 at 17:22
  • Hi there @ses! Had a similar question. Accidentally found your post :) – pubsy Oct 26 '16 at 08:26

3 Answers

11

From the documentation you link to:

In particular, a stateless session does not implement a first-level cache nor interact with any second-level or query cache. It does not implement transactional write-behind or automatic dirty checking. Operations performed using a stateless session never cascade to associated instances. Collections are ignored by a stateless session. Operations performed via a stateless session bypass Hibernate's event model and interceptors. Due to the lack of a first-level cache, Stateless sessions are vulnerable to data aliasing effects.

Those are some significant limitations!

If the objects you're creating, or the modifications you're making, are simple changes to scalar fields of individual objects, then I think a stateless session has no disadvantages compared to a batched normal session. However, as soon as you want to do something a bit more complex - manipulate a collection-valued property of an object, say, or another object which is cascaded from the first - then the stateless session is more a hindrance than a help (see the sketch below).
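For example, a minimal sketch - the Customer/Order entities and a cascading one-to-many mapping from Customer to Order are assumptions for illustration, not part of the original question:

    // Regular Session: saving the parent cascades to the children
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    Customer customer = new Customer(.....);
    customer.getOrders().add(new Order(customer));
    session.save(customer);      // the Order is saved too, via the cascade
    tx.commit();
    session.close();

    // StatelessSession: cascades and collections are ignored,
    // so every row has to be inserted by hand, in the right order
    StatelessSession stateless = sessionFactory.openStatelessSession();
    Transaction tx2 = stateless.beginTransaction();
    Customer customer2 = new Customer(.....);
    Order order = new Order(customer2);
    stateless.insert(customer2); // parent first
    stateless.insert(order);     // then each child explicitly
    tx2.commit();
    stateless.close();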

More generally, if the batched ordinary session gives performance that is good enough, then the stateless session is simply unnecessary complexity. It looks vaguely like the ordinary session, but it has a different API and different semantics, which is the sort of thing that invites bugs.

There can certainly be cases where it is the appropriate tool, but I think these are the exception rather than the rule.

Tom Anderson
  • But why do I need the first-level cache if I'm inserting 10,000 records in one second and then clear the cache immediately? Nobody would have a chance to use it in that short moment, I guess. I guess it needs a good example to be understood... In short, as I understand it: a stateless session is good for the simplest cases, when the objects to be saved are not complex. – ses Jan 05 '13 at 17:33
  • Also, I've found some explanation which proves your point: http://javainnovations.blogspot.ca/2008/07/batch-insertion-in-hibernate.html – ses Jan 05 '13 at 17:39
1

A StatelessSession has a performance advantage over Session because it skips the flush and transactional write-behind work that a regular Session performs at commit time. However, it is important to note that the service/DAO should NOT try to perform in-session data manipulation on either the parent or any child object; it will throw an exception. Also, make sure to close the session explicitly, otherwise you will end up with leaked connections.

To gain more performance with a StatelessSession, if you are using Spring-driven transactions, mark the Spring transaction as read-only and set the propagation to NEVER.

But again, do not try this where you have to manipulate the object model.

@Transactional(value="someTxnManager", readOnly=true, propagation=Propagation.NEVER)
    public List<T> get(...) {

        return daoSupport.get(...);
    }

in daoSupport

StatelessSession session = sessionFactory.openStatelessSession();
try {
    // do all operations here
}
...
finally {
    session.close();
}
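A slightly fuller sketch of what that DAO method could look like (the Customer entity, the class name and the query are illustrative assumptions, not code from the answer above):

    public class CustomerDaoSupport {

        private final SessionFactory sessionFactory;

        public CustomerDaoSupport(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // Read-only bulk fetch through a StatelessSession: no first-level
        // cache, no dirty checking; the session is always closed in finally.
        public List<Customer> getAll() {
            StatelessSession session = sessionFactory.openStatelessSession();
            try {
                @SuppressWarnings("unchecked")
                List<Customer> customers =
                        session.createQuery("from Customer").list();
                return customers;
            } finally {
                session.close(); // avoid leaked connections
            }
        }
    }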
Anky
  • 56
  • 5
  • It skips commit and flush? Are you sure? – Jess Jan 12 '16 at 17:51
  • Spring's @Transactional is based on AOP; in your case session.close() is called before the transaction is committed, which will result in an exception – borino Jan 20 '18 at 09:00
-10

StatelessSession does not support batch processing. I have seen this in the documentation, if I am not wrong:

Features and behaviors not provided by StatelessSession:
• a first-level cache
• interaction with any second-level or query cache
• transactional write-behind or automatic dirty checking

and batch processing happens using caches. Forgive me if I am wrong.

MyStack
  • Your answer is incorrect - StatelessSessions are the preferred choice for batch processing using NHibernate. Have a look at http://ayende.com/blog/4137/nhibernate-perf-tricks for more information – Rich O'Kelly Sep 23 '13 at 12:07