7

Because REST services are invoked remotely, concurrent requests are constantly at risk of racing with each other. One of the everyday resources to race for is the session. To be practical, you need to be able to take a lock on the resource at the beginning of your process and release it whenever you are done with it.

Now my question is, does Spring Session have any feature to deal with race condition over the session entries?

Or is there any other library / framework in Java that does?

Mehran
  • 15,593
  • 27
  • 122
  • 221

2 Answers

3

If you're using Spring Controllers, then you can use

RequestMappingHandlerAdapter.setSynchronizeOnSession(boolean)

This will make every controller method synchronized when a session is present.

A single HttpSession.setAttribute call is thread safe. However, a getAttribute followed by a setAttribute (a read-modify-write) has to be made thread safe manually:

synchronized(session) {
    String value = (String) session.getAttribute("foo");
    session.setAttribute("foo", value + "bar");
}

The same can be done with Spring session-scoped beans:

synchronized(session) {
    //do something with the session bean
}
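To make the hazard concrete, here is a minimal, self-contained sketch (plain Java, no servlet container) that simulates a session as a shared map; the class name and the "counter" attribute are made up for the demo. Wrapping the read-modify-write in a synchronized block keeps concurrent increments correct; remove the block and the final count would usually come up short.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionRaceDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for an HttpSession: a shared attribute map.
        Map<String, Object> session = new ConcurrentHashMap<>();
        session.put("counter", 0);

        Runnable incrementTask = () -> {
            for (int i = 0; i < 10_000; i++) {
                // Read-modify-write made atomic by locking the session object.
                synchronized (session) {
                    int current = (int) session.get("counter");
                    session.put("counter", current + 1);
                }
            }
        };

        Thread t1 = new Thread(incrementTask);
        Thread t2 = new Thread(incrementTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // With the synchronized block every increment is preserved.
        System.out.println(session.get("counter"));
    }
}
```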

Edit

In case of multiple containers with plain Spring session beans you would have to use sticky sessions. That ensures that a given session's state is stored on one container, and that the same container is hit every time that session is requested. This has to be done on the load balancer, with the help of something like BigIP cookies. The rest works the same way: from a single session's point of view there exists a single container, so locking the session suffices.

If you would like to share sessions across instances, containers like Tomcat and Jetty have support for it.

These approaches use a back-end database or some other persistence mechanism to store state.

For the same purpose you can try Spring Session, which is trivial to configure with Redis. Since Redis executes commands on a single thread, each operation on an entry runs atomically.
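As a rough sketch of that configuration (the bean and class names are illustrative, and the host/port are assumptions; check the Spring Session reference for the exact setup of your version):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

@Configuration
@EnableRedisHttpSession // replaces the container's HttpSession with a Redis-backed one
public class SessionConfig {

    // Connection factory pointing at a local Redis; adjust host/port as needed.
    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory("localhost", 6379);
    }
}
```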

The above approaches are non-invasive. Both the database-based and the Redis-based approaches support transactions.

However, if you want more control over the distributed state and locking, you can try a distributed data grid like Hazelcast or GemFire.

I have personally worked with Hazelcast, and it does provide methods to lock entries in a map.

Edit 2

Though I believe that handling transactions with Spring Session and Redis should suffice, to be sure you would need distributed locking, and the lock object would have to be held in Redis itself. Since Redis is single threaded, a home-grown implementation would also work, using something like INCR.

The algorithm would go something like this:

//lock_num is the semaphore/lock key in Redis

while(true) {
    lock_count = INCR lock_num
    if(lock_count == 1) {
        break                  // we hold the lock
    }
    DECR lock_num              // undo our increment
    wait(wait_time_period)     // back off, then retry
}

//do processing in critical section

DECR lock_num                  // release the lock
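The same spin-lock idea can be sketched in plain Java, with an AtomicInteger standing in for the Redis counter (INCR/DECR become incrementAndGet/decrementAndGet). This only demonstrates the algorithm inside one JVM; for real distribution the counter has to live in Redis:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrLockDemo {
    // Stand-in for the Redis lock key.
    static final AtomicInteger lockNum = new AtomicInteger(0);
    static int sharedCounter = 0; // protected by the lock below

    static void lock() throws InterruptedException {
        while (true) {
            if (lockNum.incrementAndGet() == 1) {
                return; // acquired: we were the only incrementer
            }
            lockNum.decrementAndGet(); // undo our increment
            Thread.sleep(1);           // back off, then retry
        }
    }

    static void unlock() {
        lockNum.decrementAndGet(); // release the lock
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                try {
                    lock();
                    sharedCounter++; // critical section
                    unlock();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(sharedCounter);
    }
}
```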

However, thankfully, Spring already provides this kind of distributed lock implementation for Redis via RedisLockRegistry (in the spring-integration-redis module). More documentation on usage is here.
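A rough usage sketch (the registry key "session-locks", the class name, and the use of a session id as the lock key are all illustrative; RedisLockRegistry hands out java.util.concurrent.locks.Lock instances backed by Redis):

```java
import java.util.concurrent.locks.Lock;

import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.redis.util.RedisLockRegistry;

public class SessionLocking {

    private final RedisLockRegistry lockRegistry;

    public SessionLocking(RedisConnectionFactory connectionFactory) {
        // "session-locks" is the Redis key prefix under which locks are stored.
        this.lockRegistry = new RedisLockRegistry(connectionFactory, "session-locks");
    }

    public void updateSession(String sessionId, Runnable criticalSection) {
        Lock lock = lockRegistry.obtain(sessionId); // one lock per session id
        lock.lock();
        try {
            criticalSection.run(); // read, modify and write the session here
        } finally {
            lock.unlock();
        }
    }
}
```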

If you decide to use plain Jedis without Spring, then there is a distributed lock implementation for Jedis: Jedis Lock.

//from https://github.com/abelaska/jedis-lock
Jedis jedis = new Jedis("localhost");
JedisLock lock = new JedisLock(jedis, "lockname", 10000, 30000);
lock.acquire();
try {
  // do some stuff
}
finally {
  lock.release();
}

Both of these should work exactly like Hazelcast locking.

11thdimension
  • 10,333
  • 4
  • 33
  • 71
  • Your solution is valid in case there's only one instance of web container / running code. What if I decide to use a load balancer and multiple instances of my code? Do you have any suggestion for that? Also you are locking the whole session, what if I want to lock only one entry of the session? – Mehran Apr 02 '16 at 12:02
  • Updated my answer, take a look. – 11thdimension Apr 02 '16 at 19:36
  • Thank you for your time, but here are a couple of issues: 1/ Sticky sessions: I know it works but not a big fan as it's not a great solution for evenly distributing load. A true load distribution solution should scatter requests randomly over the servers. – Mehran Apr 04 '16 at 12:52
  • 2/ Spring Session+Redis: I'm using this solution right now but I think your suggestion to use transactions is not gonna work as they are implemented by grouping a bunch of commands and running them as an atomic one in Redis. And this is useless as I need to get a value out of Redis at the beginning of the request, and set it back later on which means these two can not be executed as one Redis command (I need to execute time consuming stuff in between). And one more problem with this solution is that the Redis connection is managed by Spring Session and the Redis connection is out of my reach. – Mehran Apr 04 '16 at 12:55
  • 3/ It seems Hazelcast could be the right choice for my problem but again it is not like locking a session entry. I mean it can be used as a secondary in-memory storage which I have to communicate with through interfaces other than Spring Session's API. Right now I'm using Redisson which I believe provides same kind of features as Hazelcast and I was hoping to find a solution compatible with Spring Session, so far it seems there's none. – Mehran Apr 04 '16 at 13:03
  • Added more information, take a look. It's not about whether we're a fan of something, it's about what works for a given problem. Sticky sessions have been used for a long time and I have seen them work in many commercial products. Other solutions are newer and more effective. Pick whichever suits your context best. – 11thdimension Apr 05 '16 at 01:27
3

As a previous answer stated, if you are using Spring Session and you are concerned about thread safety on concurrent access of a session, you should set:

RequestMappingHandlerAdapter.setSynchronizeOnSession(true);

One example can be found here: EnableSynchronizeOnSessionPostProcessor:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;

public class EnableSynchronizeOnSessionPostProcessor implements BeanPostProcessor {
    private static final Logger logger = LoggerFactory
        .getLogger(EnableSynchronizeOnSessionPostProcessor.class);

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // NO-OP
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter adapter = (RequestMappingHandlerAdapter) bean;
            logger.info("enable synchronizeOnSession => {}", adapter);
            adapter.setSynchronizeOnSession(true);
        }
        return bean;
    }
}

Sticky Sessions and Session Replication

With regards to a clustered application and Sessions, there is a very good post here on SO, that discusses this topic: Sticky Sessions and Session Replication

In my experience, you would want both sticky sessions and session replication. You use sticky sessions to eliminate concurrent session access across nodes, because a sticky session pins a session to a single node and each subsequent request for the same session is always directed to that node. This eliminates the cross-node session access concern.

Replicated sessions are helpful mainly in case a node goes down. By replicating sessions, when a node goes down, future requests for existing sessions are directed to another node that has a copy of the original session, making the failover transparent to the user.

There are many frameworks that support session replication. The one I use for large projects is the open-source Hazelcast.

In response to your comments made on @11thdimension post:

I think you are in a bit of a challenging area. Basically, you want to enforce all session operations to be atomic across nodes in a cluster. This leads me to lean towards a common session store across nodes, where access is synchronized (or something similar).

Multiple session store / replication frameworks surely support an external store concept, and I am sure Redis does. I am most familiar with Hazelcast and will use that as an example.

Hazelcast allows you to configure session persistence to use a common database. If you look at the Map Persistence section, it shows an example and a description of the options.

The description for the concept states:

Hazelcast allows you to load and store the distributed map entries from/to a persistent data store such as a relational database. To do this, you can use Hazelcast's MapStore and MapLoader interfaces.

Data store needs to be a centralized system that is accessible from all Hazelcast Nodes. Persistence to local file system is not supported.

Hazelcast supports read-through, write-through, and write-behind persistence modes which are explained in below subsections.

The interesting mode is write-through:

Write-Through

MapStore can be configured to be write-through by setting the write-delay-seconds property to 0. This means the entries will be put to the data store synchronously.

In this mode, when the map.put(key,value) call returns:

  • MapStore.store(key,value) is successfully called, so the entry is persisted.
  • In-Memory entry is updated.
  • In-Memory backup copies are successfully created on other JVMs (if backup-count is greater than 0).

The same behavior goes for a map.remove(key) call. The only difference is that MapStore.delete(key) is called when the entry will be deleted.

I think, using this concept, plus setting up your database tables for the store properly to lock entries on inserts/updates/deletes, you can accomplish what you want.
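As a hedged sketch of what "lock entries in the store" could look like, here is roughly what a MapStore-style store(key, value) might do with JDBC: lock the row first with SELECT ... FOR UPDATE, then upsert inside one transaction. The table and column names (session_store, session_key, session_value) are made up for the example, and a real implementation would come from a connection pool and implement Hazelcast's MapStore interface:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LockingSessionStore {

    // Upsert the session entry while holding a row lock, in one transaction.
    public void store(Connection conn, String key, byte[] value) throws SQLException {
        conn.setAutoCommit(false);
        try {
            // Row lock: concurrent writers for the same key block here.
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT session_key FROM session_store WHERE session_key = ? FOR UPDATE")) {
                lock.setString(1, key);
                try (ResultSet rs = lock.executeQuery()) {
                    if (rs.next()) {
                        try (PreparedStatement update = conn.prepareStatement(
                                "UPDATE session_store SET session_value = ? WHERE session_key = ?")) {
                            update.setBytes(1, value);
                            update.setString(2, key);
                            update.executeUpdate();
                        }
                    } else {
                        try (PreparedStatement insert = conn.prepareStatement(
                                "INSERT INTO session_store (session_key, session_value) VALUES (?, ?)")) {
                            insert.setString(1, key);
                            insert.setBytes(2, value);
                            insert.executeUpdate();
                        }
                    }
                }
            }
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```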

Good Luck!

pczeus
  • 7,709
  • 4
  • 36
  • 51
  • Not to repeat myself, could you please read the comments on the @11thdimension post!? They also apply to your solutions. So far I would say that the answer to my question is that "you can not". Thanks. – Mehran Apr 04 '16 at 13:17
  • I'm not sure if I get it! How does your `write-through` proposition fit into the picture? I think it provides an atomic write which is totally different from having a lock on the entry. In order to eliminate race condition on a session entry, these steps are needed: `1.Acquire lock` `2.Read entry` `3.Process` `4.Write entry back` `5.Unlock`. And if I understood your `write-through` solution, it only satisfies an atomic way to do the step #4. But concurrent request accessing the same session entry should block each other at the beginning which makes their access sequential (which is inevitable) – Mehran Apr 05 '16 at 00:52
  • You can set setSynchronizeOnSession(true); to handle concurrent access on a single node in the cluster. And the atomic write at the persistent store ensures that "first come, first served" is handled across nodes in the cluster. There is no benefit or point in blocking node 2 while node 1 is updating a session attribute, as the end result is the same. As soon as node 1 completes the update, node 2 will apply its update anyway. – pczeus Apr 05 '16 at 01:59
  • So, the last possibility would be that you want node 1 to complete its update and node 2 to see the change by doing a read, to make a decision about updating. That can be handled by read locking on the store and wrapping the entire session modification logic in a transaction. – pczeus Apr 05 '16 at 01:59
  • In summary, by synchronizing session access per node + leveraging a database persisted store with some transaction handling, you should be able to accomplish this. – pczeus Apr 05 '16 at 02:00
  • So you mean Hazelcast will keep the transaction open? Could you please translate the five steps in terms of Hazelcast API? Thanks. – Mehran Apr 05 '16 at 02:06