
I am involved in development of Xlet using Java 1.4 API.

The docs say the Xlet interface methods (the xlet life-cycle methods) are called on a special thread (not the EDT thread). I checked by logging, and this is true. It surprised me a bit, because it differs from the BB/Android frameworks, where life-cycle methods are called on the EDT, but it's OK so far.

In the project code I see the app extensively uses Display.getInstance().callSerially(Runnable task) calls (this is an LWUIT way of running a Runnable on the EDT thread).

So basically some pieces of code inside the Xlet implementation class do create/update/read operations on the xlet's internal state objects from the EDT thread, while other pieces do so from the life-cycle thread, without any synchronization (and the state variables are not declared volatile). Something like this:

class MyXlet implements Xlet {

    Map state = new HashMap();

    public void initXlet(XletContext context) throws XletStateChangeException {
        state.put("foo", "bar"); // does not run on the EDT thread

        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                // runs on the EDT thread
                Object foo = state.get("foo");
                // branch logic depending on the retrieved foo
            }
        });
    }

    // ...
}

My question is: does this create the potential for rare concurrency issues? Should access to the state be synchronized explicitly (or, at a minimum, should the state be declared volatile)?

My guess is it depends on whether the code runs on a multi-core CPU, because I'm aware that on a multi-core CPU, if two threads each run on their own core, variables may be cached so that each thread has its own version of the state unless explicitly synchronized.

I would like to get a trustworthy response to these concerns.
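For concreteness, by "synchronized explicitly" I mean something like the following sketch (raw types, since this is a Java 1.4 runtime; `SharedState` and its method names are just illustrative, not from the real code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch only: the shared state wrapped in a synchronized map, so that
// writes from the life-cycle thread become visible to the EDT and vice versa.
class SharedState {
    // Collections.synchronizedMap has existed since Java 1.2,
    // so it is available on a 1.4 runtime.
    private final Map state = Collections.synchronizedMap(new HashMap());

    void putFromLifecycleThread(Object key, Object value) {
        state.put(key, value); // each individual op is atomic and visible
    }

    Object getFromEdt(Object key) {
        return state.get(key);
    }

    // Compound operations still need an explicit block on the same lock:
    Object putIfAbsent(Object key, Object value) {
        synchronized (state) {
            Object existing = state.get(key);
            if (existing == null) {
                state.put(key, value);
                return value;
            }
            return existing;
        }
    }
}
```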

Vit Khudenko

1 Answer


Yes, in the scenario you describe, access to the shared state must be made thread-safe.

There are two problems you need to be aware of:

The first issue, visibility (which you've already mentioned), can still occur on a uniprocessor. The problem is that the JIT compiler is allowed to cache variables in registers, and on a context switch the OS will most likely dump the contents of the registers to a thread context so that the thread can be resumed later. However, this is not the same as writing the contents of the registers back to the fields of an object, so after a context switch we cannot assume that the fields of an object are up to date.

For example, take the following code:

class Example {
    private int i; // instance field, not declared volatile

    public void doSomething() {
        for (i = 0; i < 1000000; i++) {
            doSomeOperation(i);
        }
    }
}

Since the loop variable (an instance field) i is not declared as volatile, the JIT is allowed to optimise the loop variable i using a CPU register. If this happens, then the JIT will not be required to write the value of the register back to the instance variable i until after the loop has completed.

So, let's say a thread is executing the above loop and then gets pre-empted. The newly scheduled thread won't be able to see the latest value of i, because the latest value of i is in a register, and that register was saved to a thread-local execution context. At a minimum, the instance field i would need to be declared volatile to force each update of i to be made visible to other threads.
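To make the visibility requirement concrete, here is a minimal sketch (the class and method names are mine, not from the Xlet code) of the canonical stop-flag pattern. Without `volatile`, the worker thread is allowed to keep reading a register-cached copy of the flag and might never terminate; with `volatile`, every read goes back to the field:

```java
// Sketch: a flag shared between two threads. Declaring it volatile forces
// each write to be published, so the worker reliably observes the update.
class StoppableWorker implements Runnable {
    private volatile boolean running = true;
    private int iterations; // written only by the worker thread

    public void run() {
        while (running) {   // re-reads the volatile field on every iteration
            iterations++;
        }
    }

    public void stop() {
        running = false;    // guaranteed to become visible to the worker
    }
}

class VolatileDemo {
    public static void main(String[] args) throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(50);   // let the loop spin for a moment
        worker.stop();
        t.join(1000);       // terminates promptly thanks to volatile
        System.out.println("worker alive: " + t.isAlive());
    }
}
```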

The second issue is consistent object state. Take the HashMap in your code as an example: internally it is composed of several non-final member variables, such as size, table, threshold and modCount, where table is an array of Entry objects forming linked lists. When an element is put into or removed from the map, two or more of these state variables need to be updated atomically for the state to remain consistent. For HashMap, this has to be done within a synchronized block or similar for it to be atomic.

For the second issue, you would still experience problems when running on a uniprocessor. This is because the OS or JVM could pre-emptively switch threads while the current thread is partway through executing the put or remove method, and then switch to another thread that tries to perform some other operation on the same HashMap.

Imagine what would happen if your EDT thread was in the middle of calling the 'get' method when a pre-emptive thread switch occurs and a callback tries to insert another entry into the map, but this time the map exceeds the load factor, causing the map to be resized and all the entries to be re-hashed and re-inserted.
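On a Java 1.4 runtime there is no `java.util.concurrent`, so guarding the map means `Collections.synchronizedMap` or manual `synchronized` blocks. A sketch (the class name is illustrative) showing that the wrapper keeps concurrent puts consistent:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch: two threads write to the same map. With the synchronized
// wrapper, each put() runs atomically, so no entries are lost even if
// the scheduler pre-empts a thread mid-update.
class SyncMapDemo {
    static final Map map = Collections.synchronizedMap(new HashMap());

    static Thread writer(final String prefix, final int count) {
        return new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < count; i++) {
                    map.put(prefix + i, Integer.toString(i));
                }
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = writer("a", 1000);
        Thread b = writer("b", 1000);
        a.start();
        b.start();
        a.join();
        b.join();
        // 2000 distinct keys: all writes from both threads survived
        System.out.println(map.size());
    }
}
```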

Nam San
  • In my case all the data is guaranteed to be put into that `HashMap` BEFORE it is read on the EDT (by design of the app initialization). So, if I get you correctly, the only issue is visibility, which can only happen on a multi-core CPU. Do I get you right? – Vit Khudenko Nov 19 '12 at 09:23
  • Nam San, please check my comment above (I forgot to direct it to you using `@user` notation). – Vit Khudenko Nov 20 '12 at 18:58
  • @Arhimed, Note that the call to `Display.getInstance().callSerially()` will return immediately. If your `initXlet()` method does nothing else, then your life cycle thread will exit and your Xlet will be assumed to be in the 'Paused' state. At this point your Xlet's other life cycle callbacks, such as `startXlet()`, could be called. If your other life cycle callback methods modify the `state` `HashMap`, then you could potentially hit the thread safety issues that I described above. Hence, whether your Xlet is thread safe or not will depend on what it does in the other life cycle methods. – Nam San Nov 22 '12 at 17:30
  • Yes, I understand that `callSerially()` returns immediately, and that if the state is changed in `startXlet()`, then it is unsafe. But let's suppose the code is as simple as described in my post. In this case, if I get you correctly, the only issue is visibility, which can only happen on a multi-core CPU. Do I get you right? – Vit Khudenko Nov 23 '12 at 09:11
  • @Arhimed, Since you've asked the visibility question so persistently, I've been considering this issue more carefully. My conclusion is that even on a single-CPU system, it is still possible to run into visibility issues. The reason is that the JIT compiler is allowed to optimise operations using registers, and on context switches to other threads, the state of the registers will get saved, but this does not necessarily flush the contents of the registers back to the fields of an object. I'll modify my answer with an example; comments are too short to describe the problem fully. – Nam San Nov 23 '12 at 16:08
  • Thanks for your time! However I am still in doubt.. Looks like the best way to find the answer is to study Java Memory Model specs. :) – Vit Khudenko Nov 23 '12 at 16:46
  • @Arhimed, Which part are you still concerned about? I'm guessing you are concerned about visibility and not atomicity. If it is the visibility on uniprocessor aspect that concerns you then I think reading about compiler optimizations and pre-emptive schedulers will be more beneficial than reading about the JMM. The JMM only describes the minimum acceptable behavior that a JVM must implement and developers can assume. It specifically does not describe implementation detail so that JVM implementations are free to implement the JMM in the way that they see is most appropriate. – Nam San Nov 23 '12 at 17:19