Consider the following code:
public static Singleton getInstance()
{
    if (instance == null)
    {
        synchronized(Singleton.class) {      //1
            if (instance == null)            //2
                instance = new Singleton();  //3
        }
    }
    return instance;
}
The theory behind double-checked locking is that the second check at //2 makes it impossible for two different Singleton objects to be created, as occurred in the earlier listing.
Consider the following sequence of events:

1. Thread 1 enters the getInstance() method.
2. Thread 1 enters the synchronized block at //1 because instance is null.
3. Thread 1 is preempted by thread 2.
4. Thread 2 enters the getInstance() method.
5. Thread 2 attempts to acquire the lock at //1 because instance is still null. However, because thread 1 holds the lock, thread 2 blocks at //1.
6. Thread 2 is preempted by thread 1.
7. Thread 1 executes and, because instance is still null at //2, creates a Singleton object and assigns its reference to instance.
8. Thread 1 exits the synchronized block and returns instance from the getInstance() method.
9. Thread 1 is preempted by thread 2.
10. Thread 2 acquires the lock at //1 and checks whether instance is null.
11. Because instance is non-null, a second Singleton object is not created, and the one created by thread 1 is returned.
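The sequence above can be exercised with a small test harness (a sketch; the surrounding class, the Race driver, and the constructions counter are illustrative assumptions, not part of the original listing). Racing several threads into getInstance() confirms that the lock plus the second check allow only one construction; note that a counter like this can only verify the construction count, not expose the memory-visibility hazard discussed next.

```java
import java.util.concurrent.atomic.AtomicInteger;

class Singleton {
    private static Singleton instance;
    // Counts how many times the constructor runs (illustrative assumption).
    static final AtomicInteger constructions = new AtomicInteger();

    private Singleton() { constructions.incrementAndGet(); }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {     //1
                if (instance == null)            //2: recheck under the lock
                    instance = new Singleton();  //3
            }
        }
        return instance;
    }
}

public class Race {
    public static void main(String[] args) throws InterruptedException {
        // Start several threads that all race into getInstance().
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(Singleton::getInstance);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Exactly one construction, regardless of interleaving.
        System.out.println("constructions = " + Singleton.constructions.get());
    }
}
```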
In theory, double-checked locking is perfect. Unfortunately, reality is entirely different: there is no guarantee that the idiom works correctly on either single-processor or multiprocessor machines.
The failure of double-checked locking is due not to implementation bugs in JVMs but to the current Java platform memory model. The memory model permits what are known as "out-of-order writes", a prime reason this idiom fails: the write that publishes the reference to instance at //3 may be reordered before the writes that initialize the Singleton object's fields, so another thread can observe a non-null instance that is only partially constructed.
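Because the failure stems from the memory model rather than from any bug fixable in the listing itself, the usual advice is to abandon the idiom. One widely used alternative, shown here as a sketch (it is not part of the original listings), is the initialization-on-demand holder idiom, which relies on the JVM's class-initialization guarantees instead of explicit locking:

```java
public class Singleton {
    private Singleton() {}

    // The JVM guarantees that a class is initialized at most once, on first
    // use, and that its initialization is visible to every thread. The nested
    // Holder class is therefore not loaded until getInstance() first touches it.
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;  // no lock, no double check needed
    }
}
```

Here getInstance() needs no synchronization at all; the class loader serializes the one-time initialization of Holder on the caller's behalf.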