I found some multithreaded code in the quite popular LazyCache library that uses an int[] field as a granular locking mechanism, with the intention of preventing concurrent invocation of a method with the same key as argument. I am highly skeptical about the correctness of this code, because no Interlocked or Volatile operation is used when exiting the protected region. Here is the important part of the code:
private readonly int[] keyLocks;

public virtual T GetOrAdd<T>(string key, Func<ICacheEntry, T> addItemFactory,
    MemoryCacheEntryOptions policy)
{
    /* Do stuff */
    object cacheItem;
    // acquire lock per key
    uint hash = (uint)key.GetHashCode() % (uint)keyLocks.Length;
    while (Interlocked.CompareExchange(ref keyLocks[hash], 1, 0) == 1) Thread.Yield();
    try
    {
        cacheItem = CacheProvider.GetOrCreate<object>(key, CacheFactory);
    }
    finally
    {
        keyLocks[hash] = 0;
    }
    /* Do more stuff */
}
The protected method call is CacheProvider.GetOrCreate<object>(key, CacheFactory). It is supposed to be called by one thread at a time for the same key. To enter the protected region there is a while loop that uses Interlocked.CompareExchange to change a value in the keyLocks array from 0 to 1. So far so good. The part that concerns me is the line that exits the protected region: keyLocks[hash] = 0;. Since there is no memory barrier there, my understanding is that the C# compiler and the .NET JIT are free to move instructions in either direction across this line. So an instruction inside the CacheProvider.GetOrCreate method could be moved after the keyLocks[hash] = 0; store.
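For comparison, here is the kind of exit I would have expected to see: a store with release semantics via Volatile.Write. This is my own minimal sketch of the pattern (hypothetical names, a shared counter standing in for the cache call — not LazyCache's actual code), where two tasks contend on the same lock slot:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class PerKeySpinLockDemo
{
    // Hypothetical demo fields, not LazyCache's actual code.
    static readonly int[] keyLocks = new int[8];
    static int counter; // stands in for the work done in the protected region

    public static int Run()
    {
        counter = 0;
        Task[] tasks = new Task[2];
        for (int t = 0; t < 2; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                for (int i = 0; i < 100_000; i++)
                {
                    // Acquire: spin until this slot's lock flips 0 -> 1.
                    // CompareExchange is a full barrier, so it also has
                    // acquire semantics on entry.
                    while (Interlocked.CompareExchange(ref keyLocks[0], 1, 0) == 1)
                        Thread.Yield();
                    try
                    {
                        counter++; // the "protected region"
                    }
                    finally
                    {
                        // Release: a volatile write has release semantics,
                        // so no load or store from the protected region can
                        // be reordered after this store.
                        Volatile.Write(ref keyLocks[0], 0);
                    }
                }
            });
        }
        Task.WaitAll(tasks);
        return counter;
    }

    static void Main() => Console.WriteLine(Run());
}
```

With both the acquire and the release fenced like this, the two tasks' increments cannot interleave, so Run() returns 200000. My worry is whether the library's plain keyLocks[hash] = 0; gives the same guarantee.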
My question is: according to the specs, does the code above really ensure that CacheProvider.GetOrCreate will not be called concurrently with the same key? Is the promise of mutual exclusion fulfilled by this code, or is the code just buggy?
Context: The relevant code was added to the library in this pull request: Optimize cache to lock per key.