
I have a static dictionary that I want to use as an internal cache for an ASP.NET application. The number of reads will greatly outnumber the number of writes, and I'd like to make sure I do this in a thread-safe manner without unnecessarily hurting performance with extraneous locks.

I have two implementations - the first uses a simple lock object and the lock keyword, and the second uses a ReaderWriterLockSlim.

Standard locking

public class ConcurrentCache
{
    private static readonly object LocationsLock = new object();

    private static bool _locationsLoaded = false;

    private static readonly ConcurrentDictionary<long, Location> Locations = 
        new ConcurrentDictionary<long, Location>();

    public Location Get(long id)
    {
        EnsureLocationsDictionaryIsPopulated();
        return Locations[id];
    }

    private void EnsureLocationsDictionaryIsPopulated()
    {
        if (Locations.Count > 0 || _locationsLoaded) return;

        // Still locking, even though I'm using a ConcurrentDictionary,
        // so that all locations are loaded only once and I don't have
        // to worry about locking the reads.
        lock (LocationsLock)
        {
            if (Locations.Count > 0 || _locationsLoaded) return;
            PopulateLocationsDictionary();
            _locationsLoaded = true;
        }
    }

    // see below for other methods
}

ReaderWriterLockSlim locking

public class ReadWriteCache
{
    private static readonly ReaderWriterLockSlim LockSlim =
        new ReaderWriterLockSlim();

    private static bool _locationsLoaded = false;

    private static readonly Dictionary<long, Location> Locations =
        new Dictionary<long, Location>();

    public Location Get(long id)
    {
        EnsureLocationsDictionaryIsPopulated();
        return Locations[id];
    }

    private void EnsureLocationsDictionaryIsPopulated()
    {
        if (Locations.Count > 0 || _locationsLoaded) return;
        LockSlim.EnterWriteLock();
        try
        {
            if (Locations.Count > 0 || _locationsLoaded) return;
            PopulateLocationsDictionary();
            _locationsLoaded = true;
        }
        finally
        {
            LockSlim.ExitWriteLock();
        }
    }

    // see below for other methods
}

Both classes have the same two methods:

    private void PopulateLocationsDictionary()
    {
        var items = LoadAllLocationsFromExternalSource();
        if (items == null || items.Count == 0) return;

        for (int i = 0; i < items.Count; i++)
        {
            var location = items[i];
            Locations[location.Id] = location;
        }
    }

    /// <summary>
    /// This method actually calls an external API and takes
    /// at least 5 seconds to run.
    /// </summary>
    /// <returns>All locations from the external source.</returns>
    private List<Location> LoadAllLocationsFromExternalSource()
    {
        return new List<Location>
        {
            new Location
            {Id = 5, Value1 = "one", Value2 = "two", Value3 = "three"},
            new Location
            {Id = 10, Value1 = "I", Value2 = "II", Value3 = "III"},
            new Location
            {Id = 42, Value1 = "un", Value2 = "deux", Value3 = "trois"}
        };
    }

I've seen from this post (When is ReaderWriterLockSlim better than a simple lock?) that the ReaderWriterLockSlim is expected to outperform a standard lock when the access pattern involves mostly reads. Is that still the case in my two scenarios? How does the ReaderWriterLockSlim compare to a ConcurrentDictionary? Are there any other considerations I'm not taking into account?

Brian Oliver
  • Have you considered the existing `MemoryCache` or `Cache` classes? – Preston Guillot Jun 17 '14 at 05:47
  • Why don't you just populate the dictionary before starting to access it? In that way you can get rid of the lock. – jgauffin Jun 17 '14 at 07:18
  • @jgauffin - This is in an ASP.NET application, and the external API I'm calling requires that it be executed in the context of a request. With these approaches I was aiming for a "first access incurs the long load time" pattern. – Brian Oliver Jun 17 '14 at 11:51
  • @PrestonGuillot - I had considered the Cache classes, but I want to ensure these items are never evicted from the cache (I know, I can set that priority on the cache, so that's not as much of an issue) and also there will be some read patterns where I need to enumerate over all entries and get items where one of the non-key fields matches a given search predicate. – Brian Oliver Jun 17 '14 at 11:54

1 Answer

I would suggest using the simplest approach (in this case just a ConcurrentDictionary with no additional locks). The ConcurrentDictionary class was designed for exactly what you have in mind.

Then I would suggest exposing your cache to the outside world through the IDictionary interface.
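A minimal sketch of that combination, assuming a `Lazy<T>` handles the one-time expensive load (the class and member names here are illustrative, not from the question): the `Lazy<T>` guarantees the population runs exactly once, and every read afterwards goes straight to the `ConcurrentDictionary` with no explicit locks.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class Location
{
    public long Id { get; set; }
    public string Value1 { get; set; }
}

public class LocationCache
{
    // ExecutionAndPublication guarantees the factory runs exactly once,
    // even if several threads hit the cache on first access simultaneously.
    private static readonly Lazy<ConcurrentDictionary<long, Location>> Locations =
        new Lazy<ConcurrentDictionary<long, Location>>(
            Load, LazyThreadSafetyMode.ExecutionAndPublication);

    private static ConcurrentDictionary<long, Location> Load()
    {
        // Stand-in for the slow external API call from the question.
        var dict = new ConcurrentDictionary<long, Location>();
        dict[5] = new Location { Id = 5, Value1 = "one" };
        dict[10] = new Location { Id = 10, Value1 = "I" };
        return dict;
    }

    // Reads need no locking; ConcurrentDictionary handles concurrency.
    public Location Get(long id) => Locations.Value[id];
}
```

This also removes the need for the `_locationsLoaded` flag and the double-checked `Count` test, since `Lazy<T>` encapsulates that logic.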

If a performance problem arose in this area in the future (unlikely - bottlenecks are often not where you expect), you'd only have to change this one piece of code, and the rest of your application would not be affected.

You might be falling into the trap of premature optimization, which is a big productivity and maintainability killer.

If you really want to know which one is faster, set up a test application and profile the two cases under realistic loads. You'll get a more accurate answer for your specific case than we will be able to give you on Stack Overflow!
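A rough harness for that comparison might look like the sketch below (the delegate you pass in is a placeholder for each implementation's `Get`; for serious measurements a tool like BenchmarkDotNet is more trustworthy than hand-rolled timing):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class CacheBenchmark
{
    // Times `reads` lookups per worker across `threads` parallel workers,
    // approximating the read-heavy access pattern in the question.
    public static TimeSpan Measure(Func<long, object> get, int threads, int reads)
    {
        var sw = Stopwatch.StartNew();
        Parallel.For(0, threads, _ =>
        {
            for (int i = 0; i < reads; i++)
                get(5); // hot-path read; use a key that exists in both caches
        });
        sw.Stop();
        return sw.Elapsed;
    }
}
```

Run it once per implementation (warming each cache first so the 5-second load isn't counted), and compare the elapsed times at thread counts that match your expected ASP.NET concurrency.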

Baldrick