
I am trying to figure out how MemoryCache should be used in order to avoid out of memory exceptions. I come from an ASP.NET background, where the cache manages its own memory usage, so I expected MemoryCache to do the same. This does not appear to be the case, as illustrated by the test program below:

using System;
using System.IO;
using System.Linq;
using System.Runtime.Caching;

class Program
{
    static void Main(string[] args)
    {
        var cache = new MemoryCache("Cache");

        for (int i = 0; i < 100000; i++)
        {
            AddToCache(cache, i);
        }

        Console.ReadLine();
    }

    private static void AddToCache(MemoryCache cache, int i)
    {
        // Each iteration caches a fresh copy of the file under a unique key.
        var key = "File:" + i;
        var contents = File.ReadAllBytes("File.txt");
        var policy = new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromHours(12)
        };

        // Evict the entry if the underlying file changes on disk.
        policy.ChangeMonitors.Add(
            new HostFileChangeMonitor(
                new[] { Path.GetFullPath("File.txt") }.ToList()));

        cache.Add(key, contents, policy);
        Console.Clear();
        Console.Write(i);
    }
}

The above throws an out of memory exception after reaching approximately 2 GB of memory usage (Any CPU) or after consuming all of my machine's physical memory (x64, 16 GB).

If I remove the cache.Add call, the program throws no exception. If I instead include a call to cache.Trim(5) after every add, it releases some memory and keeps approximately 150 objects in the cache at any given time (according to cache.GetCount()).

Is calling cache.Trim my program's responsibility? If so, when should it be called (i.e. how can my program know that memory is getting full)? And how do you calculate the percentage argument?
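
For reference, the Trim experiment was just this change at the end of AddToCache (the 5 percent argument was an arbitrary guess on my part):

cache.Add(key, contents, policy);

// Trim evicts (at least) the given percentage of cache entries and
// returns the number of entries removed.
long trimmed = cache.Trim(5);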

Note: I am planning to use MemoryCache in a long-running Windows service, so proper memory management is critical.

1 Answer

MemoryCache has a background thread that periodically estimates how much memory the process is using and how many keys are in the cache. When it thinks you are getting close to the CacheMemoryLimit, it trims the cache. Each time this background thread runs, it checks how close you are to the limits, and it increases the polling frequency under memory pressure.
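
Both the limits and the polling cadence can be set when constructing the cache (these are the standard System.Runtime.Caching configuration keys; the values below are purely illustrative):

using System.Collections.Specialized;
using System.Runtime.Caching;

// Illustrative limits: trim when the cache exceeds 100 MB or 50% of
// physical memory, and poll for memory pressure every 30 seconds.
var config = new NameValueCollection
{
    { "cacheMemoryLimitMegabytes", "100" },
    { "physicalMemoryLimitPercentage", "50" },
    { "pollingInterval", "00:00:30" }
};

var cache = new MemoryCache("Cache", config);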

If you add items very quickly, the background thread doesn't get a chance to run, and you can run out of memory before the cache can trim and the GC can run (in an x64 process this can result in a massive heap size and multi-minute GC pauses). The trim process/memory estimation is also known to have bugs under some conditions.

If your program is prone to out of memory exceptions because it rapidly loads an excessive number of objects, a cache with a bounded size, like an LRU cache, is a much better strategy. An LRU cache typically bounds the number of items it holds and evicts the least recently used item when that bound is exceeded.
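
To make the policy concrete, here is a minimal, non-thread-safe sketch of an LRU cache built from a dictionary and a recency-ordered linked list (an illustration of the idea only, not how the library below is implemented):

using System;
using System.Collections.Generic;

// Minimal LRU sketch: a dictionary gives O(1) lookup, and a linked list
// keeps entries ordered from most to least recently used.
class LruCache<K, V>
{
    private readonly int capacity;
    private readonly Dictionary<K, LinkedListNode<(K Key, V Value)>> map
        = new Dictionary<K, LinkedListNode<(K Key, V Value)>>();
    private readonly LinkedList<(K Key, V Value)> order
        = new LinkedList<(K Key, V Value)>();

    public LruCache(int capacity)
    {
        this.capacity = capacity;
    }

    public V GetOrAdd(K key, Func<K, V> valueFactory)
    {
        if (map.TryGetValue(key, out var node))
        {
            // Hit: move the entry to the front to mark it most recently used.
            order.Remove(node);
            order.AddFirst(node);
            return node.Value.Value;
        }

        // Miss: evict the least recently used entry once at capacity.
        if (map.Count >= capacity)
        {
            var lru = order.Last;
            order.RemoveLast();
            map.Remove(lru.Value.Key);
        }

        var value = valueFactory(key);
        map[key] = order.AddFirst((key, value));
        return value;
    }
}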

I wrote a thread-safe implementation of TLRU (a time-aware least recently used policy) that you can easily use as a drop-in replacement for ConcurrentDictionary.

It's available on Github here: https://github.com/bitfaster/BitFaster.Caching

Install-Package BitFaster.Caching

Using it would look something like this for your program, and it will not run out of memory (depending on how big your files are):

using System;
using BitFaster.Caching.Lru;

class Program
{
    static void Main(string[] args)
    {
        // Bound the cache at 80 items; entries also expire after 5 minutes.
        int capacity = 80;
        TimeSpan timeToLive = TimeSpan.FromMinutes(5);
        var lru = new ConcurrentTLru<int, byte[]>(capacity, timeToLive);

        for (int i = 0; i < 100000; i++)
        {
            // The factory only runs on a cache miss; the LRU policy evicts
            // the least recently used entry instead of growing unbounded.
            var value = lru.GetOrAdd(i, k => System.IO.File.ReadAllBytes("File.txt"));
        }

        Console.ReadLine();
    }
}

If you really want to avoid running out of memory, you should also consider reading the files into a RecyclableMemoryStream, and using the Scoped class in BitFaster to make the cached values thread-safe and avoid races on dispose.
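
Roughly, that combination could look like this (a sketch assuming the Microsoft.IO.RecyclableMemoryStream package and BitFaster's Scoped<T>/CreateLifetime API; treat the details as illustrative rather than definitive):

using System;
using System.IO;
using BitFaster.Caching;
using BitFaster.Caching.Lru;
using Microsoft.IO;

class ScopedExample
{
    // Pooled buffers reduce large-object-heap churn when reading files.
    private static readonly RecyclableMemoryStreamManager manager =
        new RecyclableMemoryStreamManager();

    static void Main()
    {
        var lru = new ConcurrentTLru<int, Scoped<MemoryStream>>(
            80, TimeSpan.FromMinutes(5));

        var scoped = lru.GetOrAdd(1, k =>
        {
            var stream = manager.GetStream();
            using (var file = File.OpenRead("File.txt"))
            {
                file.CopyTo(stream);
            }
            return new Scoped<MemoryStream>(stream);
        });

        // The lifetime keeps the stream alive while it is in use, even if
        // another thread evicts the entry and disposes the scope.
        using (var lifetime = scoped.CreateLifetime())
        {
            Console.WriteLine(lifetime.Value.Length);
        }
    }
}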
