
I have an application that saves/caches many objects in a static field. When a lot of objects are cached over the lifetime of the application, the cache grows so large that an OutOfMemoryException is thrown.

Is there any class or mechanism that can tell me memory is running low and an OutOfMemoryException is about to be thrown, so that I know to free up memory by removing some of these cached objects? I'm looking for a sign of memory pressure in the application so that I can take precautionary action at runtime, before the exception is thrown.

iefpw

2 Answers


If you have so much data cached that you're running out of memory, that indicates something is seriously wrong with your caching approach. Even as a 32-bit process, you have a 2 GB address space.

You should consider using a limited cache where the user can set a preferred cache size and objects are automatically removed when that size is reached. You can implement various eviction strategies to pick which objects to remove from the cache: the oldest objects, the least frequently used objects, or a combination of the two.
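As a rough illustration (my sketch, not code from this answer), a size-limited cache with least-recently-used eviction might look something like this in C#; the class name and capacity handling are placeholder choices:

using System.Collections.Generic;

//Sketch: a cache capped at maxEntries; when full, the least recently
//used entry is evicted to make room for the new one.
public class LruCache<TKey, TValue>
{
    private readonly int maxEntries;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> recency =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int maxEntries) { this.maxEntries = maxEntries; }

    public void Add(TKey key, TValue value)
    {
        if (map.TryGetValue(key, out var existing))
            recency.Remove(existing);
        else if (map.Count >= maxEntries)
        {
            //Evict the least recently used entry (the list's tail)
            map.Remove(recency.Last.Value.Key);
            recency.RemoveLast();
        }
        map[key] = recency.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
    }

    public bool TryGet(TKey key, out TValue value)
    {
        if (map.TryGetValue(key, out var node))
        {
            //Move the hit to the front so it is evicted last
            recency.Remove(node);
            recency.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }
}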

You can also try mapping some of your cache to disk and only keeping the most frequently used objects in memory. When an object that's not in memory is needed, you can deserialize it from disk and put it back in memory; if an object then goes unused for a period of time, you can swap it back out to disk.
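A rough sketch of that swap-to-disk idea, assuming the cached objects are JSON-serializable and System.Text.Json is available (the cacheDir path and the key-to-file-name mapping are made up for illustration):

using System.IO;
using System.Text.Json;

//Sketch: write a cold cache entry to disk, then reload it on demand
static void SpillToDisk<T>(string cacheDir, string key, T value)
{
    File.WriteAllText(Path.Combine(cacheDir, key + ".json"),
                      JsonSerializer.Serialize(value));
}

static T LoadFromDisk<T>(string cacheDir, string key)
{
    return JsonSerializer.Deserialize<T>(
        File.ReadAllText(Path.Combine(cacheDir, key + ".json")));
}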

xxbbcc

I also have an application that uses as much RAM as it possibly can. You have a couple of options here, depending on your specific scenario (all of them have drawbacks).

One of the simplest is to preallocate a circular buffer of the classes (or bytes) you are caching, consuming all the RAM up front. This is usually not preferred, because consuming all the RAM on a box when you don't need to is just plain rude.
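A minimal sketch of that preallocation idea, with placeholder sizes; every buffer is claimed at startup and slots are then recycled round-robin, overwriting the oldest data:

//Sketch: preallocate a fixed ring of byte buffers up front so the
//process never allocates (or runs out of memory) later on.
const int slotCount = 1024;
const int slotSize = 1024 * 1024; //1 MB per slot; placeholder sizes

byte[][] ring = new byte[slotCount][];
for (int i = 0; i < slotCount; i++)
    ring[i] = new byte[slotSize]; //all the RAM is claimed here, up front

int next = 0;                  //index of the next slot to reuse
byte[] slot = ring[next];      //hand out slots round-robin
next = (next + 1) % slotCount; //wraps around, overwriting the oldest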

Another way to handle a large cache (and avoid throwing OutOfMemoryExceptions) is to check that the memory I need is actually available before each allocation. This has the drawback that memory allocations must be serialized; otherwise, in the time between the available-RAM check and the allocation, you may run out of RAM.

Here is a sample of the best way I have found to do this in .NET:

//Wait for enough memory (requires using System; and using System.IO; for Path below)
var temp = new System.Diagnostics.PerformanceCounter("Memory", "Available MBytes");
long freeMemory = Convert.ToInt64(temp.NextValue()) * 1024L * 1024L; //counter reports MB; convert to bytes
long neededMemory = (long)bytesToAllocate;
int attempts = 1200; //1200 * 100 ms = two minutes

while (freeMemory < neededMemory)
{
     //Signal that memory needs to be freed
     Console.WriteLine("Waiting for enough free memory. Free (bytes): " + freeMemory + " Needed (bytes): " + neededMemory);
     System.Threading.Thread.Sleep(100);
     freeMemory = Convert.ToInt64(temp.NextValue()) * 1024L * 1024L;
     --attempts;

     if (attempts == 0)
          throw new OutOfMemoryException("Could not get enough free memory. File: " + Path.GetFileName(wavFileURL)); //wavFileURL comes from the surrounding application
}

//Try and allocate the memory we need.

Furthermore, once I am inside the while loop, I signal that some memory needs to be freed (how you do that depends on your application). NOTE: I tried to simplify the code by using a Sleep statement, but ultimately you would want some type of non-polling wait if possible.

I am not sure about the specifics of your application, but if you are multithreaded, or run many different executables, then it may be better to serialize this memory check when allocating memory for the cache. If that is the case, I use a Semaphore to make sure that only one thread and/or process can allocate RAM at a time. Similar to this:

Semaphore namedSemaphore = new Semaphore(1, 1, "MemoryAllocationSemaphore"); //named semaphores are cross-process; requires using System.Threading;
if (bytesToAllocate > 2000000) //if we need less than 2 MB then don't bother locking
{
    if (!namedSemaphore.WaitOne((int)TimeSpan.FromMinutes(15).TotalMilliseconds))
    {
        throw new TimeoutException("Waited over 15 minutes for acquiring memory allocation semaphore");
    }
}

Then a little later on call this:

namedSemaphore.Release();

in a finally block to release the semaphore.
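Put together, the acquire/release pattern might look like the sketch below. The acquired flag is my addition, so the finally block never releases a semaphore it didn't actually take; everything else follows the snippets above.

bool acquired = false;
try
{
    if (bytesToAllocate > 2000000) //same 2 MB threshold as above
    {
        if (!namedSemaphore.WaitOne((int)TimeSpan.FromMinutes(15).TotalMilliseconds))
            throw new TimeoutException("Waited over 15 minutes for acquiring memory allocation semaphore");
        acquired = true;
    }

    //...check free memory and allocate for the cache here...
}
finally
{
    //Only release what we actually acquired
    if (acquired)
        namedSemaphore.Release();
}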

Another option is to use a multiple-reader, single-writer lock (such as the ReaderWriterLock class) so that multiple threads can pull data from the cache while only one writes. The advantage is that while you hold the write lock you can check memory pressure and clean up before adding more to the cache, yet still have multiple threads processing data. The drawback, of course, is that getting data into the cache bottlenecks at a single fixed point.
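As a minimal sketch of that pattern, using the newer ReaderWriterLockSlim (the cache dictionary and the eviction placeholder are my assumptions, not from the answer):

using System.Collections.Generic;
using System.Threading;

//Sketch: many threads read the cache concurrently; writes go through a
//single write lock, which is where memory pressure can be checked.
class PressureAwareCache
{
    private readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, byte[]> cache = new Dictionary<string, byte[]>();

    public byte[] Read(string key)
    {
        cacheLock.EnterReadLock();
        try { return cache.TryGetValue(key, out var v) ? v : null; }
        finally { cacheLock.ExitReadLock(); }
    }

    public void Write(string key, byte[] value)
    {
        cacheLock.EnterWriteLock();
        try
        {
            //Placeholder: check memory pressure and evict entries here
            //before inserting the new one
            cache[key] = value;
        }
        finally { cacheLock.ExitWriteLock(); }
    }
}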

Matt Johnson