3

We have a C# application that controls one of our devices and reacts to the signals this device gives us.

Basically, the application creates threads, handles operations (database access, etc.) and communicates with this device.

Over the life of the application, it creates objects and releases them, and so far we've been letting the Garbage Collector take care of our memory. I've read that it is highly recommended to let the GC do its job without interfering.

Now the problem we're facing is that our application's process keeps growing forever, in steps. Example:

[screenshot: process memory usage growing in steps]

The memory usage seems to grow in "waves": it climbs, then all of a sudden the application releases some memory, but each cycle appears to leave some memory leaked behind.

We're investigating the application with a memory profiler, but we would also like to understand in depth how the Garbage Collector works.

Do you know of any really in-depth documentation on the GC?

Edit:

Here is a screenshot that illustrates the behavior of the application:

You can clearly see the "wave" effect we're getting, repeating on a very regular pattern.

Subsidiary question:

I've seen that my Gen 2 heap is quite big and follows the same pattern as the total bytes used by my application. I guess that's perfectly normal, because most of our objects survive at least two garbage collections (for example, singleton classes, etc.)... What do you think?
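
For reference, here is a minimal way to correlate the "waves" with collections from code (just a sketch; the class name, interval and logging format are arbitrary placeholders):

using System;
using System.Threading;

static class GcMonitor
{
    private static Timer _timer; // keep a reference so the timer itself isn't collected

    public static void Start()
    {
        _timer = new Timer(_ =>
        {
            Console.WriteLine(
                "Gen0={0}  Gen1={1}  Gen2={2}  ManagedBytes={3}",
                GC.CollectionCount(0),
                GC.CollectionCount(1),
                GC.CollectionCount(2),
                GC.GetTotalMemory(false)); // false = don't force a collection first
        }, null, 0, 10000); // log every 10 seconds
    }
}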

Andy M
  • 5,945
  • 7
  • 51
  • 96
  • Let me say this. I've spent weeks tracking down issues related to memory consumption in .NET, and I've learned one thing: if you're building your objects properly and removing references properly, memory is collected immediately. It is a myth that it takes cycles for the memory to be collected if you're doing it correctly. So, please post a good bit of code so we can understand how you build the objects and what they look like as far as references to other objects (and vice versa) are concerned. – Mike Perrenoud Oct 01 '12 at 11:52
  • 1
    How did you measure memory usage? – spender Oct 01 '12 at 12:02
  • I assume that a 'Mo' is a megabyte? Anyway, the data you provide is no indication of a memory leak, not even of a problem. – H H Oct 01 '12 at 12:10
  • Mo definitely stands for "mega-octets", which means megabytes in French :) Sorry, I keep making that stupid mistake! – Andy M Oct 01 '12 at 12:31
  • @spender I checked this simply with the Windows Task Manager, looking at my process's memory size... – Andy M Oct 01 '12 at 12:31
  • @Mike I know that the problem is on our side, I'm not blaming the Garbage Collector at all... In fact, I would like to understand how it works in order to improve my code... In particular, the article on large objects was really surprising to me, and now I know I have to be careful about that! – Andy M Oct 01 '12 at 12:33
  • PerfView (http://blogs.msdn.com/b/vancem/archive/2011/12/28/publication-of-the-perfview-performance-analysis-tool.aspx and http://www.microsoft.com/en-za/download/details.aspx?id=28567) is a great tool for exploring .NET memory management issues. You'll see exactly what's using your memory. – Govert Oct 01 '12 at 13:03

3 Answers

1

The behavior you describe is typical of problems with objects created on the Large Object Heap (LOH). However, your memory consumption seems to return to a lower value later on, so double-check whether it is really a LOH issue.

You are obviously aware of that, but what is not quite obvious is that there is an exception to the size rule for objects on the LOH.

As described in the documentation, objects of 85,000 bytes or more end up on the LOH. However, for some reason (probably an 'optimization') arrays of doubles with 1000 or more elements also end up there (in 32-bit processes):

double[] smallArray = new double[999];  // ends up on the 'normal' small object heap (Gen 0 at first)
double[] bigArray = new double[1001];   // ends up on the LOH
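
A quick way to check where an array ended up is GC.GetGeneration, which reports generation 2 for freshly allocated LOH objects (just a sketch; the double-array rule above applies to 32-bit processes):

var small = new double[999];
var big = new double[1001];
Console.WriteLine(GC.GetGeneration(small)); // prints 0: small object heap
Console.WriteLine(GC.GetGeneration(big));   // prints 2: LOH objects are treated as generation 2 (32-bit process)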

These arrays can fragment the LOH, which then requires more and more memory, until you eventually get an OutOfMemoryException.

I was bitten by this: we had an app which received sensor readings as arrays of doubles, and this resulted in LOH fragmentation, since every array differed slightly in length (these were readings of real-time data at various frequencies, sampled by a non-real-time process). We solved the issue by implementing our own buffer pool, sketched below.
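
For illustration, here is a minimal sketch of such a pool (the class name, the ConcurrentBag backing store and the fixed MaxLength are assumptions, not our actual implementation). The idea is to rent a fixed-size array, copy the reading into it, and return it when done, so the same large arrays are reused instead of fragmenting the LOH:

using System.Collections.Concurrent;

class DoubleBufferPool
{
    private const int MaxLength = 4096; // assumed upper bound on a single reading
    private readonly ConcurrentBag<double[]> _buffers = new ConcurrentBag<double[]>();

    public double[] Rent()
    {
        double[] buffer;
        return _buffers.TryTake(out buffer) ? buffer : new double[MaxLength];
    }

    public void Return(double[] buffer)
    {
        _buffers.Add(buffer); // reused by the next Rent() instead of triggering a new LOH allocation
    }
}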

Zdeslav Vojkovic
  • 14,391
  • 32
  • 45
  • I only became aware of that large-object problem very recently... We could well have this problem, in fact! I'm at the beginning of my debugging journey, and I want to gather all the information before jumping to conclusions too quickly! – Andy M Oct 01 '12 at 12:35
  • I wonder why Microsoft did that? I know that accessing 8-byte-aligned doubles is faster than non-aligned doubles, but that's just as true when they're in an array of 100 as an array of 1,000. Further, because of cache lines, doubles are hardly unique in that regard; objects which fit within cache lines are more efficient than those which straddle them. Given that many programs allocate a lot of small objects, I would think it would have been efficient to have the pointers for "spare" chunks of 12, 20, 28, or 36 bytes, and when allocating an object of one of those sizes... – supercat Nov 02 '12 at 04:09
  • ...check if the appropriate-sized "spare" chunk is available. If so, use it; if not, allocate twice the normal amount of space and set aside a "spare" chunk. For any larger odd-size allocation, round up to the nearest 8 bytes. Since the smallest object that would require rounding would be 44 bytes, memory wastage would be below 10%. Every object larger than 36 bytes--not just double arrays of 1000 or more elements--would be 8-byte aligned. – supercat Nov 02 '12 at 04:11
1

I did some research for a class I was teaching a couple of years back. I don't think the references contain any information regarding the LOH, but I thought it was worthwhile to share them anyway (see below). Further, I suggest searching again for unreleased object references before blaming the garbage collector: simply implement a counter in the class finalizer to check that these large objects really are being dropped as you believe, as sketched below.
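
Here is a minimal sketch of that finalizer counter (the class and member names are made up): increment a static counter in the constructor, decrement it in the finalizer, and watch whether the count keeps growing.

using System.Threading;

class TrackedReading
{
    private static int _liveInstances;

    public TrackedReading()
    {
        Interlocked.Increment(ref _liveInstances);
    }

    ~TrackedReading()
    {
        Interlocked.Decrement(ref _liveInstances);
    }

    public static int LiveInstances
    {
        get { return _liveInstances; }
    }
}

If LiveInstances keeps climbing in a test even after calling GC.Collect() followed by GC.WaitForPendingFinalizers(), something is still holding references to those objects.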

A different solution to this problem is simply to never deallocate your large objects, but instead reuse them with a pooling strategy. In my hubris I have many times ended up blaming the GC prematurely for my application's memory requirements growing over time; however, more often than not this is a symptom of a faulty implementation.

GC references:

  • http://blogs.msdn.com/b/clyon/archive/2007/03/12/new-in-orcas-part-3-gc-latency-modes.aspx
  • http://msdn.microsoft.com/en-us/library/ee851764.aspx
  • http://blogs.msdn.com/b/ericlippert/archive/2010/09/30/the-truth-about-value-types.aspx
  • http://blogs.msdn.com/b/ericlippert/archive/2009/04/27/the-stack-is-an-implementation-detail.aspx

Eric Lippert's blog is especially interesting when it comes to understanding anything C#-related in detail!

Marius Brendmoe
  • 365
  • 1
  • 9
0

Here is an update with some of my investigations:

In our application, we're using a lot of threads to perform different tasks. Some of these threads have higher priority than others.

1) We were using the concurrent GC, and we tried switching back to the non-concurrent one.

We've seen a dramatic improvement:

  • The Garbage Collector is called much more often, and it seems that, when it's called more often, it releases our memory much more effectively.

I'll post a screenshot as soon as I have a good one to illustrate this.

We found a really good article on MSDN. We also found an interesting question on SO.

With the upcoming Framework 4.5, four possibilities will be available for GC configuration (see the config sketch after this list):

  1. Workstation - non-concurrent
  2. Workstation - concurrent
  3. Server - non-concurrent
  4. Server - concurrent
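
These combinations are selected in the application's configuration file via the gcServer and gcConcurrent elements. A minimal app.config sketch for the "Workstation - non-concurrent" mode we're currently running (flip the flags for the other combinations):

<configuration>
  <runtime>
    <gcServer enabled="false"/>     <!-- true: server GC -->
    <gcConcurrent enabled="false"/> <!-- true: concurrent (background) GC -->
  </runtime>
</configuration>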

We'll try switching to "Server - non-concurrent" and "Server - concurrent" to check whether they give us better performance.

I'll keep this thread updated with our findings.

Andy M
  • 5,945
  • 7
  • 51
  • 96