50

I am using Red Gate's ANTS memory profiler to debug a memory leak. It keeps warning me that:

Memory Fragmentation may be causing .NET to reserve too much free memory.

or

Memory Fragmentation is affecting the size of the largest object that can be allocated

Because I have OCD, this problem must be resolved.

What are some standard coding practices that help avoid memory fragmentation? Can you defragment it through some .NET methods? Would it even help?

Matt
  • It would help to have some information about what kind of app this is. Memory fragmentation would occur if you are leaving memory pinned (or using I/O functions that pin I/O buffers behind the scenes), making allocations from native allocators (such as the COM task allocator), or creating a lot of large objects, because the LOH doesn't get compacted. The .NET garbage collector already compacts the generational dynamic allocations, which has a side effect of defragmenting free space. If that's not happening, it's because something is preventing objects from being moved. – Ben Voigt Mar 09 '11 at 03:12
  • *Because I have OCD, this problem must be resolved.* + 1 for this comment alone - I actually like the question though – BrokenGlass Mar 09 '11 at 03:12
  • Uninstall tools that bitch at you but offer no help to diagnose the problem. Memory fragmentation is a fact of life; there's nothing you can do to prevent it that wouldn't be drastically impractical. The low-fragmentation heap allocator is already the default for Vista and up. It is only a problem if you allocate more than half of the available address space anyway; pigs don't fly. – Hans Passant Mar 09 '11 at 03:35
  • @Hans - The low fragmentation heap is not relevant for exclusively managed code though - the managed heap doesn't use the native heap at all. The rest of your comment is totally spot on though. – Stewart Mar 09 '11 at 10:43
  • @Stewart - most fragmentation would be caused by unmanaged code. There's lots of it, even in a pure managed program. The GC causes little fragmentation since it compacts the heap, something unmanaged code cannot do. – Hans Passant Mar 09 '11 at 15:22
  • @Hans Passant: Unfortunately, allocations greater than 85K go on the "large object heap", which Microsoft decided, in its infinite wisdom*, can't be compacted at all. (*) I'm aware that for performance reasons it may be best to avoid compacting the Large Object Heap unless absolutely necessary. Not only would compaction require moving large objects, but it would also have to deal with them being out of order, something the smaller heaps don't have to worry about. Nonetheless, even a slow "last ditch" compaction when other allocation attempts fail would be better than a crash. – supercat Mar 18 '11 at 16:02

3 Answers

11

You know, I somewhat doubt the memory profiler here. The memory management system in .NET actually tries to defragment the heap for you by moving around memory (that's why you need to pin memory for it to be shared with an external DLL).

Large memory allocations held over longer periods of time are more prone to fragmentation, while small, ephemeral (short-lived) memory requests are unlikely to cause fragmentation in .NET.

Here's also something worth thinking about. With the current .NET GC, memory allocated close together in time typically ends up close together in space, which is the opposite of fragmentation. In other words, you should allocate memory the way you intend to access it.

Is it managed code only, or does it contain things like P/Invoke, unmanaged memory (Marshal.AllocHGlobal), or pinned handles (GCHandle.Alloc(obj, GCHandleType.Pinned))?
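
If pinning is involved, one habit that helps is to keep pins as short-lived as possible, so the GC is free to move (and compact around) the buffer the rest of the time. A minimal sketch of that pattern; the buffer and names are purely illustrative, not taken from your app:

using System;
using System.Runtime.InteropServices;

class PinningSketch
{
    static void CallNativeCode()
    {
        byte[] buffer = new byte[4096];

        // Pinning fixes the buffer's address so the GC cannot move it;
        // long-lived pins scattered through the heap are a classic source
        // of fragmentation, so keep the pinned window as small as possible.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... hand 'address' to the external DLL here ...
        }
        finally
        {
            handle.Free(); // unpin immediately so the buffer can move again
        }
    }
}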

John Leidegren
  • The GC doesn't compact the large object heap, which is where objects > 85KB live. Once the LOH is fragmented, there's no way to defragment it. – Tim Robinson Jan 27 '12 at 13:32
  • As of .NET 4.5.1, there is a way to compact the LOH manually, though I would strongly recommend against it for the reason that it is a huge performance hit for your app. https://blogs.msdn.microsoft.com/mariohewardt/2013/06/26/no-more-memory-fragmentation-on-the-net-large-object-heap/ (again, I recommend against it) – Dave Black Aug 09 '17 at 18:19
10

The GC heap treats large object allocations differently. It doesn't compact them, but instead just combines adjacent free blocks (like a traditional unmanaged memory store).

More info here: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

So the best strategy with very large objects is to allocate them once, hold on to them, and reuse them.
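
As a rough illustration of that strategy (the buffer size and names below are hypothetical, not from any particular API), keeping one long-lived buffer and reusing it avoids peppering the LOH with short-lived allocations over the 85,000-byte threshold:

using System;

static class LargeBufferReuse
{
    // Allocated once; anything over ~85 KB lands on the LOH, so reuse it
    // instead of creating a fresh large array per operation.
    private static readonly byte[] SharedBuffer = new byte[1024 * 1024];

    public static void Process(int payloadSize)
    {
        if (payloadSize > SharedBuffer.Length)
            throw new ArgumentOutOfRangeException("payloadSize");

        // Reset only the region we are about to use, then work in place.
        Array.Clear(SharedBuffer, 0, payloadSize);
        // ... fill and consume SharedBuffer[0..payloadSize) ...
    }
}

On later versions of .NET, System.Buffers.ArrayPool<T> gives you a ready-made pool for the same purpose.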

Daniel Earwicker
  • Out of curiosity, I wonder why LOH object sizes aren't rounded up to the next multiple of 4096? It would seem like that would facilitate compaction in some OS contexts (simply move virtual page pointers rather than copying memory), and would also greatly reduce fragmentation. Since LOH objects are generally a minimum of 85K, overhead from rounding up to 4K blocks would be 5% or less. – supercat Mar 18 '11 at 16:04
  • @supercat, that comment is worth to be in its own question. If you know the answer by now, please let me know. – mbadawi23 Sep 24 '18 at 01:35
  • @mbadawi23: At least in .NET 2.0, the LOH would get used for some objects that aren't very big. For example, I think any `double[]` over 1,000 elements would get forced into the LOH. Allocating a `double[1024]` as three 4096-byte chunks would be rather wasteful. Of course, my real suspicion is that allocating a `double[1024]` wasn't really a good idea anyway. – supercat Sep 24 '18 at 01:52
9

The .NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during garbage collection.

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();

See more info in GCSettings.LargeObjectHeapCompactionMode
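
For reference, here is a minimal sketch of wrapping this in a helper (the class and method names are just illustrative). CompactOnce only applies to the next blocking generation-2 collection and then reverts to the default, which is why the explicit GC.Collect() call is what actually triggers the compaction:

using System;
using System.Runtime;

static class LohMaintenance
{
    public static void CompactLargeObjectHeap()
    {
        // Request LOH compaction for the next blocking gen-2 collection...
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // ...and force that collection now. This is expensive: the runtime
        // moves large objects, so only do it at a quiet point in the app.
        GC.Collect();
    }
}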

Andre Abrantes
  • I would strongly recommend against it because it is a huge performance hit for your app, for two reasons: 1. it is time-consuming, and 2. it clears any of the allocation-pattern data that the GC has collected over the lifetime of your app. While your app is running, the GC actually tunes itself by learning how your app allocates memory. As such, it becomes more efficient (to a certain point) the longer your app runs. When you execute GC.Collect() (or any overload of it), it clears all of the data the GC has learned - so it must start over. – Dave Black Aug 09 '17 at 18:20
  • @Dave Black, where did you find that info? MSDN doesn't contain info about the impact of LOH compaction on the allocation-pattern algorithm. – 23W Oct 18 '17 at 17:02