
I have an application used for image processing, and I typically find myself allocating arrays of around 4000x4000 ushorts, as well as the occasional float array and the like. Currently, the .NET framework tends to crash in this app seemingly at random, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected.

Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code, and made sure that any memory I allocate I delete, but still I get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call-- is that possible? If so, can I turn that functionality off?

EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, with no GC.Collect calls in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below, as well as the requirements listed in this SO question.

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        int theRows = 4000, currRows;
        int theColumns = 4000, currCols;
        int theMaxChange = 30;
        int i;
        List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
        byte[] displayBuffer = null;//the buffer used as a bitmap source
        BitmapSource theSource = null;
        for (i = 0; i < theMaxChange; i++) {
            currRows = theRows - i;
            currCols = theColumns - i;
            theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create(currCols, currRows,
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
        //should get here.  If not, then theMaxChange is too large.
        //Now, go back up the undo stack.
        for (i = theMaxChange - 1; i >= 0; i--) {
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to undo change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}

Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. For me, this tends to happen around x = 50 or so:

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        for (int x = 0; x < 1000; x++){
            int theRows = 4000, currRows;
            int theColumns = 4000, currCols;
            int theMaxChange = 30;
            int i;
            List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
            byte[] displayBuffer = null;//the buffer used as a bitmap source
            BitmapSource theSource = null;
            for (i = 0; i < theMaxChange; i++) {
                currRows = theRows - i;
                currCols = theColumns - i;
                theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create(currCols, currRows,
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            }
            //should get here.  If not, then theMaxChange is too large.
            //Now, go back up the undo stack.
            for (i = theMaxChange - 1; i >= 0; i--) {
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes
                GC.Collect();
            }
            System.Console.WriteLine("Got to changelist " + x.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}

If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. That's a pretty simple routine there.

Unfortunately, it looks like @Kevin's answer is right-- this is a bug in .NET and how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that frequently uses so-called 'large' allocations would become unstable within a matter of days to weeks when using .NET.

EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and the LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, and is promoted from gen0 to gen1 to gen2 as it survives collections. Objects larger than 85k get placed on the LOH. The LOH is never compacted, so allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 helps the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
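
If you want to see the 85k threshold for yourself, here is a minimal sketch (separate from my app; GC.GetGeneration reports LOH-resident objects as generation 2):

using System;

class LohThresholdDemo {
    static void Main() {
        byte[] small = new byte[80 * 1024];         // 80 KB: below the ~85,000-byte threshold, starts in gen 0
        byte[] large = new byte[85 * 1024];         // ~87 KB: goes straight to the Large Object Heap
        ushort[] image = new ushort[4000 * 4000];   // 32 MB: the kind of allocation my app makes

        System.Console.WriteLine(GC.GetGeneration(small));  // typically 0
        System.Console.WriteLine(GC.GetGeneration(large));  // 2 -- LOH objects are reported as gen 2
        System.Console.WriteLine(GC.GetGeneration(image));  // 2
    }
}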

mmr
  • Could you perhaps create a new data structure that didn't allocate such massive contiguous data structures? I realize this will add some overhead. – Lasse V. Karlsen May 18 '10 at 20:34
  • It may well be that memory is being held because there are live references to it, in which case GC.Collect does nothing. – Steven Sudit May 18 '10 at 20:36
  • 32mb is not a massive allocation. If that's considered massive by .NET, it's entirely possible that I'm using the wrong platform. But C++ has handled it fine in previous apps that I've used; I just want the flexibility of C# and .NET for coding speed. – mmr May 18 '10 at 20:38
  • How many of these `ushort[,]` do you have loaded in memory at a time? I was able to load 46 into memory before my 32bit app threw an `OutOfMemoryException`. – Matthew Whited May 18 '10 at 20:51
  • @Lasse V. Karlsen-- according to the links @Kevin posted, the border between 'small' and 'large' in .NET is 85k. Allocating a 32 MB image in 85k chunks sounds like a nightmare to me. – mmr May 18 '10 at 20:51
  • @Matthew Whited-- it varies from runs. The application does other things than just load and delete images, including allocating objects to describe the operations that will be performed and maintaining an undo stack. It's not just the allocation, it's the allocation and deallocation. Check @Kevin's links; the article has code posted that looks exactly like the behavior I'm seeing. – mmr May 18 '10 at 20:53
  • Are you keeping the undo stack in memory on do you have a fall back to disk? – Matthew Whited May 18 '10 at 21:07
  • @Matthew Whited-- it's in memory now. – mmr May 18 '10 at 21:09
  • @mmr That might be, but you're probably going to have to choose, a working application or not. Not that what I suggested is the only way (that is, I don't know of any other, but that isn't to say there is none), but there are known problems with the large object heap and fragmentation, exactly like you're experiencing, and the general solution is to avoid fragmenting the heap, i.e. not using it. If you allocated an array of 4000 arrays each containing 4000 elements, each array would be less than the LOH size, right? – Lasse V. Karlsen May 18 '10 at 21:46
  • @Lasse V. Karlsen-- absolutely true. I'm not sure if the actual loading of the image itself from disk will cause a problem, since that loading is done into a large array. Not only that, the native code also takes large, contiguous arrays. Basically, changing the data format that drastically means a full rewrite of the app. I'm really hoping to avoid it; if I can't and I have to do a rewrite, I'll just not use .NET. This issue, coupled with the inability to use 16 bit images, means that .NET is not robust enough for (my interpretation of) medical imaging. – mmr May 18 '10 at 21:51
  • You could always switch over to F# for relevant libraries... when it was introduced by MVPs to me at a recent symposium, it was presented in the light of F# being designed from the ground up by Microsoft to be aggressive at garbage-collection. – Hardryv May 18 '10 at 23:05
  • @Hardryv-- forgive my ignorance, but isn't F# part of the CLR? If the CLR's GC is not handling objects greater than 85k in a way that I'm expecting, then how would moving to another CLR language solve that? – mmr May 19 '10 at 01:25
  • What's the purpose of the Sleep() calls in the first example? – dthorpe May 19 '10 at 17:28
  • @dthorpe-- when I was coding it, I wanted to watch as the allocations and deallocations happened in the output window of the debugger. They aren't necessary, and removing them should not affect the behavior at all. But there is usually a time lag of at least several seconds, if not minutes, between someone switching to different images, so they shouldn't cause a problem either. – mmr May 19 '10 at 17:35
  • @mmr -- forgive plz- I'd not read through but merely responded to "How do I get C# to garbage collect aggressively?" -- having digested much more of the article now my guess (and it's all I can offer) is that there's still a memory leak in the unmanaged native code -- likely there's no easy way to verify that other than to re-code the pertinent functions with managed source and see if you get different results when putting it through paces (all of which may not even be feasible) -- we had a similar issue at Northrop once in the way back and our unmanaged C++ lib was indeed the culprit. – Hardryv May 19 '10 at 18:53
  • @Hardryv-- I've just posted source that causes the bug without using any calls to native code. It looks like the native calls were a red herring here, one which I regret because it seems to have thrown many people off (myself included). – mmr May 19 '10 at 18:57
  • Why in the world are you storing entire 32mb images in the undo stack!? **That's** your real issue – BlueRaja - Danny Pflughoeft May 24 '10 at 17:49
  • @BlueRaja-- it doesn't matter if they're on the undo stack or not. The fact of the matter is, declaring objects larger than 85k causes the LOH to fragment. That fragmentation means that such a program will become unstable because of memory problems if left to run long enough. My application sees it quickly because I use such large allocations; if you follow the links in @Kevin's response, you'll see that plenty of other people hit this ceiling using more modest memory amounts. – mmr May 24 '10 at 17:56
  • Actually, the same problem exists in C++. It's the non-movability of the heap that causes the problem. The solution used by e.g. `malloc` is that all allocations are done to the closest power of two - so instead of allocating 30 MiB, you allocate *exactly* 32 MiB. This means that even though the heap gets fragmented over time, it never wastes more than half of the used memory. This approach works just as fine in .NET, though I've found it easier to use native memory for allocation patterns like this anyway. Also, `GC.Collect` can compact the LOH now, yay! :) – Luaan Nov 10 '16 at 10:05

12 Answers


Here are some articles detailing problems with the Large Object Heap. It sounds like this might be what you are running into.

http://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception

Dangers of the large object heap:
http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/

Here is a link on how to collect data on the Large Object Heap (LOH):
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx

According to this, it seems there is no way to compact the LOH. I can't find anything newer that explicitly says how to do it, and so it seems that it hasn't changed in the 2.0 runtime:
http://blogs.msdn.com/maoni/archive/2006/04/18/large-object-heap.aspx

The simple way of handling the issue is to make small objects if at all possible. Your other option is to create only a few large objects and reuse them over and over. Not an ideal situation, but it might be better than rewriting the object structure. Since you did say that the created objects (arrays) are of different sizes, it might be difficult, but it could keep the application from crashing.
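
For illustration, the reuse idea could look something like this (just a sketch; the class and method names are made up, and you would still track the current image's dimensions separately):

using System;

public sealed class ReusableImageBuffer
{
    private readonly ushort[] _pixels;

    public ReusableImageBuffer(int maxRows, int maxColumns)
    {
        // One up-front allocation sized for the largest image; it lands on the LOH once and stays put.
        _pixels = new ushort[maxRows * maxColumns];
    }

    // Hand back the same backing array for any image that fits, instead of allocating a new 32 MB array.
    public ushort[] Acquire(int rows, int columns)
    {
        int needed = rows * columns;
        if (needed > _pixels.Length)
            throw new ArgumentOutOfRangeException("rows", "Image is larger than the pre-allocated buffer.");
        Array.Clear(_pixels, 0, needed); // wipe only the region about to be used
        return _pixels;
    }
}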

kemiller2002
  • Its thing is to crash. So, this is a wrong answer. Edit to your edit: In theory, it allows for 2 GB allocations. In reality, not even close. – mmr May 18 '10 at 20:34
  • @mmr: Kevin did say that it wasn't a great idea, so let's be a bit kinder. For that matter, it doesn't cause a crash. – Steven Sudit May 18 '10 at 20:36
  • @Steven Sudit-- it absolutely causes a crash, with an out of memory exception. This happens after somewhere between 3 and 30 image processing calls. That's why I think that there's a fragmentation issue happening or the like. I realize I was a bit harsh before, but his answer is still wrong. The .NET gospel seems to be to just let the GC work, but in this case, it doesn't. I've been in previous discussions (http://stackoverflow.com/questions/2714811/is-there-a-common-practice-how-to-make-freeing-memory-for-garbage-collector-easie/2714841#2714841) where people don't read all the caveats. – mmr May 18 '10 at 20:40
  • @Kevin-- I've removed my downvote. Those are useful links, reading them now. – mmr May 18 '10 at 20:43
  • @mmr sorry about that, I posted while I kept researching the problem without thinking it through. – kemiller2002 May 18 '10 at 20:44
  • @mmr: Does GC.Collect cause a crash or do you crash from running out of memory? – Steven Sudit May 18 '10 at 20:51
  • You can also use your own pooling strategy so that the lack of compaction isn't a fatal problem. – Mark Simpson May 18 '10 at 20:54
  • @Steven Sudit-- crash from running out of memory. I'm currently hesitant to say that this is the right answer, although I'm sadly beginning to think that it is. If so, I will have to pursue a very... interesting strategy for memory management. No two images are the same size, so @Mark Simpson's strategy will probably be the right one, but a non-trivial implementation. – mmr May 18 '10 at 20:56
  • @mmr: It's not crazy to use a single array and ignore the parts that "hang off" to either side. Having said that, I suspect that the right answer is to figure out why it's apparently leaking. If you take a look at http://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception, it says that .NET 4.0's runtime fixes the LOH issue, so that's another option for you. – Steven Sudit May 18 '10 at 20:59
  • The first thing you should do is bust out the CLR profiler and check the state of your LOH. I'd bet the farm that it's got huge holes in it. – Mark Simpson May 18 '10 at 21:14
  • @Mark Simpson-- I'd bet that that's true. The question is, is there a way to force the LOH to compress? Because, from all these links, it doesn't look like it. – mmr May 18 '10 at 21:18
  • I don't think there is any way to compact it if you're using 3.5 or earlier. – Mark Simpson May 18 '10 at 21:21
  • I can't find any way of doing so, and I remember a presentation I went to a while back where they had a similar problem. He had to resort to using the same large object over again, clearing out the data and re-populating it. – kemiller2002 May 18 '10 at 22:45

Start by narrowing down where the problem lies. If you have a native memory leak, poking the GC is not going to do anything for you.

Run up perfmon and look at the .NET heap size and Private Bytes counters. If the heap size remains fairly constant but private bytes is growing then you've got a native code issue and you'll need to break out the C++ tools to debug it.

Assuming the problem is with the .NET heap, you should run a profiler against the code, like Red Gate's ANTS profiler or JetBrains' dotTrace. This will tell you which objects are taking up the space and not being collected quickly. You can also use WinDbg with SOS for this, but it's a fiddly interface (powerful though).

Once you've found the offending items it should be more obvious how to deal with them. Some of the sort of things that cause problems are static fields referencing objects, event handlers not being unregistered, objects living long enough to get into Gen2 but then dying shortly after, etc etc. Without a profile of the memory heap you won't be able to pinpoint the answer.

Whatever you do though, "liberally sprinkling" GC.Collect calls is almost always the wrong way to try and solve the problem.

There is an outside chance that switching to the server version of the GC would improve things (just a property in the config file) - the default workstation version is geared towards keeping a UI responsive, so it will effectively give up on large, long-running collections.
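
If you do try the server GC, a quick way to confirm which collector the process actually ended up with is something like this (a sketch; GCSettings.LatencyMode arrived in a later framework version than IsServerGC, so drop that line on older runtimes):

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // Server GC is switched on in app.config via <runtime><gcServer enabled="true"/></runtime>;
        // this only reports which collector the process got at startup.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}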

Paolo
  • +1 -- "'liberally sprinkling' `GC.Collect()` calls is almost always the wrong way to try and solve the problem." – Nate May 18 '10 at 20:51
  • +1 Excellent answer. Perfmon should be the start for finding if managed/unmanaged is the problem child. – Chris O May 18 '10 at 21:20
  • @Nate Bross, @Paolo-- see my answer to @dthorpe below. It turns out that because of the way .NET handles 16 bit images (it doesn't), I need to use GC.Collect explicitly there. That's what led me to the GC.Collect 'liberal sprinkling'. It was just a hope that it would solve the larger problem. – mmr May 18 '10 at 21:38

Use Process Explorer (from Sysinternals) to see what the Large Object Heap for your application is. Your best bet is going to be making your arrays smaller but having more of them. If you can avoid allocating your objects on the LOH then you won't get the OutOfMemoryExceptions and you won't have to call GC.Collect manually either.

The LOH doesn't get compacted and only allocates new objects at the end of it, meaning that you can run out of space quite quickly.

Matthew Steeples

If you're allocating a large amount of memory in an unmanaged library (i.e. memory that the GC isn't aware of), then you can make the GC aware of it with the GC.AddMemoryPressure method.

Of course this depends somewhat on what the unmanaged code is doing. You haven't specifically stated that it's allocating memory, but I get the impression that it is. If so, then this is exactly what that method was designed for. Then again, if the unmanaged library is allocating a lot of memory then it's also possible that it's fragmenting the memory, which is completely beyond the GC's control even with AddMemoryPressure. Hopefully that's not the case; if it is, you'll probably have to refactor the library or change the way in which it's used.

P.S. Don't forget to call GC.RemoveMemoryPressure when you finally free the unmanaged memory.

(P.P.S. Some of the other answers are probably right, this is a lot more likely to simply be a memory leak in your code; especially if it's image processing, I'd wager that you're not correctly disposing of your IDisposable instances. But just in case those answers don't lead you anywhere, this is another route you could take.)
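
For illustration, the pattern looks roughly like this. It's only a sketch: the unmanaged allocation is simulated with Marshal.AllocHGlobal so the example stands alone, whereas in your case the native library owns the memory and you'd only be adding the pressure calls around it:

using System;
using System.Runtime.InteropServices;

public sealed class NativeImageBuffer : IDisposable
{
    private IntPtr _buffer;
    private readonly long _bytes;

    public NativeImageBuffer(int rows, int columns)
    {
        _bytes = (long)rows * columns * sizeof(ushort);
        _buffer = Marshal.AllocHGlobal(new IntPtr(_bytes)); // stand-in for the real native allocation
        GC.AddMemoryPressure(_bytes);                       // tell the GC ~32 MB now lives off the managed heap
    }

    public IntPtr Pixels { get { return _buffer; } }

    public void Dispose()
    {
        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);
            GC.RemoveMemoryPressure(_bytes); // always balanced with the AddMemoryPressure above
            _buffer = IntPtr.Zero;
        }
        GC.SuppressFinalize(this);
    }

    ~NativeImageBuffer()
    {
        Dispose(); // safety net in case Dispose was never called
    }
}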

Aaronaught
  • @Aaronaught-- thanks for the links, looking at them now. Unfortunately, I cannot use the .NET native image libraries, because they, by design, do not allow for the use of 16 bit images, and that's pretty much all that's used in medical imaging. I'm forced to use outside libraries to load images into memory. While it would be nice to look at IDisposable, the fact that I'm using WPF and the fact that I can't use the Image class except to load the 16 bit image compressed to 8 bits for viewing means that the problem cannot be there. – mmr May 18 '10 at 21:24
  • I should also add, the native libraries do not return any of the memory that they allocate; they create and then destroy the memory, typically as large std::vector instances. – mmr May 18 '10 at 21:25
  • @mmr: That's fine, in fact it's especially important if the unmanaged code is creating massive `std::vector` instances. You `AddMemoryPressure` when the `std::vector` instances are (expected to be) created, and `RemoveMemoryPressure` when they are (expected to be) destroyed. Although, again, this will only get you so far if the unmanaged library is actually causing fragmentation; the net result is just that the GC will collect sooner ("more aggressively", as you put it). – Aaronaught May 18 '10 at 21:35
  • @Aaronaught-- Currently, when I call a routine, the memory is allocated and then deallocated before the routine returns. The library is purely procedural, no objects are exposed to the outside nor is any memory persisted from one call to the next. Would this approach still help? Those man pages don't have that kind of detail. Is that vector allocation still on the .NET heap? I suppose it would have to be... – mmr May 18 '10 at 21:46
  • @mmr: Maybe it won't help, then. Unmanaged memory will get allocated on the unmanaged heap; managed memory gets allocated on the managed heap. The memory pressure methods are mainly for memory that persists in some way. If the unmanaged methods never hold on to any memory for longer than a single method call, then in all likelihood you have a leak somewhere, or you're experiencing the specific bug in @Kevin's link. – Aaronaught May 18 '10 at 21:56

Just an aside: The .NET garbage collector performs a "quick" GC when a function returns to its caller. This makes the local vars declared in the function eligible for collection.

If you structure your code such that you have one large function that allocates large blocks over and over in a loop, assigning each new block to the same local var, the GC may not kick in to reclaim the unreferenced blocks for some time.

If on the other hand, you structure your code such that you have an outer function with a loop that calls an inner function, and the memory is allocated and assigned to a local var in that inner function, the GC should kick in immediately when the inner function returns to the caller and reclaim the large memory block that was just allocated, because it's a local var in a function that is returning.

Avoid the temptation to mess with GC.Collect explicitly.
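
For illustration, the second structure looks roughly like this (the names are made up; the point is only that the big buffer is a local of the inner function):

class AllocationScopeSketch
{
    static void Run()
    {
        for (int i = 0; i < 30; i++)
        {
            ProcessOneImage(4000 - i, 4000 - i); // nothing from the previous iteration is still referenced here
        }
    }

    static void ProcessOneImage(int rows, int columns)
    {
        ushort[] pixels = new ushort[rows * columns]; // local to this call only
        // ... do the image work here ...
    } // 'pixels' is unreferenced once this returns, so it is eligible for collection
}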

dthorpe
  • I've definitely hit your first scenario. It works like this: 1) .NET will not handle 16 bit images natively. I have to use another library, but it will not display properly. 2) To display these images, I have to allocate an 8 bit image with the same number of elements with the dynamic range set per a user moving a box around on the screen. The DR is then set by the min/max of that box and a lookup table. That 8 bit image is allocated once per image. 3) Moving the box around on the image causes a crash after a second. For this, I must call GC.Collect explicitly; that solved it. – mmr May 18 '10 at 21:34
  • Why are you using an 8 bit image? 8 bits per pixel, or 8 bits per color channel? 8 bits per pixel will force/require use of a color palette which is tedious and time consuming and will allocate additional memory (for the color table) behind the scenes. – dthorpe May 18 '10 at 21:44
  • medical images are black and white. All color medical images are false colored (with the exception of things like optometry or dermatology) because the signals that are detected are beyond visible range. As such, I only show black and white. The remapping to 8 bit space from 16 requires that I have a byte array of the same dimensions as the ushort, and that that byte array get refilled with a native routine every time the user changes the mapping. – mmr May 19 '10 at 01:13
  • @mmr: Ah, 16 bits per color channel, 1 color channel. Got it. – dthorpe May 19 '10 at 17:22

Apart from handling the allocations in a more GC-friendly way (e.g. reusing arrays etc.), there's a new option now: you can manually cause compaction of the LOH.

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;

This will cause a LOH compaction the next time a gen-2 collection happens (either on its own, or by your explicit call of GC.Collect).

Do note that not compacting the LOH is usually a good idea - it's just that your scenario is a decent enough case for allowing for manual compaction. The LOH is usually used for huge, long-living objects - like pre-allocated buffers that are reused over time etc.

If your .NET version doesn't support this yet, you can also try to allocate in sizes of powers of two, rather than allocating precisely the amount of memory you need. This is what a lot of native allocators do to ensure memory fragmentation doesn't get impossibly stupid (it basically puts an upper limit on the maximum heap fragmentation). It's annoying, but if you can limit the code that handles this to a small portion of your code, it's a decent workaround.
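
A sketch of that rounding, in case it helps (the helper names are made up):

static class PowerOfTwoSizing
{
    // Always allocate the next power of two, so the heap only ever sees a handful
    // of distinct block sizes and fragmentation stays bounded.
    public static ushort[] AllocatePixels(int rows, int columns)
    {
        return new ushort[RoundUpToPowerOfTwo(rows * columns)];
    }

    private static int RoundUpToPowerOfTwo(int value)
    {
        int result = 1;
        while (result < value)
            result <<= 1; // e.g. a 3970x3970 image (15,760,900 elements) gets a 16,777,216-element array
        return result;
    }
}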

Do note that you still have to make sure it's actually possible to compact the heap - any pinned memory will prevent compaction in the heap it lives in.

Another useful option is to use paging - never allocating more than, say, 64 kiB of contiguous space on the heap; this means you'll avoid using the LOH entirely. It's not too hard to manage this in a simple "array-wrapper" in your case. The key is to maintain a good balance between performance requirements and reasonable abstraction.

And of course, as a last resort, you can always use unsafe code. This gives you a lot of flexibility in handling memory allocations (though it's a bit more painful than using e.g. C++) - including allowing you to explicitly allocate unmanaged memory, do your work with that and release the memory manually. Again, this only makes sense if you can isolate this code to a small portion of your total codebase - and make sure you've got a safe managed wrapper for the memory, including the appropriate finalizer (to maintain some decent level of memory safety). It's not too hard in C#, though if you find yourself doing this too often, it might be a good idea to use C++/CLI for those parts of the code, and call them from your C# code.

Luaan

Have you tested for memory leaks? I've been using .NET Memory Profiler with quite a bit of success on a project that had a number of very subtle and annoyingly persistent (pun intended) memory leaks.

Just as a sanity check, ensure that you're calling Dispose on any objects that implement IDisposable.
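
For example (the file name is purely illustrative):

// 'using' guarantees Dispose runs even if an exception is thrown partway through.
using (var stream = new System.IO.FileStream("slice0001.raw", System.IO.FileMode.Open))
{
    // ... read pixel data ...
} // stream.Dispose() is called here automatically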

Bob Kaufman

You could implement your own array class which breaks the memory into non-contiguous blocks. Say, have a 64 by 64 array of [64,64] ushort arrays which are allocated and deallocated separately. Then just map to the right one. Location 66,66 would be at location [2,2] in the array at [1,1].

Then, you should be able to dodge the Large Object Heap.
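
Something like this sketch, for instance (bounds checking omitted; each [64,64] ushort tile is 8 KB, well under the 85k limit, and the array of tile references stays small too):

public sealed class TiledUShortArray
{
    private const int TileSize = 64;
    private readonly ushort[][,] tiles; // one small [64,64] block per tile
    private readonly int tileColumns;

    public TiledUShortArray(int rows, int columns)
    {
        int tileRows = (rows + TileSize - 1) / TileSize;
        tileColumns = (columns + TileSize - 1) / TileSize;
        tiles = new ushort[tileRows * tileColumns][,];
        for (int i = 0; i < tiles.Length; i++)
            tiles[i] = new ushort[TileSize, TileSize];
    }

    public ushort this[int row, int column]
    {
        // Location (66, 66) maps to tile (1, 1), offset [2, 2], as described above.
        get { return tiles[(row / TileSize) * tileColumns + (column / TileSize)][row % TileSize, column % TileSize]; }
        set { tiles[(row / TileSize) * tileColumns + (column / TileSize)][row % TileSize, column % TileSize] = value; }
    }
}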

quillbreaker

The best way to do it is shown in this article. It's in Spanish, but you can surely follow the code: http://www.nerdcoder.com/c-net-forzar-liberacion-de-memoria-de-nuestras-aplicaciones/

Here is the code, in case the link gets broken:

using System;
using System.Runtime.InteropServices;

public class anyname
{
    [DllImport("kernel32.dll", EntryPoint = "SetProcessWorkingSetSize", ExactSpelling = true, CharSet = CharSet.Ansi, SetLastError = true)]
    private static extern int SetProcessWorkingSetSize(IntPtr process, int minimumWorkingSetSize, int maximumWorkingSetSize);

    // Forces a full collection, then asks Windows to trim this process's working set.
    public static void alzheimer()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        SetProcessWorkingSetSize(System.Diagnostics.Process.GetCurrentProcess().Handle, -1, -1);
    }
}

You call alzheimer() to clean up and release memory.

akjoshi

You can use this method:

public static void FlushMemory()
{
    // Requires: using System; using System.Diagnostics;
    Process prs = Process.GetCurrentProcess();
    prs.MinWorkingSet = (IntPtr)300000; // setting the minimum working set causes Windows to trim the process's working set
}

There are three ways to use this method:

1 - after disposing of a managed object (a class instance, etc.);

2 - from a timer with an interval of, say, 2000 ms;

3 - from a thread created to call this method.

I suggest using this method from a thread or a timer.

Bryan Boettcher
  • Ugh, do NOT catch an Exception just to throw an empty base Exception. You have removed the entire stack trace and any kind of exception information (was it a InvalidOperationException, IOException, SocketException?). In this case just don't try-catch it. – Bryan Boettcher Feb 13 '14 at 16:15

The problem is most likely due to the number of these large objects you have in memory. Fragmentation would be a more likely issue if they were of variable sizes (though it could still be an issue). You stated in the comments that you are storing an undo stack in memory for the image files. If you move this to disk you would save yourself tons of application memory space.

Also, moving the undo stack to disk should not cause too much of a negative impact on performance, because it's not something you will be using all of the time. (If it does become a bottleneck you can always create a hybrid disk/memory cache system.)

Extended...

If you are truly concerned about the possible impact of performance caused by storing undo data on the file system, you may consider that the virtual memory system has a good chance of paging this data to your virtual page file anyway. If you create your own page file/swap space for these undo files, you will have the advantage of being able to control when and where the disk I/O is called. Don't forget, even though we all wish our computers had infinite resources they are very limited.

1.5 GB (usable application memory space) / 32 MB (large memory request size) ~= 46
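
A rough sketch of the disk-backed idea (file naming and layout are made up; error handling omitted):

using System;
using System.Collections.Generic;
using System.IO;

public sealed class DiskUndoStack
{
    private readonly Stack<string> files = new Stack<string>();

    public void Push(ushort[] pixels)
    {
        string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N") + ".undo");
        byte[] chunk = new byte[64 * 1024]; // small scratch buffer, never touches the LOH
        using (FileStream fs = File.Create(path))
        {
            int byteLength = pixels.Length * sizeof(ushort);
            for (int offset = 0; offset < byteLength; offset += chunk.Length)
            {
                int count = Math.Min(chunk.Length, byteLength - offset);
                Buffer.BlockCopy(pixels, offset, chunk, 0, count); // BlockCopy offsets are in bytes
                fs.Write(chunk, 0, count);
            }
        }
        files.Push(path);
    }

    public ushort[] Pop()
    {
        string path = files.Pop();
        byte[] chunk = new byte[64 * 1024];
        ushort[] pixels;
        using (FileStream fs = File.OpenRead(path))
        {
            pixels = new ushort[fs.Length / sizeof(ushort)]; // the one unavoidable large allocation
            int offset = 0, read;
            while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
            {
                Buffer.BlockCopy(chunk, 0, pixels, offset, read);
                offset += read;
            }
        }
        File.Delete(path);
        return pixels;
    }
}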

Matthew Whited
  • It actually is something we use all the time. This application is used to test different processing algorithms; I'll often run an algorithm, undo it, rerun it with different parameters, etc., just to see different results. – mmr May 18 '10 at 21:27
  • You will probably be limited to less than 50 of these `ushort` arrays and their related undo stacks. You could look at a hybrid memory/disk paging system as I have suggested. It should be fairly easy to implement using .Net, but sorry, I don't have time to draft up an example. – Matthew Whited May 19 '10 at 03:31

The GC doesn't take into account the unmanaged heap. If you are creating lots of objects that are merely C# wrappers around larger chunks of unmanaged memory, then your memory is being devoured but the GC can't make rational decisions based on this, as it only sees the managed heap.

You end up in a situation where the GC doesn't think you are short of memory, because most of the things on your gen 1 heap are 8-byte references, when in actual fact they are like icebergs at sea. Most of the memory is below!

You can make use of these GC calls:

  • GC.AddMemoryPressure(sizeOfField);
  • GC.RemoveMemoryPressure(sizeOfField);

These methods allow the GC to see the unmanaged memory (if you provide it the right figures).

Stef Geysels