
I read about memory limit

I have an application that works with huge images which need to be streamed, like single frames in video processing. The application has about 40 plugins, each of which can contain database access, image processing, and a WPF GUI.

The application also has 2 plugins which use the older .NET WinForms.

All works well until the application goes over about 1.2 GB of RAM. Then, at unpredictable locations in the plugins where new memory is allocated, I receive an "Out of memory" exception.

I am working on a 64-bit system, with the application compiled as 32-bit. I have no more ideas about what to do or how to track down the fault.

Is there a limit, or can I catch these exceptions?

Nasenbaer
  • so how many users does it take for you to hit the limit? – Limey Jun 06 '12 at 17:15
  • I don't understand the question. Images are 1500×1500 pixels and all works well. With the same code and the same procedures, if the source image resolution of the video stream is 3500×3500, then the application crashes with a memory exception. But the computer has more than 3 GB of RAM available and free. – Nasenbaer Jun 08 '12 at 12:54
  • 1
    I would go for a _memory_ profiler (i.e. not a _performance_ profiler) like [ANTS Memory Profiler](http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/). – Uwe Keim Jun 08 '12 at 12:59
  • Are you hitting that 1.2 GB limit with 2 users or 2000? With so many plugins (40 seems insane to me) the issue may not be your memory limit, but the need to redesign your app to use fewer plugins (and therefore less memory on the server). – Limey Jun 08 '12 at 13:02
  • @Limey: this is a WPF-tagged question; I would assume it's a single user and not a server-based app. – Dan Puzey Jun 08 '12 at 13:10
  • @Puzey: Wow, so you think it's 1.2 GB on the user's local machine? – Limey Jun 08 '12 at 14:13
  • 1
    You can't get around the fact that the OS limits the size of a process to about 2 GB. Even Visual Studio 2010 (which is only released as 32 bit) suffers from this issue. I don't think you are going to be able to up your limit without switching to 64 bit. You have two solutions - find a way to optimize and compress (who needs to stream a 3500x3500 bitmap?) or delegate tasks to new processes (NOT threads, as threads run inside the parent process). – JDB Jun 19 '12 at 16:59
  • Hi Cyborgx. Optimization of the code is not an option here. But you confirm there is a limit of 1.2 GB of RAM usage? – Nasenbaer Jun 20 '12 at 08:23
  • @Nasenbaer: Are you sure that there aren't any ways you can redesign your code to minimize the amount of memory required at any particular point? What is the specific reason that you need to have so much memory available at one time? Do you need to have all these bitmaps in memory at the same time? – nicodemus13 Jun 20 '12 at 10:19
  • @nicodemus13 The application will use a lot of third-party modules. I need to know if there is a limit. The question at this point is not about code optimization. Yes, all modules have the image collection available in memory (in some cases as a copy) to modify the images in sequence or in parallel. – Nasenbaer Jun 20 '12 at 15:28
  • 1
    While I know I am in good company, increasing the address space (as is common with SQL Server) may just mask the problem until you work on larger files. Can you setup AdPlus and take a memory dump, load it into WinDbg (set up your symbol paths) and !Analyze it. Here is a [WinDbg tutorial](http://www.codeproject.com/Articles/6084/Windows-Debuggers-Part-1-A-WinDbg-Tutorial) and [one of the best blogs on the topic](http://blogs.msdn.com/b/tess/) - I'm suggesting to confirm you are reaching the limits and the fault isn't a result of a memory leak. – Jeremy Thompson Jun 24 '12 at 05:57

3 Answers


It is very difficult for a 32-bit program to consume all of the available virtual memory space. You'll hit the wall well below 2 gigabytes; what you run out of first is a chunk of virtual memory large enough to fit the requested size. You can only get close to the 2 GB limit by making small allocations, small enough to fit in the holes.

That wall hits early in a program that manipulates bitmaps. They consume a big chunk of VM to store the bitmap pixels, and it needs to be a contiguous allocation, since the pixels are stored in an array, not a tree. It is an unmanaged memory allocation, so typical .NET memory profilers tend to be rather helpless at showing you the problem.
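For scale (my arithmetic, using the 3500×3500 frame size mentioned in the comments), a single uncompressed frame at 32 bits per pixel already needs one contiguous block of almost 50 MB:

```csharp
using System;

class FrameSize
{
    static void Main()
    {
        // 3500 x 3500 pixels, 4 bytes each (32 bpp) -- one contiguous block.
        long bytes = 3500L * 3500 * 4;   // 49,000,000 bytes, ~46.7 MB
        Console.WriteLine("{0:N0} bytes per frame (~{1:F1} MB)",
                          bytes, bytes / (1024.0 * 1024.0));
    }
}
```

A handful of such frames, plus the copies some plugins keep, quickly exhausts the largest free regions in a fragmented 2 GB address space, which fits a failure at around 1.2 GB.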

There isn't anything reasonable you can do about address space fragmentation; the notion that consuming all available VM should be possible is simply wrong. You can get more breathing space on a 64-bit operating system by running editbin.exe in a post-build event with its /LARGEADDRESSAWARE command-line option. That allows the process to use the full 4 gigabytes of VM, an option that's specific to the 64-bit version of Windows and possible because Windows doesn't need the upper 2 GB there. And of course, changing the platform target to AnyCPU is a quick and easy way to get gobs of virtual memory.
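A minimal sketch of that post-build step (assuming editbin.exe from the VC++ tools is reachable via vsvars32.bat; the VS100COMNTOOLS variable is the VS 2010 name and may differ for your version):

```shell
REM Visual Studio post-build event: flag the 32-bit EXE as large-address-aware
REM so it can use 4 GB of address space on 64-bit Windows.
call "%VS100COMNTOOLS%vsvars32.bat"
editbin /LARGEADDRESSAWARE "$(TargetPath)"
```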

Hans Passant

A 32-bit application running on Windows (even if the OS is 64-bit) has a 4 GB address space, but this is split into 2 GB for the application and 2 GB for the system (on 32-bit Windows this can be changed to a 3/1 split with the /3GB boot switch).

It is quite likely that the total memory you are using is actually 2 GB rather than 1.2 GB. How are you determining the 1.2 GB figure? Have you looked at the application using the Process Explorer tool?

If you change your application to AnyCPU or 64-bit, you should find that this limitation disappears (well, moves to a massively larger value) on a 64-bit OS.
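As a quick sanity check (a minimal sketch, not from the answer), the process can report its own bitness at runtime, so you can confirm which address-space limit actually applies:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one.
        Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}
```

Note that `Environment.Is64BitProcess` requires .NET 4 or later; on older frameworks, `IntPtr.Size` alone tells the same story.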

Bob Vale
  • 7
    Add some info. 2GB is for unmanaged apps. For managed apps OOM occurs around 1.2-1.5GB roughly because of GC. changing to 64 bit may resolved the problem, but may be impossible if any of the dependencies is 32 bit only. – Lex Li Jun 08 '12 at 13:38

To become more deterministic, you should write some integration tests to check where your memory ends up. You can do this with WMemoryProfiler. I would first load the images at 1500×1500 size, clean everything up, and then mark all objects as known. Then I would reload the big images and check which new objects were allocated, and have a sharp look at how many of them there are and who owns them.

You say that many external modules are used. Perhaps you should drop some of them due to unwise usage of memory and replace them with something better. Now you can check.

If you are reaching the limit, you can still unload some images and load them on demand, provided you and your plugins support lazy structures such as IEnumerable&lt;Image&gt;, where you as the provider decide when to load an image and how long to keep it in a cache before releasing the reference to free up memory.

[Test]
public void InstanceTracking()
{
   using (var dumper = new MemoryDumper())  // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
   {
      TestWith1500x1500();
      dumper.MarkCurrentObjects();
      TestWith3000x3000();
      ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                               .ToLookup( x => x.GetType() );

      // here we find out which objects are holding most of the memory
      MemoryStatistics stats = dumper.GetMemoryStatistics();
      foreach (var typeInfo in stats.ManagedHeapStats
                                    .OrderByDescending(x => x.Value.TotalSize))
      {
          Console.WriteLine("Type {0} has {1} instances of total size {2:N0} bytes",
                            typeInfo.Key,
                            typeInfo.Value.Count,
                            typeInfo.Value.TotalSize);
      }

      // then check with the info from above who is holding the most interesting new objects. 
      Console.WriteLine("New Strings:"); // just an example perhaps you should have a look at the images.
      foreach (var newStr in newObjects[typeof(string)] )
      {
          Console.WriteLine("Str: {0}", newStr);
      }
   }
}
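The lazy IEnumerable&lt;Image&gt; idea above can be sketched generically (the names here are mine, and the loader delegate stands in for your actual image decoding):

```csharp
using System;
using System.Collections.Generic;

static class LazyFrames
{
    // T is the decoded frame type (e.g. a Bitmap in the question's scenario);
    // the loader delegate decides when a frame is actually materialized.
    public static IEnumerable<T> Load<T>(IEnumerable<string> paths,
                                         Func<string, T> loader)
        where T : IDisposable
    {
        foreach (var path in paths)
        {
            T frame = loader(path);
            try
            {
                yield return frame;   // caller processes one frame at a time
            }
            finally
            {
                frame.Dispose();      // release the frame before loading the next
            }
        }
    }
}
```

A plugin iterating this sequence only ever roots one decoded frame at a time, instead of keeping the whole collection (and copies of it) alive in the 2 GB address space.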
Alois Kraus