Due to seemingly premature OutOfMemoryExceptions, we have been closely examining the memory usage of various .NET constructs, particularly large objects that tend to fragment the Large Object Heap and trigger those exceptions early. One area that has been a bit surprising is the .NET Image classes: Bitmap and Metafile.
Here is what we think we have learned. We have been unable to find Microsoft documentation to verify it, so we would appreciate any confirmation others can give:
(1) When you create a Bitmap object from a compressed raster file (JPG, PNG, GIF, etc.), it consumes memory for a fully uncompressed pixel array at the full resolution of that file. So, for example, a 5MB JPG that is 9000x3000 pixels would be expanded into 9000x3000x3 bytes (assuming 24-bit color, no alpha), or 81MB of memory consumed. Correct?
(1a) There is some evidence (see 2b below) that it ALSO retains the original compressed data... so, actually 86MB in this case. But that is unclear... does anyone know?
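The figures above can be sanity-checked with simple arithmetic (no GDI+ involved); the 5MB file size is the example value from (1), not a measured number:

```csharp
// Worked check of the sizes quoted in (1) and (1a); pure arithmetic, no GDI+ calls.
long width = 9000, height = 3000;
long bytesPerPixel = 3;                              // 24-bit color, no alpha channel
long uncompressed = width * height * bytesPerPixel;  // fully expanded pixel array
long compressedFile = 5_000_000;                     // the 5MB JPG from the example

System.Console.WriteLine(uncompressed);                  // 81000000 bytes, i.e. ~81MB
System.Console.WriteLine(uncompressed + compressedFile); // 86000000 if (1a) is true
```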
(2) When you create a Metafile object and then draw a raster file (JPG, PNG, GIF, etc.) into it, the Metafile only consumes memory for the compressed data. So, if you draw a 5MB JPG that is 9000x3000 pixels into a Metafile, it will only consume roughly 5MB of memory. Correct?
(2a) The only way we have found to draw a raster file into a Metafile object is to load the file into a Bitmap and then draw that Bitmap into the Metafile. Is there a better way that doesn't involve temporarily loading the huge uncompressed Bitmap data (and causing the associated memory fragmentation)?
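For reference, this is the pattern we mean, as a hedged sketch: the file names are placeholders, and EmfPlusOnly is just one of several EmfType choices.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

class DrawIntoMetafile
{
    static void Main()
    {
        // Step 1: load the raster file. This is where the huge uncompressed
        // pixel array from (1) is allocated -- exactly what we'd like to avoid.
        using (Bitmap bmp = new Bitmap("large-photo.jpg"))        // placeholder path
        using (Graphics refGfx = Graphics.FromHwnd(IntPtr.Zero))  // reference device context
        {
            IntPtr hdc = refGfx.GetHdc();
            try
            {
                // Step 2: record the bitmap into an EMF+ metafile on disk.
                using (Metafile mf = new Metafile("out.emf", hdc, EmfType.EmfPlusOnly))
                using (Graphics g = Graphics.FromImage(mf))
                {
                    g.DrawImage(bmp, 0, 0, bmp.Width, bmp.Height);
                }
            }
            finally
            {
                refGfx.ReleaseHdc(hdc);
            }
        }
    }
}
```

The Bitmap is disposed as soon as the draw completes, but its full uncompressed allocation still existed for the duration, which is the fragmentation concern.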
(2b) When you draw a Bitmap into a Metafile, the Metafile stores a compressed representation similar in size to the original compressed file. Does it achieve that by keeping the original compressed file inside the Bitmap? Or by re-compressing the expanded Bitmap using the original compression settings?
(3) We originally assumed that large (>85,000-byte) Image objects would be placed on the Large Object Heap. In fact, that seems NOT to be the case. Rather, each Bitmap and each Metafile is a 24-byte object on the Small Object Heap that refers to a block of native memory containing the real data. Correct?
(3a) We assume such native memory is like the Large Object Heap in that it cannot be compacted: once the big object is allocated in native memory, it will never be moved, and thus fragmentation of native memory can cause as many problems as fragmentation of the Large Object Heap. True? Or is there special handling of the underlying Bitmap / Metafile data that is more efficient?
(3b) So, there seem to be four independent blocks of memory that are managed separately, and exhausting any of them can produce the same OutOfMemoryException: the Small Object Heap (managed objects under 85,000 bytes, compacted by the GC), the Large Object Heap (managed objects of 85,000 bytes or more, collected by the GC but not compacted), native memory (unmanaged allocations, presumably not compacted), and the Desktop Heap (where window handles and similarly limited resources are managed). Have I described those four properly? Are there others we should be aware of?
Any clarity that anybody can provide on the above would be greatly appreciated. If there is a good book or article that fully explains the above, please let me know. (I am happy to do the required reading; but the vast majority of books don't get that deep, and thus don't tell me anything I don't already know.)
Thanks!