
I've been using LodePNG's lodepng_encode24_file to encode some 24-bit RGB image files, and it has worked wonderfully so far. However, I noticed that it appears to crash when I feed it a dataset larger than 15360*15360 pixels in size (14336*14336 pixel images get encoded fine).

A minimal example of this behavior for the 32-bit case (where the maximum size before crashing is slightly lower) can be obtained by simply replacing the line

  unsigned width = 512, height = 512;

with

  unsigned width = 1024*14, height = 1024*14;

in LodePNG's example_encode.c file, and executing it.

I previously had issues with C code crashing because I was allocating large arrays to stack memory (whose maximum size is generally somewhere around 2MB) instead of heap memory, so as a new user of C, my first instinct was to see if there is an upper limit on heap memory size.

However, according to this answer, there is no limit on heap memory, so something else must be going wrong.

My second guess was that the crash was due to an inherent limitation on the maximum image dimensions supported by the PNG format itself. However, according to this answer and the comment below it, the maximum image size supported by PNG is on the order of 4,000,000,000 * 4,000,000,000 pixels, so this is also not the culprit.

Does anyone have a guess as to what might be going wrong? Is anyone else able to reproduce this error when they try it?

EDIT: As far as RAM consumption is concerned, I have 8GB RAM, and subtracting the hardware-reserved, in-use, modified and standby memory (terms used by Windows Resource Monitor utility) I have about 4GB RAM free when doing the computation. For a 32-bit image of 15000*15000 size, less than 1GB would be needed. Likewise, when I successfully encode 14000*14000 24-bit images, my free RAM never drops below 3GB at any point of the encoding process, so I don't think RAM running out is the problem.

DumpsterDoofus
  • Of course there's a limit to heap memory, several in fact. First there's the limit of what your system can handle; for example, a 32-bit system can only address up to 4GB. Then there's the actual amount of physical and swap space available to the system, less the memory needed for the operating system and all other processes running on your system. And finally, there's a limit in that when you allocate memory, all of it needs to be one big contiguous block, and if no such big block is available then the allocation fails. – Some programmer dude Aug 25 '14 at 16:35
  • Depending on how you generate your image, you might want to encode it line per line - so that you only need to have one row in memory. PNG format is straightforward for this (leaving aside interlaced PNG) and libpng allows that mode or writing (and reading), see eg [png_write_row()](http://refspecs.linuxfoundation.org/LSB_3.1.1/LSB-Desktop-generic/LSB-Desktop-generic/libpng12.png.write.row.1.html). (I do the same in my Java lib [PNGJ](https://code.google.com/p/pngj/)). I don't know about LodePNG. – leonbloy Aug 25 '14 at 16:50
  • @JoachimPileborg: Thanks for the reply. In the case of my system, I have 8GB RAM, and subtracting the hardware-reserved, in-use, modified and standby memory (terms used by Windows Resource Monitor utility) I have about 4GB RAM free when doing the computation. For a 32-bit image of 15000*15000 size, less than 1GB would be needed. Likewise, when I successfully encode 14000*14000 24-bit images, my free RAM never drops below 3GB at any point of the encoding process, so I don't think RAM running out is the problem. I don't know anything about the contiguity of my RAM, though. – DumpsterDoofus Aug 25 '14 at 16:54
  • Couldn't it just be a bug in the code provided by LodePNG? – alk Aug 25 '14 at 17:01
  • Seems like a memory fragmentation issue, or LodePNG implementation limitation/bug. Just tried saving 16384x16384 24-bit image using libpng, works fine. – user2802841 Aug 25 '14 at 17:06
  • @alk: When you say "code provided by LodePNG", do you mean the `example_encode.c` file, or do you mean LodePNG itself? The `example_encode.c` file executes properly for all image sizes below around 13312*13312, and for me it crashes around the 14000*14000 mark and above, so I think the issue isn't with the example file, but rather with LodePNG itself, or with something going wrong with my computer. – DumpsterDoofus Aug 25 '14 at 17:08
  • @user2802841: LOL, I guess my computer just sucks, haha :) Thanks for trying to verify it and share your results. – DumpsterDoofus Aug 25 '14 at 17:10
  • @leonbloy: Yeah, if I can't figure out why this is crashing then I'll probably just do what you say and break the encoding into smaller, more manageable pieces. – DumpsterDoofus Aug 25 '14 at 17:12
  • I meant LodePNG itself. – alk Aug 25 '14 at 17:15
  • @user2802841: You did not use LodePNG, did you? – alk Aug 25 '14 at 17:19
  • In which line does the code crash when you run your test? – alk Aug 25 '14 at 17:21
  • @alk That is correct, I tried using reference library [libpng](http://www.libpng.org/pub/png/), doesn't seem to have any trouble writing images even well over 1GB in size. In OPs case I would try running the code under debugger to see where exactly it fails. – user2802841 Aug 25 '14 at 17:32
  • I strongly assume the observed behaviour is a bug in LodePNG. I just compiled the current version, created the example program as per the OP, and ran it. It did crash. Running it under Valgrind revealed memory issues inside the LodePNG code. – alk Aug 25 '14 at 17:36
  • The PNG size limits are 2Gx2G, not 4Gx4G, but still plenty larger than your images. – Glenn Randers-Pehrson Aug 25 '14 at 21:48

1 Answer


I believe you are underestimating how much memory your program uses. Assuming you are on 32-bit Windows, a process probably has only 2 GB of address space available to it (see here). You then allocate a big 822 MB chunk for your 14336x14336 image. The LodePNG code then does more allocations, most likely equal to or greater than this image size.

I only traced the LodePNG code manually but it seems to create a buffer in memory and write to that buffer in chunks. It does a minimum resize when reallocating (in lodepng_chunk_append(), only enough to hold the data plus 12 bytes) which means it will have to do a lot of reallocations. This may (or eventually will) fragment the memory to a point where a very large buffer is not available.

Even if the heap memory is not fragmented, think about what might happen when you try to realloc() an 800MB buffer to an 801MB one. It might work if the heap manager is "smart", but a naive one will require a total of 1601MB, since the old and the new block must both exist while the data is copied... which is not available if your heap is 2000MB in size and you've already used 822MB of it.

A lot (most) of this is conjecture but you can do some tests yourself to simulate allocating and reallocating several large chunks of memory to see if you can reproduce the out of memory situation.

Addendum: While the above may be true, it is actually not the cause of the crash in this case. From actually running and tracing the code, the issue is line 5558 in lodepng.c:

  size_t size = (w * h * lodepng_get_bpp(&info.color) + 7) / 8;

This calculation, performed in 32-bit unsigned arithmetic, results in an integer overflow when w * h is greater than 178,956,970 (with lodepng_get_bpp() returning 24). For a square image this is 13378 x 13378. The overflow causes the output buffer to be allocated with an incorrect (far too small) size, and thus a buffer overflow when it is later written to.

A quick fix would be to change this line to:

 size_t size = (size_t) ((unsigned long long) w * h * lodepng_get_bpp(&info.color) / 8 + 1);

although I'm not sure this will work correctly for all BPP values and it still has an overflow issue albeit at a larger size. I would suggest contacting the author of LodePNG to implement a proper fix.

uesp
  • Yikes - introduce a float? That has far less precision than an int. A better move would be changing the parentheses: `((lodepng_get_bpp(&info.color) + 7) / 8)`, although this is only a partial fix, as it will not allow `MAX_INT * MAX_INT` images with more than a 1-byte color spec. (It may be beyond what the author of LodePng imagined for typical use.) – Jongware Aug 25 '14 at 18:29
  • Nonono `float`, at least for a 64bit system using `long` would do the job: `size_t size = (w * h * (long) lodepng_get_bpp(&info.color) + 7) / 8;` – alk Aug 25 '14 at 18:32
  • @Jongware: Your proposal is not an exact replacement for the original code. – alk Aug 25 '14 at 18:35
  • Using float does feel a bit "wrong". Casting up to a 64-bit integer and then back down to 32-bit would be ideal. – uesp Aug 25 '14 at 19:18
  • @alk: right-o. I was assuming `bpp` could safely be downscaled, but PNG allows *lower* bit depths as well. The maximum "raw" size can be 2³¹ * 2³¹ ([earlier SO question on the same](http://stackoverflow.com/questions/4109447/file-format-limits-in-pixel-size-for-png-images)), with the bit depth up to a staggering 64-bit RGBA. ...Well beyond my system's current capabilities. – Jongware Aug 25 '14 at 22:05