
I'm working on a program which stores an array of roughly 220 million short values (about 440 MB at two bytes each) in memory. This block of data is allocated like this:

short * arrayName = new short[SIZE_OF_ARRAY];

And then the contents of a file are read into memory. After a massive update of the program's overall architecture by another person on the team, this exact line started to crash the program with this message:

Microsoft Visual C++ Runtime Library
Runtime Error!
abnormal program termination

It happens immediately at this memory allocation call (no further lines, such as a check of whether the pointer is NULL, are executed). Even after a few days, it's unclear to us which change in the other code caused this line to start behaving this way (in fact, nothing even remotely linked to this array was changed).

On Linux (Ubuntu, to be exact), everything works fine; this problem exists only on Windows machines. On 64-bit Windows, this workaround (in the .pro file) helps:

QMAKE_LFLAGS_WINDOWS += /LARGEADDRESSAWARE

On 32-bit Windows, it doesn't help.
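(As an aside, the same flag is often written with a win32 scope in qmake instead of the platform-specific variable; a sketch of the equivalent .pro form, using standard qmake syntax rather than anything from the original project:)

# Apply the MSVC linker flag only when building on/for Windows
win32 {
    QMAKE_LFLAGS += /LARGEADDRESSAWARE
}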

Replacing the line with malloc in the following way let me check whether the pointer is NULL afterwards (it is) and get the error code out of errno, which is 12 (ENOMEM), "Not enough memory":

short * arrayName = (short *)malloc(SIZE_OF_ARRAY * sizeof(short));
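For reference, the check around that call looks roughly like this (a sketch; the SIZE_OF_ARRAY value is assumed from the description above, and the exact error reporting in the real program may differ):

#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    // Assumed size matching the description: ~220 million shorts (~440 MB).
    const size_t SIZE_OF_ARRAY = 220000000;

    short * arrayName = (short *)malloc(SIZE_OF_ARRAY * sizeof(short));
    if (arrayName == NULL) {
        // In the failing runs errno is 12 (ENOMEM), "Not enough memory".
        fprintf(stderr, "malloc failed: errno=%d (%s)\n", errno, strerror(errno));
        return 1;
    }
    free(arrayName);
    return 0;
}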

This StackOverflow question seems to be about the same problem; it is similar even up to the point that allocating a smaller amount of memory works (but 450 MB does not). The answers there suggested high memory fragmentation, i.e. that new / malloc cannot find a contiguous memory region, but in my case the problem persists even freshly after a reboot, when only ~600 MB out of 2 GB of physical memory (and 4 GB of virtual memory) are in use, so that is somewhat ruled out (additionally, as I mentioned, the exact same line of code had worked before).

My main suspicion is that it has something to do with heap size (although I'm not sure whether new and malloc both allocate memory on the heap; I also haven't yet found a way to change the heap size in Qt). Am I missing something here?
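For reference, plain new reports failure by throwing std::bad_alloc rather than returning NULL (as the comments below also note), which is why the program aborts before any pointer check is reached. A minimal sketch of observing the failure both ways (the SIZE_OF_ARRAY value is assumed from the description above):

#include <cstddef>
#include <iostream>
#include <new>  // std::bad_alloc, std::nothrow

int main() {
    const std::size_t SIZE_OF_ARRAY = 220000000;  // assumed: ~220 million shorts

    // Plain new reports failure by throwing std::bad_alloc; left uncaught,
    // this becomes the runtime's "abnormal program termination" abort.
    try {
        short * p = new short[SIZE_OF_ARRAY];
        delete[] p;
    } catch (const std::bad_alloc &) {
        std::cerr << "new threw std::bad_alloc\n";
    }

    // The nothrow form returns NULL on failure instead of throwing, so a
    // pointer check like the one in the question actually gets a chance to run.
    short * q = new (std::nothrow) short[SIZE_OF_ARRAY];
    if (q == NULL) {
        std::cerr << "nothrow new returned NULL\n";
    } else {
        delete[] q;
    }
    return 0;
}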

Fy Zn
  • What compiler are you using? – Kornel Kisielewicz Aug 12 '13 at 01:27
  • Both new and malloc reserve memory on the heap. Your abnormal program termination is most likely an uncaught bad_alloc exception. – Greenflow Aug 12 '13 at 01:33
  • @KornelKisielewicz, MSVC2010. – Fy Zn Aug 12 '13 at 01:33
  • Have you tried doing a bisect on the tree to see which change introduced this error? – greatwolf Aug 12 '13 at 01:34
  • @greatwolf, yes, like I said, the exact commit has been located, but it involves changes to lots of files and to many places in the program architecture which are all interconnected. It's not possible to undo them one by one. – Fy Zn Aug 12 '13 at 01:36
  • And btw... it is possible that you are out of memory. 32-bit Windows is quite limited when it comes to memory usage. Open the resource monitor and watch your memory usage. – Greenflow Aug 12 '13 at 01:38
  • @user2011734, I highly doubt that (with ~600 MB used by other processes, 2 GB of physical memory and a total limit of 4 GB (3+1 user/system) virtual memory on 32 bit). The same amount of allocated memory used to always work just fine on this exact machine. – Fy Zn Aug 12 '13 at 01:41
  • If you don't explicitly configure your compiler so that new returns a NULL pointer on error, it does not. It throws an exception. And if you don't catch this exception, your program goes bye-bye. – Greenflow Aug 12 '13 at 01:44
  • @paxdiablo, thanks for your link. That problem is indeed very similar, but there's no answer in the comments as to why a relatively small memory amount (relative to the free memory available) cannot be allocated. – Fy Zn Aug 12 '13 at 01:47
  • http://chris.pirillo.com/32-bit-windows-and-4gb-of-ram/ – Greenflow Aug 12 '13 at 01:48
  • @user2011734, I'm way below even 2 GB mark with this one. ~ 1.2 GB max usage (650 MB other processes + this one) on a machine with 2 GB of physical RAM. – Fy Zn Aug 12 '13 at 01:50
  • If you say so. :-) If I were you, I'd check with the resource monitor. You might be surprised. Here's another link: http://stackoverflow.com/questions/5686459/what-is-the-maximum-memory-available-to-a-c-application-on-32-bit-windows – Greenflow Aug 12 '13 at 01:56
  • Here's a screenshot from Process Explorer during the program's run (taken a couple of seconds after the crash, so it should be at the far right of the graphs). http://i.imgur.com/wpliIZ4.png – Fy Zn Aug 12 '13 at 02:46
  • So you're trying to allocate 450 MB with a single malloc/new? That can fail anytime, even if the total free heap RAM is sufficient. Doing it via a single new/malloc requires 450 MB of *contiguous* heap memory. As the heap is fragmented, there might be no fragment available in that size. – Frank Osterfeld Aug 12 '13 at 08:14

1 Answer


Memory allocations fail due to lack of address space, not lack of RAM. Lack of RAM merely makes programs slow, with lots of disk thrashing as they get paged out to disk.

/LARGEADDRESSAWARE tells the OS that your app can cope with a larger address space. On 64-bit Windows, a large-address-aware 32-bit app gets a full 4 GB of address space; on 32-bit Windows this isn't standard (it requires the /3GB boot option, and even then the limit is 3 GB).

Turning off ASLR can help, as DLLs are then loaded in a linear fashion (and thus at predictable offsets, which is a security risk). With ASLR, DLLs are scattered through memory. With only 10 DLLs scattered through 2 GB of address space (and that is unrealistically low), the average gap between them is ~200 MB.
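One way to confirm that address-space fragmentation (rather than total free memory) is the bottleneck is to probe for the largest single block malloc will currently hand out. A minimal sketch, not part of the original answer; the 2 GB search bound and 1 MB granularity are arbitrary choices:

#include <cstdio>
#include <cstdlib>

// Rough probe for the largest contiguous block malloc can currently satisfy.
// Binary search between 0 and 2 GB at 1 MB granularity; the probing itself
// can disturb the heap slightly, so treat the result as an estimate.
int main() {
    size_t lo = 0;
    size_t hi = (size_t)2048 * 1024 * 1024;  // 2 GB upper bound for a 32-bit process
    const size_t step = 1024 * 1024;         // 1 MB granularity
    while (hi - lo > step) {
        size_t mid = lo + (hi - lo) / 2;
        void * p = malloc(mid);
        if (p != NULL) { free(p); lo = mid; }  // a block of mid bytes fit: go bigger
        else           { hi = mid; }           // allocation failed: go smaller
    }
    printf("largest contiguous block: ~%lu MB\n", (unsigned long)(lo / (1024 * 1024)));
    return 0;
}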

MSalters