0

An object tries to allocate more memory than the allowed virtual address space (2 GB on Win32). The std::bad_alloc is caught and the object is released. Process memory usage drops and the process is supposed to continue; however, any subsequent memory allocation fails with another std::bad_alloc. Checking the memory usage with VMMap showed that the heap memory appears to be released, but it is actually still marked as private, leaving no free space. The only option seems to be quitting and restarting. I would understand a fragmentation problem, but why can't the process get the memory back after the release?

The object is a QList of QLists. The application is multithreaded. I could make a small reproducer, but I managed to reproduce the problem only once; most of the time the reproducer can reuse the memory that was freed.

Is Qt doing something sneaky? Or is it maybe Win32 delaying the release?

Narcolessico
  • Win32 never "delays" the release: if you call `VirtualFree(p, 0, MEM_RELEASE)`, the memory is released by the time the function returns (provided, of course, that `p` is correct), so `VirtualFree` was definitely either not called or called with a bad argument. – RbMm Dec 06 '16 at 11:39
  • A `QList` of `QList`s of **what**? That's the important thing here. – Kuba hasn't forgotten Monica Dec 06 '16 at 14:35
  • Of `QVariant`. It's basically a spreadsheet. It usually contains numbers in the form of strings, or URLs. – Narcolessico Dec 06 '16 at 16:50

2 Answers

1

As I understand your problem, you are allocating large amounts of memory from the heap, which fails at some point. Releasing the memory back to the process heap does not necessarily mean that the heap manager actually frees the virtual pages that contain only free blocks of the heap (for performance reasons). So, if you then try to allocate virtual memory directly (VirtualAlloc or VirtualAllocEx), the attempt fails, since nearly all of the address space is still consumed by the heap manager, which has no way of knowing about your direct allocation attempt.

Well, what can you possibly do about this? You can create your own heap (HeapCreate) and limit its maximum size. That may be quite tricky, since you need to persuade Qt to use this heap.

When allocating large amounts of memory, I recommend using VirtualAlloc rather than the heap functions. If the requested size is >= 512 KB, the heap manager actually uses VirtualAlloc to satisfy your request anyway. However, I don't know whether it actually releases the pages when you free the region, or whether it starts using them to satisfy other heap allocation requests.
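
For illustration, a minimal sketch of a direct VirtualAlloc allocation and release as described above (the 1 MB size and the usage pattern are made up, not taken from the question):

```cpp
#include <windows.h>

int main()
{
    // Reserve and commit a large block directly from the virtual memory
    // manager, bypassing the process heap.
    const SIZE_T size = 1024 * 1024;  // 1 MB, above the 512 KB heap threshold

    void *p = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == nullptr)
        return 1;  // allocation failed

    // ... use the buffer ...

    // The size must be 0 when using MEM_RELEASE; the whole region is
    // returned to the OS by the time the call returns.
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```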

Martin Drab
  • "The object is a QList of QLists." Nobody is doing any manual WINAPI calls. – Kuba hasn't forgotten Monica Dec 06 '16 at 14:36
  • Then I would suggest looking into QList's source to see where exactly the allocation failure occurs and whether Qt handles this type of exception correctly. I have seen quite a few programmers adopt a do-not-care strategy for out-of-memory exceptions. I don't know which "strategy" Qt uses, though. – Martin Drab Dec 06 '16 at 15:18
1

The answer by Martin Drab put me on the right path. Investigating heap allocations, I found this old message that clarifies what is going on:

The issue here is that blocks over 512 KB are direct calls to VirtualAlloc, and everything smaller than this is allocated out of the heap segments. The bad news is that the segments are never released (entirely or partially), so once you take up the entire address space with small blocks you cannot use it for other heaps or for blocks over 512 KB.

The problem is not Qt-related but Windows-related; I could finally reproduce it with a plain std::vector of char arrays. The default heap allocator leaves the address-space segments untouched even after the corresponding allocation has been explicitly released. The rationale is that the process might ask for buffers of a similar size again, and the heap manager saves time by reusing existing address segments instead of compacting old ones to create new ones.

Please note this has nothing to do with the amount of physical or virtual memory available. It is only the address space that remains segmented, even though those segments are free. This is a serious problem on 32-bit architectures, where the address space is only 2 GB (it can be 3 GB at most).

This is why the memory was marked as "private" even after being released, and was apparently not usable by the same process for average-sized allocations, even though the committed memory was very low.

To reproduce the problem, just create a huge vector of chunks smaller than 512 KB (they must be allocated with new or malloc). After the memory is filled and then released (no matter whether the limit is reached and an exception is caught, or the memory is just filled with no error), the process won't be able to allocate anything bigger than 512 KB. The memory is free and it is assigned to the same process ("private"), but all the buckets are too small.
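
A minimal sketch of such a reproducer (a 32-bit build is assumed; the chunk size and the messages are mine, not taken from the original reproducer):

```cpp
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

int main()
{
    const std::size_t chunkSize = 256 * 1024;  // below 512 KB, so served from heap segments
    std::vector<char*> chunks;

    // Fill the 32-bit address space with small heap blocks until allocation fails.
    try {
        for (;;)
            chunks.push_back(new char[chunkSize]);
    } catch (const std::bad_alloc&) {
        std::cout << "exhausted after " << chunks.size() << " chunks\n";
    }

    // Release everything back to the heap manager.
    for (char* c : chunks)
        delete[] c;
    chunks.clear();

    // The address space is still carved into small heap segments, so a single
    // request over 512 KB may still fail even though the memory is "free".
    try {
        char* big = new char[1024 * 1024];
        std::cout << "large allocation succeeded\n";
        delete[] big;
    } catch (const std::bad_alloc&) {
        std::cout << "large allocation still fails\n";
    }
    return 0;
}
```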

But there is worse news: there is apparently no way to force a compaction of the heap segments. I tried with this and this but had no luck; there is no exact equivalent of POSIX fork() (see here and here). The only solution is to do something more low-level, like creating a private heap and destroying it after the small allocations (as suggested in the message cited above; a sketch follows below) or implementing a custom allocator (there might be some commercial solution out there). Both are quite infeasible for large, existing software, where the easiest solution is to close the process and restart it.
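
For illustration, a minimal sketch of the private-heap approach mentioned above (the sizes and the allocation pattern are made up; routing an existing application's, or Qt's, small allocations through such a heap is the hard part):

```cpp
#include <windows.h>

int main()
{
    // Create a growable private heap for the small allocations.
    HANDLE heap = HeapCreate(0, 0, 0);  // dwMaximumSize = 0 means growable
    if (heap == nullptr)
        return 1;

    // Allocate the small blocks from the private heap instead of the CRT heap.
    void *block = HeapAlloc(heap, 0, 64 * 1024);
    // ... use the blocks ...
    HeapFree(heap, 0, block);

    // Destroying the heap releases all of its segments at once, giving the
    // address space back to the process.
    HeapDestroy(heap);
    return 0;
}
```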

Narcolessico