My 16 GB machine behaves as if it is out of memory (applications report allocation failures, and kswapd goes crazy despite no swap partition being mounted), even though system monitors such as top report plenty of memory available.
So I wrote this program to allocate memory to exhaustion and report the amount allocated.
#include <iostream>
#include <exception>  // std::exception
#include <new>        // std::bad_alloc
#include <cstddef>    // std::size_t

const std::size_t STRIDE = 64;
const std::size_t MIB = 1024 * 1024;

int main()
{
    for (std::size_t n = 0; ; ++n)
    {
        try
        {
            // Allocate 1 MiB and write to every cache line so the kernel
            // must actually commit the pages (overcommit defers commitment
            // until the memory is touched).
            char * ptr = new char[MIB];
            for (char * p = ptr; p < ptr + MIB; p += STRIDE)
            {
                *p = 123;
            }
            if (n > 0 && 0 == (n % 1024))
            {
                std::cout << (n / 1024) << " GiB" << std::endl;
            }
            else if (n > 0 && 0 == (n % 512))
            {
                std::cout << (n / 1024) << ".5 GiB" << std::endl;
            }
        }
        catch (const std::exception & e)
        {
            std::cout << n << " MiB total" << std::endl;
            std::cout << "Caught: " << e.what() << std::endl;
            return 0; // stop once allocation fails, rather than retrying forever
        }
        catch (...)
        {
            std::cout << n << " MiB total" << std::endl;
            std::cout << "Caught unknown exception." << std::endl;
            return 0;
        }
    }
    return 0;
}
When I ran this program with over 10 GiB of reported available memory, it was killed with SIGKILL after counting to 2 GiB. Why would it receive a signal instead of `new` throwing an exception?
Also, any idea why it only got to 2 GiB? I ran it again after rebooting and it got to 13 GiB (it should have got as far as 13.5 GiB, since top reports 13.9 GiB available). What might stop all available memory from being granted?
Ubuntu with kernel 4.2.0-42-generic.
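For reference, here is how I inspected the kernel's overcommit settings and commit accounting (these are the standard Linux procfs entries):

```shell
# Overcommit policy: 0 = heuristic overcommit, 1 = always overcommit,
# 2 = strict accounting against CommitLimit.
cat /proc/sys/vm/overcommit_memory

# How much the kernel is willing to promise vs. how much is already promised.
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```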
**Edit:** /proc/sys/vm/overcommit_memory is 0 (heuristic overcommit), but this doesn't explain why other programs seem to catch out-of-memory as an exception rather than being killed.