6

Is it possible to get the remaining available memory on a system (x86, x64, PowerPC / Windows, Linux or MacOS) in standard C++11 without crashing?

A naive way would be to try allocating very large arrays, starting with a size that is too large, catching the exception every time the allocation fails, and decreasing the size until no exception is thrown. But maybe there is a more efficient/clever method...
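For illustration, here is a minimal sketch of that naive probe (assuming a 64-bit build so the ~1 TB starting request fits in std::size_t; as discussed in the answers, overcommit can make the result far too optimistic):

#include <cstddef>
#include <iostream>
#include <new>

// Naive probe: halve the requested size until an allocation succeeds.
// On systems that overcommit memory, the "successful" size can be far
// larger than what is really available.
int main()
{
    std::size_t size = std::size_t(1) << 40; // start at ~1 TB (needs 64-bit std::size_t)
    while (size > 0) {
        char* p = new (std::nothrow) char[size];
        if (p != nullptr) {
            delete[] p;
            break;
        }
        size /= 2;
    }
    std::cout << "Largest single allocation that succeeded: "
              << size / (1024 * 1024) << " MB" << std::endl;
    return 0;
}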

EDIT 1: In fact I do not need the exact amount of memory. I would like to know approximately (within an error bar of about 100 MB) how much memory my code could use when I start it.

EDIT 2: What do you think of this code? Is it safe to run it at the start of my program, or could it corrupt the memory?

#include <iostream>
#include <array>
#include <list>
#include <initializer_list>
#include <stdexcept>

int main(int argc, char* argv[])
{
    // Fill memory with 1 MB blocks until allocation throws, then report
    // how many blocks fitted in total.
    static const long long int megabyte = 1024*1024;
    std::array<char, megabyte> content({{'a'}});
    std::list<decltype(content)> list1;
    std::list<decltype(content)> list2;
    const long long int n1 = list1.max_size();
    const long long int n2 = list2.max_size();
    long long int i1 = 0;
    long long int i2 = 0;
    long long int result = 0;
    // First pass: push 1 MB blocks until std::bad_alloc (or another
    // exception) is thrown.
    for (i1 = 0; i1 < n1; ++i1) {
        try {
            list1.push_back(content);
        }
        catch (const std::exception&) {
            break;
        }
    }
    // Repeat with a second list in case more blocks can still be
    // allocated after the first failure.
    for (i2 = 0; i2 < n2; ++i2) {
        try {
            list2.push_back(content);
        }
        catch (const std::exception&) {
            break;
        }
    }
    list1.clear();
    list2.clear();
    // Counts only the 1 MB payloads, not the per-node overhead of the lists.
    result = (i1 + i2) * sizeof(content);
    std::cout << "Memory available for program execution = "
              << result / megabyte << " MB" << std::endl;
    return 0;
}
Vincent
  • Sorry, this time the "no" out of the two possible results of "maybe" has come out. –  Feb 21 '13 at 17:11
  • That is highly platform dependent, and not dealt with in the standard. – David Rodríguez - dribeas Feb 21 '13 at 17:12
  • There is no "standard" way to do this. Even the method you describe may not return valid results. You must use platform-specific functionality. – Nik Bougalis Feb 21 '13 at 17:12
  • But at least, will the naive way work? – Vincent Feb 21 '13 at 17:12
  • No - consider a 64-bit platform for example, that allows 32-bit software to run. It has 32GB of memory but 32-bit programs can't access that much memory. Or consider a platform where the admins can enforce quotas on how much memory a program can have. – Nik Bougalis Feb 21 '13 at 17:13
  • @Vincent: Or consider platforms (linux is one of them) where the OS will *give* you all the memory you request and only fail when it cannot accommodate for your needs on a page fault. – David Rodríguez - dribeas Feb 21 '13 at 17:15
  • @NikBougalis : According to my EDIT, I would like, in that case, the amount of remaining memory for the software (not the remaining memory on the platform). – Vincent Feb 21 '13 at 17:17
  • The amount of memory may change between the 'start' time and when you actually need the memory. Just allocate what you need, when you need it. If the allocation fails, do whatever is sensible for your application - go into low performance mode or crash or even launch all missiles. Or just exit with a "low/out of memory" error. That works too. – Nik Bougalis Feb 21 '13 at 17:20
  • In fact it is for a supercomputer-oriented code (and I need to use only standard C++11): during the execution, the used memory increases slowly, step by step. Sometimes, the code saves its data to be able to restart. I would like to know in advance when the memory is about to be saturated, so I can adjust some parameters and save the data just before a crash... – Vincent Feb 21 '13 at 17:23
  • I added an example code... – Vincent Feb 21 '13 at 18:24
  • Possible duplicate of [How to get available memory C++/g++?](http://stackoverflow.com/questions/2513505/how-to-get-available-memory-c-g) – Ciro Santilli OurBigBook.com Nov 03 '15 at 15:43

4 Answers

9

This is highly dependent on the OS/platform. The approach that you suggest need not even work in real life: on some platforms the OS will grant all of your memory requests but not actually back them with memory until you use it, at which point you get a SEGFAULT...

The standard does not have anything related to this.
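As a hedged illustration of that failure mode (assuming a 64-bit build and a Linux-like overcommitting allocator), the allocation below may "succeed" even when the memory is not there, and the process may only die once the pages are actually touched:

#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    const std::size_t size = std::size_t(8) << 30; // request 8 GB

    // On an overcommitting OS this can return a valid pointer even if
    // 8 GB of physical memory plus swap is not actually available.
    char* p = new (std::nothrow) char[size];
    if (p == nullptr) {
        std::cout << "allocation refused up front" << std::endl;
        return 0;
    }

    // Touching one byte per page forces the OS to back the pages with
    // real memory; this is where the process may die (SIGSEGV or the
    // OOM killer) instead of getting a catchable exception.
    for (std::size_t i = 0; i < size; i += 4096) {
        p[i] = 1;
    }

    std::cout << "all pages touched successfully" << std::endl;
    delete[] p;
    return 0;
}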

David Rodríguez - dribeas
  • Segfault would not be a likely outcome. On Mac OS X, the process would be suspended. In any case, the machine would usually start to "thrash" first. – Potatoswatter Feb 21 '13 at 17:19
  • @Potatoswatter: Try a linux box, remove the swap file, allocate a large enough dynamic array and walk through it touching every, say 4096th byte to touch all memory pages. At least in the past it would eventually die with a SEGFAULT – David Rodríguez - dribeas Feb 21 '13 at 17:24
  • @DavidRodríguez-dribeas -- That's Linux, OSX is different. It's "highly dependent", as you said. The key point is that on some systems one can successfully allocate far more memory than is available even in virtual memory, where "success" means no throw from new, or a non-null pointer from malloc. The problem may not show up until well after the bogus quantity of memory was "successfully" allocated. – David Hammen Feb 21 '13 at 17:30
  • @DavidHammen: Right.... what I am not sure is why Mac OS X is being brought so insistently. I just mention that the approach would fail in some OS-s (maybe I should have explicitly mentioned linux there?) with this particular behavior. I did not mention OSX, and the question is clearly multiplatform. – David Rodríguez - dribeas Feb 21 '13 at 18:05
  • @DavidRodríguez-dribeas Is removing the live swapfile the supported way of disabling virtual memory? Sounds like the machine is destined to crash anyway if you do that. What about the swapped-out pages it already contains? – Potatoswatter Feb 22 '13 at 00:03
  • @Potatoswatter: I guess I was too generic. It is usually a swap partition (not file), and you need to umount it. You cannot umount it while in use, you need to set the system not to mount it after the next reboot cycle. Whether that is a recipe for disaster or not is a different thing. If you are in a controlled environment with enough memory, the swap can impose a performance penalty. If you don't need the swap, having it requires copying memory to and from the disk that would otherwise not be copied. – David Rodríguez - dribeas Feb 22 '13 at 14:18
  • ... also note that depending on the kernel, having too small a swap partition/file can also impose extra performance issues. It's been long since I played with this, but there was a time when the kernel changed and less than twice the memory would cause performance issues. – David Rodríguez - dribeas Feb 22 '13 at 14:20
2

It seems to me that the answer is no, you cannot do it in standard C++.

What you could do instead is discussed under How to get available memory C++/g++? and the content linked there. Those are all platform-specific solutions; they are not standard, but at least they help you solve the problem you are dealing with.
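For example, a rough sketch along the lines of that question could hide the usual platform-specific calls behind one small helper (the Windows branch uses GlobalMemoryStatusEx, the other branch uses the glibc sysconf extensions; neither is standard C++):

#include <cstdint>
#include <iostream>

#if defined(_WIN32)
#include <windows.h>
#else
#include <unistd.h>
#endif

// Rough estimate of the currently available physical memory, in bytes.
// Not standard C++: each branch relies on OS-specific APIs.
std::uint64_t available_physical_memory()
{
#if defined(_WIN32)
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    if (!GlobalMemoryStatusEx(&status)) {
        return 0;
    }
    return status.ullAvailPhys;
#else
    // _SC_AVPHYS_PAGES is a glibc extension; other POSIX systems (e.g.
    // macOS/BSD) need a different mechanism such as sysctl.
    const long pages = sysconf(_SC_AVPHYS_PAGES);
    const long page_size = sysconf(_SC_PAGESIZE);
    if (pages < 0 || page_size < 0) {
        return 0;
    }
    return static_cast<std::uint64_t>(pages) * static_cast<std::uint64_t>(page_size);
#endif
}

int main()
{
    std::cout << "Approx. available physical memory: "
              << available_physical_memory() / (1024 * 1024) << " MB" << std::endl;
    return 0;
}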

Ali
1

As others have mentioned, the problem is hard to define precisely, much less solve. Does virtual memory on the hard disk count as "available"? What if the system implements a prompt to delete files to obtain more hard disk space, meanwhile suspending your program? (This is exactly what happens on OS X.)

The system probably implements a memory hierarchy which gets slower as you use more of it. You might try detecting the performance cliff between RAM and disk by allocating and initializing chunks of memory while timing the work, using the C alarm interrupt facility, clock, localtime/mktime, or the C++11 clock facilities. Wall-clock time should appear to pass more quickly as the machine slows down under the stress of obtaining memory from less efficient resources. (But this assumes it isn't stressed by anything else, such as another process.) You would want to tell the user what the program is attempting, and save the results to an editable configuration file.
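A minimal sketch of that idea, assuming the C++11 clocks and a hypothetical 10x slowdown threshold as the sign that the probe has hit the disk:

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>
#include <vector>

int main()
{
    const std::size_t chunk = 64 * 1024 * 1024; // probe in 64 MB steps
    std::vector<char*> chunks;
    double baseline = 0.0;

    for (int i = 0; i < 1024; ++i) {
        const auto start = std::chrono::steady_clock::now();

        char* p = new (std::nothrow) char[chunk];
        if (p == nullptr) {
            break;                  // the allocator refused outright
        }
        std::memset(p, 1, chunk);   // touch the pages so they are really backed

        const double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        chunks.push_back(p);

        if (i < 4) {
            baseline = std::max(baseline, elapsed); // calibrate on RAM-backed chunks
        } else if (elapsed > 10.0 * baseline) {
            break;                  // sudden slowdown: probably swapping, stop here
        }
    }

    std::cout << "Comfortable working set: "
              << chunks.size() * (chunk / (1024 * 1024)) << " MB" << std::endl;

    for (char* p : chunks) {
        delete[] p;
    }
    return 0;
}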

Potatoswatter
1

I would advise using a configurable maximum amount of memory instead. Since some platforms overcommit memory, it's not easy to tell how much memory you will actually have access to. It's also not polite to assume that you have exclusive access to 100% of the available memory; many systems will have other programs running.
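A minimal sketch of that approach, with a hypothetical MemoryBudget class whose limit comes from the command line (a config file would work just as well) rather than from guessing at the machine's free memory:

#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <new>

// Hypothetical allocation budget: refuse requests beyond a user-chosen
// limit instead of trying to discover the real amount of free memory.
class MemoryBudget {
public:
    explicit MemoryBudget(std::size_t limit_bytes) : limit_(limit_bytes) {}

    void* allocate(std::size_t bytes)
    {
        if (used_ + bytes > limit_) {
            throw std::bad_alloc();     // over budget: fail before the OS does
        }
        void* p = ::operator new(bytes);
        used_ += bytes;
        return p;
    }

    void deallocate(void* p, std::size_t bytes)
    {
        ::operator delete(p);
        used_ -= bytes;
    }

    std::size_t used() const { return used_; }

private:
    std::size_t limit_;
    std::size_t used_ = 0;
};

int main(int argc, char* argv[])
{
    // e.g. ./simulation 2048  -> cap the working set at 2048 MB
    const std::size_t limit_mb =
        (argc > 1) ? std::strtoull(argv[1], nullptr, 10) : 1024;
    MemoryBudget budget(limit_mb * 1024 * 1024);

    void* block = budget.allocate(100 * 1024 * 1024); // a 100 MB working buffer
    std::cout << "Using " << budget.used() / (1024 * 1024)
              << " of " << limit_mb << " MB budget" << std::endl;
    budget.deallocate(block, 100 * 1024 * 1024);
    return 0;
}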

Dirk Holsopple