86

I want to allocate my buffers according to the memory available, so that when I do processing and memory usage goes up, it still remains within the available memory limits. Is there a way to get the available memory (I don't know whether virtual or physical memory status will make any difference)? The method has to be platform independent, as it is going to be used on Windows, OS X, Linux and AIX. (And, if possible, I would also like to allocate some of the available memory for my application, so that it doesn't change during execution.)

Edit: I did it with configurable memory allocation. I understand it is not a good idea, as most OSes manage memory for us, but my application was an ETL framework (intended to be used on a server, but it was also being used on the desktop as a plugin for Adobe InDesign). So I was running into the issue that, instead of using swap, Windows would return bad_alloc and other applications would start to fail. And as I was taught to avoid crashes, I was just trying to degrade gracefully.

theCakeCoder
  • 13
    There is no point in doing this. On all modern OS the memory used by one application does not affect the memory available for other applications as it is all virtual. Only allocate what you require. – Martin York Mar 25 '10 at 14:29
  • 9
    @LokiAstari: False, of course. A system has only so much it can allocate. I chose to have no swap files, so my system has 8GiB; after that, C++ calls to `new` throw `bad_alloc` and other applications fail. In Linux and recent Windows there is an OOM killer that will choose an app to kill. A virus could allocate lots of stuff in multiple processes and use that fact to crash other applications. Not to mention, if you have a page file, the system will thrash and freeze to unusability. (Usually only the WM dies, but on Windows there is no Ctrl-Alt-F1.) – v.oddou Nov 20 '14 at 07:26
  • 1
    @v.oddou: None of that is relevant to the context of the question. Thus my comment stands. – Martin York Nov 20 '14 at 21:39
  • 2
    @v.oddou The Linux OOM killer would actually kill the imaginary virus pretty quickly; low uptime, low CPU usage, high memory usage, many child processes. This useless virus would basically be painting a big red cross on its chest, and on its children's. – yyny May 28 '16 at 14:03
  • 2
    @Loki Astari Not everyone is here for the same reason, and it's pretty useful to write a garbage collector which collects more often when low on memory. – yyny May 28 '16 at 14:05
  • 1
    @MartinYork did nobody tell you that making assumptions is a dumb thing to do? Example use case is to check free system memory ... if the server is running low, the application could shut down non-essential services or restart the httpd service - in my particular use case. – Sir Rogers May 22 '18 at 23:57
  • @martin-york (Old comment, but you didn't delete it.) Windows will start swapping if one program uses too much memory, thus slowing down the entire system and all other apps. Therefore it might be necessary to limit memory usage. – Elliot Feb 18 '19 at 11:26
  • @Elliot That's not new information. OS have been doing that since the 60's – Martin York Feb 18 '19 at 20:41
  • @Elliot The size of the swap file is limited in most cases, and its existence is not guaranteed. Getting available heap is essential for sandbox games like Minecraft clones. – Kotauskas Apr 22 '19 at 17:43
  • It's rather important for database systems running on a server to use all available memory for page caches. Sometimes the server might be running another load which uses a chunk of memory, i.e. a nightly script that does some work, or updates, etc. Being able to monitor memory usage and dynamically change the page cache size is a rather important feature of a database engine. – Mike Marynowski Sep 16 '19 at 23:17
  • Are you interested about how bad latency you get with the memory? For example, Linux supports multi-stage swap setup where you could have 32 GB of real RAM, 64 GB of really fast SRAM based swap, 256 GB of SSD based swap and 1 TB of HDD swap (and Linux kernel automatically uses the fastest swap that's not yet full). The system *can run* any program that uses up to 1.3 TB of RAM but the performance will be really bad for programs that huge. Just make it user configurable and you'll be fine and default to *minimum* you can cope with. – Mikko Rantalainen Aug 24 '21 at 07:31
  • There's *no platform independent way to get maximum available amount of RAM that doesn't reduce performance*. In practice, that's also a moving limit because it depends on what other processes are running or are going to be run (e.g. cron on POSIX) so if you truly need *maximum* that must be user configurable value and you should *default* to minimum your application can cope with. That way user can always improve the performance if they know they have extra RAM to use to speed up the process. – Mikko Rantalainen Aug 24 '21 at 07:37
  • For example, Linux has `MemAvailable` data that can be read from `/proc/meminfo`, but that, too, is only an estimate. The Linux kernel is pretty cautious with that number, and you can be pretty sure that the amount of memory declared by that value is correct *for that given moment of time*. In a busy system, if you actually try to use that amount of memory 1 second later, it may have been already taken by some other process. The `MemAvailable` value is intended to be the max amount of RAM you can get without the system slowing down because of extra IO required to support bigger programs. – Mikko Rantalainen Aug 24 '21 at 07:39

10 Answers

173

On UNIX-like operating systems, there is sysconf.

#include <unistd.h>

unsigned long long getTotalSystemMemory()
{
    long pages = sysconf(_SC_PHYS_PAGES);      // number of physical pages
    long page_size = sysconf(_SC_PAGE_SIZE);   // page size in bytes
    return (unsigned long long)pages * page_size; // widen before multiplying to avoid overflow
}

On Windows, there is GlobalMemoryStatusEx:

#include <windows.h>

unsigned long long getTotalSystemMemory()
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);   // must be set before the call
    GlobalMemoryStatusEx(&status);
    return status.ullTotalPhys;         // total physical RAM in bytes
}

So just do some fancy #ifdefs and you'll be good to go.
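
For illustration, here is a rough sketch of how the two snippets above might be wired together behind an #ifdef. This assumes only Windows and POSIX-style targets; anything else would need its own branch, and error handling is minimal:

#if defined(_WIN32)
#include <windows.h>
#else
#include <unistd.h>
#endif

// Total physical memory in bytes, or 0 if it cannot be determined.
unsigned long long getTotalSystemMemory()
{
#if defined(_WIN32)
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);        // must be filled in before the call
    if (!GlobalMemoryStatusEx(&status))
        return 0;
    return status.ullTotalPhys;
#else
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    if (pages < 0 || page_size < 0)
        return 0;
    return (unsigned long long)pages * (unsigned long long)page_size;
#endif
}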

Travis Gockel
  • I don't think you'll get downvoted for that since it *is* actually useful. We can suggest it's a bad idea as much as we like but, if someone really wants to do something foolish (though I hesitate to use that word), who are we to deny them the tools? – paxdiablo Mar 25 '10 at 06:56
  • 13
    It's not that I want to use up all the memory; it's that I don't want to load more data than I can process with the available memory (I want to stay inside the unused space, or space that will probably not be accessed by other processes). Again, I don't want to be foolish and allocate all the available memory, but I want to decide what limit I should put on the application so it doesn't suck up all memory and crash ~___~ – theCakeCoder Mar 25 '10 at 07:45
  • 1
    For some operating systems `sysctl` may be a better alternative to `sysconf`. See `man 3 sysctl`. – Paul R Mar 25 '10 at 08:01
  • @Agito: On a 32-bit machine, it is safe to assume you have 2GB of space to work with. If the user does not have this much physical memory, the operating system will handle all the virtual memory for you (that's the great thing about PCs). But it is nice to do some heuristics to see how much you *should* take up...but it is a very risky game, since the question "How much memory does this computer have?" is surprisingly poorly defined. – Travis Gockel Mar 25 '10 at 08:40
  • @Paul R: I had no idea there was yet another way! – Travis Gockel Mar 25 '10 at 08:41
  • 3
    This is useful for a tangentially related task I have: to warn the user when they're using a significant fraction of physical memory. I know I can use more and have virtual memory managed, but if I am using more than the amount of physical RAM I want to be able to warn the user, because this will result in a slow-down because of the resulting paging that will occur. – Chris Westin Oct 13 '11 at 21:06
  • 5
    Minor nitpick: `status.ullTotalPhys` is an `unsigned long long`; if the method's return type is long then on some systems you'll get nonsensical results. Running the code as-is results in a return value of `-729088` on my system, but changing it to match the type of ullTotalPhys results in the correct `21474107392`. – Showtime Sep 26 '12 at 22:45
  • 2
    That's a good point...I changed `long` to `size_t` in the code, which should be fine on any system with non-segmented addressing. – Travis Gockel Sep 26 '12 at 22:52
  • Of course it is useful. Who hasn't opened his system page (`Win`+`Pause` shortcut on MS OS) and drooled "yaay I have 16 GB bwaaaah". So at least it serves the purpose of system info. – v.oddou Nov 20 '14 at 07:31
  • This code seems to return a valid value, but I am not sure it's working properly. I have been running this code for several days on a grid of several nodes with MPI. The returned value has been the same every day and for all the MPI nodes. – user9869932 Sep 10 '15 at 22:39
  • @julianromera: I think that is expected -- `sysconf(_SC_PHYS_PAGES)` is fetching the amount of memory for the system, which typically doesn't change frequently. – Travis Gockel Sep 17 '15 at 19:44
  • Thought I might add: if you're on Code::Blocks, put `#define _WIN32_WINNT 0x0500` before `#include <windows.h>` – FluorescentGreen5 Nov 14 '16 at 00:52
  • @Travis Gockel the Windows function `getTotalSystemMemory` does not tell the memory available to a C (in my case) program. My MSVC 2015 compiler allows a maximum approx 2GB to be allocated. But the function you cite gives the total memory available in the system, not the memory available to the program. – Weather Vane May 15 '17 at 20:08
  • 5
    It's worth noting that `_SC_PHYS_PAGES` is *not* part of the POSIX spec. – Craig Barnes Aug 09 '18 at 05:23
35

There are reasons to want to do this in HPC for scientific software (not games, web, business or embedded software). Scientific software routinely goes through terabytes of data in a single computation (or run), which can last hours or weeks -- and all of that cannot be stored in memory (and if one day you tell me a terabyte is standard for any PC or tablet or phone, then scientific software will be expected to handle petabytes or more). The amount of memory can also dictate the kind of method/algorithm that makes sense. The user does not always want to decide the memory and the method -- he/she has other things to worry about. So the programmer should have a good idea of what is available (4 GB, 8 GB or 64 GB or thereabouts these days) to decide whether a method will automatically work or whether a more laborious method has to be chosen. Disk is used, but memory is preferable. And users of such software are not encouraged to be doing too many things on their computer while it is running -- in fact, they often use dedicated machines/servers.
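
To make that concrete, here is a hedged sketch of the kind of decision described above. The 50% headroom factor and the method names are invented for this example, and the query uses the POSIX/glibc-style sysconf call from the accepted answer:

#include <unistd.h>
#include <cstdint>

enum class Method { InMemory, OutOfCore };

// Hypothetical helper: choose an algorithm based on how large the working set
// is relative to total physical RAM.
Method chooseMethod(std::uint64_t workingSetBytes)
{
    std::uint64_t physBytes =
        static_cast<std::uint64_t>(sysconf(_SC_PHYS_PAGES)) *
        static_cast<std::uint64_t>(sysconf(_SC_PAGE_SIZE));

    // Leave headroom for the OS and other processes; 0.5 is an arbitrary choice.
    if (workingSetBytes < physBytes / 2)
        return Method::InMemory;   // the whole data set fits comfortably
    return Method::OutOfCore;      // stream from disk in chunks instead
}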

Paresh M
  • It was not exactly scientific software; rather, I was building an ETL framework. Of course it was intended to run on dedicated servers. Probably it needed to have a maximum allowed memory setting, like Java or Matlab take as a start-up parameter. – theCakeCoder Sep 20 '14 at 14:29
  • 2
    There are reasons to do this with rendering software at least. You want to use as much memory as you have. For example: the physical memory available (×α with 0.5<α<0.8) will be the limit for the photon map size, plus some `min(physi, 2GiB)` to avoid machines with 256GB of RAM taking forever to build the photon map. But still. You can also imagine roaming in games; I have seen engines streaming assets in and out to maintain a memory target. The more memory you have, the farther you can see. – v.oddou Nov 20 '14 at 07:36
16

There is no platform-independent way to do this; different operating systems use different memory management strategies.

These other Stack Overflow questions will help:

You should watch out though: it is notoriously difficult to get a "real" value for available memory in Linux. What the operating system displays as used by a process is no guarantee of what is actually allocated for the process.

This is a common issue when developing embedded Linux systems such as routers, where you want to buffer as much as the hardware allows. Here is a link to an example showing how to get this information on Linux (in C):
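
The original link is not reproduced here. As one possible sketch of the same idea, Linux's sysinfo(2) call (Linux-specific, not portable) reports total and free RAM directly:

#include <cstdio>
#include <sys/sysinfo.h>

int main()
{
    struct sysinfo info;
    if (sysinfo(&info) != 0) {
        std::perror("sysinfo");
        return 1;
    }
    // Values are expressed in units of info.mem_unit bytes.
    std::printf("Total RAM: %llu bytes\n",
                (unsigned long long)info.totalram * info.mem_unit);
    std::printf("Free RAM : %llu bytes\n",
                (unsigned long long)info.freeram * info.mem_unit);
    return 0;
}

Note that freeram does not include reclaimable page cache, so it usually understates what is really usable; the MemAvailable approach shown in other answers gives a better estimate.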

mikelong
13

Having read through these answers I'm astonished that so many take the stance that the OP's computer memory belongs to others. It's his computer and his memory to do with as he sees fit, even if it breaks other systems laying a claim to it. It's an interesting question. On a more primitive system I had memavail() which would tell me this. Why shouldn't the OP take as much memory as he wants without upsetting other systems?

Here's a solution that allocates less than half the memory available, just to be kind. Output was:

Required FFFFFFFF

Required 7FFFFFFF

Required 3FFFFFFF

Memory size allocated = 1FFFFFFF

#include <stdio.h>
#include <stdlib.h>

#define MINREQ      0xFFF   // arbitrary minimum

int main(void)
{
    unsigned int required = (unsigned int)-1; // adapt to native uint
    char *mem = NULL; 
    while (mem == NULL) {
        printf ("Required %X\n", required);
        mem = malloc (required);            /* probe: try to allocate this much */
        if ((required >>= 1) < MINREQ) {    /* halve the request for the next attempt */
            if (mem) free (mem);
            printf ("Cannot allocate enough memory\n");
            return (1);
        }
    }

    free (mem);
    mem = malloc (required);
    if (mem == NULL) {
        printf ("Cannot enough allocate memory\n");
        return (1);
    }
    printf ("Memory size allocated = %X\n", required);
    free (mem);
    return 0;
}
Weather Vane
  • 2
    on linux you can use the binutils `free` command (or is it a bash command? maybe) you can launch using `execve` or `system`. A fun approach could also try to allocate (and write to 1) until failure to detect the memory available. not to mention checking speed so that swapping is detected. – v.oddou Nov 20 '14 at 07:29
  • 39
    This is an extremely *horrible* solution. Imagine that you are working on a computer, and suddenly it starts swapping and slows down to a crawl, and some applications fail due to insufficient memory, and a network connection fails, etc. You panic that you have malware, shut down or run antivirus, and find out that this was caused by some dumb application just constantly allocating and freeing humongous amounts of memory that it doesn't even need. – Michael Mar 25 '16 at 16:13
  • @Michael I think I am inclined to agree with you, but then I would not develop code on a machine where anything running really mattered. – Weather Vane Mar 25 '16 at 16:27
  • 4
    Trying to `malloc()` and determine the available memory is a terrible approach to the solution of the question; far from being optimum and usable.... What stops you from using `sysctl()` family of functions and get some readings from OS tunables? Also, the concept of _free memory_ changes from operating system to operating system, as, for instance, FreeBSD and, AFAIK OS X as well, considers unused memory as wasted and uses the memory for some _useful stuff_ (answer to this is out of scope of this topic). Have a look at this https://www.freebsd.org/cgi/man.cgi?query=sysctl&sektion=3 – fnisi May 23 '16 at 03:47
  • 1
    @FehmiNoyan I agree it is not very elegant, but Windows API does not have any `sysctl` family of functions (correct me if I am wrong). In the old days with Borland Turbo C there was `memavail`, but MSVC does not seem to have the equivalent. In another well-received answer `GlobalMemoryStatusEx` was suggested, but programs compiled with MSVC only allow the program about 2Gb of memory anyway. My system has 8Gb. How is that going to affect other programs? But if I need it, and other apps are stopping me from doing as I please, on my own PC, I will close them. – Weather Vane May 23 '16 at 16:46
  • @Michael the way i would suggest would be to only use, say 10%, of available memory, or to only use lots of available memory for a very short period of time if it's an action triggered by the user, rather than a background action. – FluorescentGreen5 Nov 13 '16 at 23:46
  • 1
    With virtual memory, it's possible to malloc more virtual memory than you actually have physically available. malloc() may tell you that you have that memory, but each virtual page only gets a physical page allocated to it when you actually use that memory (triggering a page fault exception, which the OS handles by assigning a free page; possibly swapping something out or dumping cache to free up a page for you). So, you may find that if you try to actually use all of that memory you allocated, you'll start thrashing or even get killed or cause something else to be killed by the OS. – jtchitty May 15 '17 at 18:45
  • @jtchitty so your point is that there *isn't* a way to determine the amount of memory available. I acknowledge that my answer isn't great, but is based on the approx 2GB total memory available to my version of MSVC compiled programs, however that memory is obtained. For example, if there is already 1GB static memory allocated, only approx 1GB is available on the heap. Some have commented that this technique will slug other apps running: not if there is another say 6GB available to *them*. – Weather Vane May 15 '17 at 18:56
  • Mainly, I just wanted to share some knowledge. :) But, yes, I think you've drawn the right conclusion. I don't think there is a good, portable way to find out how much memory is available. The best way I can think of on Linux systems is to use /proc/meminfo, but that's already been said. I'm not a Windows developer, but I know there's no /proc on there (unless maybe you're using Windows Subsystem for Linux). The differences between Linux and other POSIX systems is better described by someone other than me. – jtchitty May 15 '17 at 19:37
  • @jtchitty thanks, please see my recent comment under the most-rep answer. – Weather Vane May 15 '17 at 20:15
  • 1
    With linux you can `malloc()` nearly infinite amounts of RAM as long as you don't write to any of that memory. This is because Linux uses virtual memory that's zeroed by default and only reserves actual RAM when you modify the contents. This has been done because surprisingly many old POSIX compatible programs allocate huge amounts of memory but only write to some of it. – Mikko Rantalainen Aug 24 '21 at 07:28
9

Mac OS X example using sysctl (man 3 sysctl):

#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int mib[2] = { CTL_HW, HW_MEMSIZE };
    u_int namelen = sizeof(mib) / sizeof(mib[0]);
    uint64_t size;
    size_t len = sizeof(size);

    if (sysctl(mib, namelen, &size, &len, NULL, 0) < 0)
    {
        perror("sysctl");
    }
    else
    {
        printf("HW.HW_MEMSIZE = %llu bytes\n", size);
    }
    return 0;
}

(may also work on other BSD-like operating systems ?)

Paul R
5

The code below gives the free and available memory in bytes. It works on FreeBSD, but you should be able to use the same or similar sysctl tunables on your platform to do the same thing (Linux and OS X have sysctl at least).

#include <stdio.h>
#include <errno.h>

#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/vmmeter.h>

int main(){
    int rc;
    u_int page_size;
    struct vmtotal vmt;
    size_t vmt_size, uint_size; 

    vmt_size = sizeof(vmt);
    uint_size = sizeof(page_size);

    rc = sysctlbyname("vm.vmtotal", &vmt, &vmt_size, NULL, 0);
    if (rc < 0){
       perror("sysctlbyname");
       return 1;
    }

    rc = sysctlbyname("vm.stats.vm.v_page_size", &page_size, &uint_size, NULL, 0);
    if (rc < 0){
       perror("sysctlbyname");
       return 1;
    }

    printf("Free memory       : %ld\n", vmt.t_free * (u_int64_t)page_size);
    printf("Available memory  : %ld\n", vmt.t_avm * (u_int64_t)page_size);

    return 0;
}

Below is the output of the program, compared with the vmstat(8) output on my system.

~/code/memstats % cc memstats.c 
~/code/memstats % ./a.out 
Free memory       : 5481914368
Available memory  : 8473378816
~/code/memstats % vmstat 
 procs      memory      page                    disks     faults         cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr ad0 ad1   in   sy   cs us sy id
 0 0 0   8093M  5228M   287   0   1   0   304 133   0   0  112 9597 1652  2  1 97
fnisi
4

Linux: currently free memory via sysconf(_SC_AVPHYS_PAGES) and get_avphys_pages()

The total RAM was covered at https://stackoverflow.com/a/2513561/895245 with sysconf(_SC_PHYS_PAGES);.

Both sysconf(_SC_AVPHYS_PAGES) and get_avphys_pages() are glibc extensions to POSIX that give instead the total currently available RAM pages.

You then just have to multiply them by sysconf(_SC_PAGE_SIZE) to obtain the current free RAM.

Minimal runnable example at: C - Check available free RAM?
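
With that link not reproduced here, a minimal sketch of the approach might look like this (glibc-specific, since _SC_AVPHYS_PAGES is not in POSIX):

#include <unistd.h>
#include <cstdio>

int main()
{
    // glibc extension: number of physical pages currently available.
    long avail_pages = sysconf(_SC_AVPHYS_PAGES);
    long page_size   = sysconf(_SC_PAGE_SIZE);
    if (avail_pages < 0 || page_size < 0) {
        std::perror("sysconf");
        return 1;
    }
    std::printf("Currently free RAM: %lld bytes\n",
                (long long)avail_pages * page_size);
    return 0;
}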

Ciro Santilli OurBigBook.com
3

The "official" function for this is was std::get_temporary_buffer(). However, you might want to test whether your platform has a decent implemenation. I understand that not all platforms behave as desired.

MSalters
1

Instead of trying to guess, have you considered letting the user configure how much memory to use for buffers, as well as assuming somewhat conservative defaults? This way you can still run (possibly slightly slower) with no override, but if the user knows there is X memory available for the app, they can improve performance by configuring that amount.
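
A hedged sketch of that suggestion, using a hypothetical environment variable (the name MYAPP_BUFFER_MB is made up for this example) with a conservative fallback:

#include <cstdlib>
#include <cstddef>

// Buffer budget in bytes: user override if present, otherwise a conservative default.
std::size_t bufferBudgetBytes()
{
    const std::size_t conservativeDefault = 256 * 1024 * 1024;  // 256 MiB
    if (const char* env = std::getenv("MYAPP_BUFFER_MB")) {
        long mb = std::strtol(env, nullptr, 10);
        if (mb > 0)
            return static_cast<std::size_t>(mb) * 1024 * 1024;
    }
    return conservativeDefault;
}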

Mark B
1

Here is a proposal to get the available memory on the Linux platform:

#include <fstream>
#include <string>

/// Provides the available RAM in kibibytes (1 KiB = 1024 B) on Linux (the "MemAvailable" field in /proc/meminfo).
/// For more info about /proc/meminfo: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-meminfo
long long getAvailableMemory()
{
  long long memAvailable = -1;
  std::ifstream meminfo("/proc/meminfo");
  std::string line;
  while (std::getline(meminfo, line))
  {
    if (line.find("MemAvailable:") != std::string::npos)
    {
      const std::size_t firstWhiteSpacePos = line.find_first_of(' ');
      const std::size_t firstNonWhiteSpaceChar = line.find_first_not_of(' ', firstWhiteSpacePos);
      const std::size_t nextWhiteSpace = line.find_first_of(' ', firstNonWhiteSpaceChar);
      const std::size_t numChars = nextWhiteSpace - firstNonWhiteSpaceChar;
      const std::string memAvailableStr = line.substr(firstNonWhiteSpaceChar, numChars);
      memAvailable = std::stoll(memAvailableStr);
      break;
    }
  }

  return memAvailable;
}
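
Note that the MemAvailable field only exists since Linux 3.14; on older kernels the function above returns -1, and a fallback (for example summing MemFree, Buffers and Cached from the same file) would be needed.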
Mitchou