
I have a computer with 128 GB of RAM, running Linux (3.19.5-200.fc21.x86_64). However, I cannot allocate more than ~30 GB of RAM in a single process. Beyond this, malloc fails:

#include <stdlib.h>
#include <iostream>

int main()
{
   size_t gb_in_bytes = size_t(1) << size_t(30); // 1 GiB in bytes (2^30).
   // Try to allocate a single block of i GiB.
   for (size_t i = 25; i < 35; ++i) {
      size_t n = i * gb_in_bytes;
      void *p = ::malloc(n);
      std::cout << "allocation of 1 x " << (n / double(gb_in_bytes))
                << " GB of data. Ok? " << ((p == 0) ? "nope" : "yes") << std::endl;
      ::free(p);
   }
}

This produces the following output:

/tmp> c++ mem_alloc.cpp && a.out

allocation of 1 x 25 GB of data. Ok? yes
allocation of 1 x 26 GB of data. Ok? yes
allocation of 1 x 27 GB of data. Ok? yes
allocation of 1 x 28 GB of data. Ok? yes
allocation of 1 x 29 GB of data. Ok? yes
allocation of 1 x 30 GB of data. Ok? yes
allocation of 1 x 31 GB of data. Ok? nope
allocation of 1 x 32 GB of data. Ok? nope
allocation of 1 x 33 GB of data. Ok? nope
allocation of 1 x 34 GB of data. Ok? nope

I searched for quite some time, and found that this is related to the maximum virtual memory size:

~> ulimit -all
[...]
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
virtual memory          (kbytes, -v) 32505856
[...]
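
As a cross-check, the same limit can be read from inside the process itself. Below is a minimal sketch using the POSIX getrlimit call; RLIMIT_AS is the address-space resource behind ulimit -v, except that the rlimit values are in bytes while ulimit -v prints kB:

#include <sys/resource.h>
#include <cstdio>
#include <iostream>

int main()
{
   // RLIMIT_AS is the address-space (virtual memory) limit that
   // makes the malloc calls above fail once it is exceeded.
   rlimit lim;
   if (getrlimit(RLIMIT_AS, &lim) != 0) {
      std::perror("getrlimit");
      return 1;
   }
   std::cout << "soft limit: " << lim.rlim_cur << " bytes, "
             << "hard limit: " << lim.rlim_max << " bytes" << std::endl;
   // RLIM_INFINITY in either field means "unlimited".
}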

I can increase this limit to ~64 GB via ulimit -v 64000000, but no further; beyond that, I get "Operation not permitted" errors:

~> ulimit -v 64000000
~> ulimit -v 65000000                                                                                                                                  
bash: ulimit: virtual memory: cannot modify limit: Operation not permitted                                                                              
~> ulimit -v unlimited
bash: ulimit: virtual memory: cannot modify limit: Operation not permitted 

Some more searching revealed that in principle it should be possible to set these limits via the "as" (address space) entry in /etc/security/limits.conf. However, by doing this, I could only reduce the maximum amount of virtual memory, not increase it.

Is there any way to either lift this limit of virtual memory per process completely, or to increase it beyond 64 GB? I would like to use all of the physical memory in a single application.

EDIT:

  • Following Ingo Leonhardt, I tried ulimit -v unlimited after logging in as root rather than as a standard user. This solves the problem for root: the program can then allocate all of the physical memory. But it works only for root, not for other users. Still, it means that in principle the kernel can handle this just fine, and that there is only a configuration problem.

  • Regarding limits.conf: I tried explicitly adding

      * hard as unlimited
      * soft as unlimited

    to /etc/security/limits.conf, and rebooting. This had no effect. After logging in as a standard user, ulimit -v still returns about 32 GB, and ulimit -v 65000000 still says "permission denied" (while ulimit -v 64000000 works). The rest of limits.conf is commented out, and in /etc/security/limits.d there is only one other, unrelated entry (limiting nproc to 4096 for non-root users). That is, the virtual memory limit must be coming from somewhere other than limits.conf. Any ideas what else could cause ulimit -v not to be "unlimited"?

EDIT/RESOLUTION:

  • It was caused by my own stupidity. I had a long-forgotten program in my user setup which used setrlimit to restrict the amount of memory per process (a sketch of such a program follows below), to prevent Linux from swapping itself to death. It was unintentionally copied from a 32 GB machine to the 128 GB machine. Thanks to Paul, Andrew Janke, and everyone else for helping to track it down. Sorry everyone :/.

  • If anyone else encounters this: search for ulimit/setrlimit in your bash and profile settings and in programs potentially calling those (both your own and the system-wide /etc settings), and make sure that /etc/security/limits.conf does not include this limit. (Or at least try creating a new user, to see whether it happens in your user setup or the system setup.)
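
    For the record, such a self-limiter needs only a few lines. The following is a sketch of what a program like mine presumably looked like (the 32 GiB cap is illustrative, not the actual value):

    #include <sys/resource.h>
    #include <cstdio>

    int main()
    {
       // Cap this process's virtual address space at 32 GiB so that
       // runaway allocations cannot drive the machine into swap.
       // Note: an unprivileged process may lower its hard limit, but
       // can never raise it again afterwards; this is exactly why
       // "ulimit -v 65000000" failed above.
       rlimit lim;
       lim.rlim_cur = rlim_t(32) << 30; // soft limit, in bytes
       lim.rlim_max = rlim_t(32) << 30; // hard limit, in bytes
       if (setrlimit(RLIMIT_AS, &lim) != 0)
          std::perror("setrlimit");
    }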

  • If someone wants to play with this without owning a 128 GB RAM machine: on Amazon EC2, `r3.4xlarge` has 122 GB. It is $1.30/hr retail though; haven't checked the spot market. – Paul May 11 '15 at 16:47
  • Possibly relevant: http://stackoverflow.com/questions/8799481/single-process-maximum-possible-memory-in-x64-linux – Paul May 11 '15 at 16:49
  • Possibly relevant http://stackoverflow.com/questions/7582301/virtual-memory-size-on-linux – sabbahillel May 11 '15 at 16:51
  • 5
    I don't think C has `std::cout`... – crashmstr May 11 '15 at 16:53
  • http://www.linuxquestions.org/questions/linux-general-1/64-bit-linux-virtual-memory-limit-617945/ In a 64-bit OS, the maximum memory space for a single process is 2^48 bytes. (More is theoretically possible, but no chipset on the market actually decodes more than 48 bits of address.) See: http://en.wikipedia.org/wiki/X86-64#Virtual_address_space_details – sabbahillel May 11 '15 at 16:54
  • 5
    try calling `ulimit` as root. At least both `ulimit -v 65000000` and `ulimit -v unlimited` should then be successful – Ingo Leonhardt May 11 '15 at 16:56
  • Use `ulimit -v -H` to view the hard limit - after a fresh login this should show the value you configured in `limits.conf`. –  May 11 '15 at 16:56
  • This is not valid C code. You should decide whether to use C++ or C; mixing both languages in a single file is a bad idea. – too honest for this site May 11 '15 at 17:12
  • Please note that `1ull << 30 != 1GB`, but 1 *GiB*. – edmz May 11 '15 at 17:16
  • @Olaf: It's absolutely valid C++ to use `malloc`/`free`. – cfh May 11 '15 at 17:45
  • Checking `/etc/security/limits.conf` is also useful. For example, I don't have vmem limits for a non-root user (but then, I don't have 128 GB of memory either :( ) – myaut May 11 '15 at 17:45
  • `/etc/security/limits.conf` on a high memory EC2 machine is completely commented out and looks like a defaults file. – Paul May 11 '15 at 17:46
  • @cfh: The original tag was `C`, with no `C++` tag. It is still bad style to use legacy paradigms like `malloc()` & co. in C++, but my primary purpose was to remove the `C` tag. Mission accomplished :-) – too honest for this site May 11 '15 at 18:10
  • @Olaf: `malloc` is OK in C++ when you need large chunks of raw memory (bad for allocating objects). You should use `mmap()` or `boost::interprocess`, but `malloc` should be ok. – myaut May 11 '15 at 18:30
  • Also, `new` seems to be broken: http://stackoverflow.com/questions/30171528/how-to-dynamically-allocate-big-memory-like-10-g-using-new-operator-in-c-on :D – myaut May 11 '15 at 18:35
  • @myaut: Just to state it clearly once and for all: the question was originally tagged `C`, not `C++`, so I assumed C, and C certainly does not include iostream (as @crashmstr also noted). My original point was _not_ about using malloc & co. As the tag has been changed now, I suppose someone (let's call him "Paul") got the point of these comments. However, using malloc() in C++ is still a legacy paradigm. The correct OOP approach would at least be to create a container class and `new` an instance (which would then possibly use `malloc()` internally). – too honest for this site May 11 '15 at 18:48
  • @Olaf Let's call him "Fred". I did change the tag, but I'm not the OP. – Paul May 11 '15 at 20:27
  • @IngoLeonhardt: You are right... it works fine if I change to root and set ulimit -v unlimited there. The program runs fine then and can happily allocate (and commit) 120 GB of memory. However, this only works for the root user. Any idea what to change in the system configuration to make this possible for non-root users too? – cgk May 12 '15 at 18:24
  • @Olaf: This is a Linux problem, not a C or C++ problem. Sorry for tagging it wrong originally; I removed the tags. And, btw, if you want to know why I use malloc instead of, for example, std::vector's resize: it's because with malloc I can get *uninitialized* memory, and doing so is MUCH faster for large memory blocks than default-initializing something, especially if everything will be overwritten later anyway. malloc is a perfectly okay choice for dealing with PODs. – cgk May 12 '15 at 18:25
  • @myaut: I checked /etc/security/limits.conf, it is currently commented out entirely. I tried putting in a "* hard as 124000000" line to set the new limit to ~124 GB, and rebooted, but that made no difference. On reboot, ulimit -v is still at 32 GB. – cgk May 12 '15 at 18:36
  • @cgk, but you should be able to set the limit to values larger than 64 GB by using `ulimit -v`. – myaut May 12 '15 at 19:07
  • @myaut: There is no need in C++ to use a stdlib datatype. There are still options to get along without malloc and without initialization. However, it was mostly the C tag which confused me. So, unless you are one of my students, I really don't care. – too honest for this site May 12 '15 at 19:23
  • @myaut: I tried it again. I explicitly added both "* hard as unlimited" and "* soft as unlimited" to limits.conf and restarted, but I saw no change. After login, "ulimit -v" still says 32 GB, and "ulimit -v 65000000" still says "permission denied" (while "ulimit -v 64000000" works). The limits must be coming from somewhere other than limits.conf. Any other ideas? – cgk May 12 '15 at 20:35
  • 1
    @cgk when you did that, did you reboot or only logout/login? What is `ulimit -v` when run in `/etc/rc.local` automatically at boot (not run manually by you)? You can edit `ulimit -v >/tmp/ulimit.out` into `/etc/rc.local` as root to find this out by looking in the `/tmp/ulimit.out` file that will create. This is an attempt to see if it is set on init and trickles down, or if it is set when you login. – Paul May 12 '15 at 21:22
  • @Paul: Thanks for the tip... that did it. The change to the ulimits was caused by my user setup. Or, I should say, by my own stupidity: a few years ago I apparently wrote a program which sets the virtual space limit in order to prevent Linux from swapping to death, by calling setrlimit (as Andrew Janke pointed out below) to restrict the total amount of virtual memory /o\. I copied it over from my laptop (which has 32 GB of RAM) to the new machine without noticing. It wasn't found because I searched for ulimit, not setrlimit. I feel very stupid now. Thanks everyone! Sorry. – cgk May 16 '15 at 00:22

1 Answer


This is a ulimit and system setup problem, not a C++ problem.

I can run your appropriately modified code on an Amazon EC2 instance of type r3.4xlarge with no problem. These cost less than $0.20/hour on the spot market, so I suggest you rent one and perhaps take a look around in /etc, comparing it to your own setup... or maybe you need to recompile a Linux kernel to use that much memory. But it is not a C++ or gcc problem.

Ubuntu on the EC2 machine was already set up for unlimited process memory.

$ sudo su
# ulimit -v
--> unlimited

This machine has 125 GB of RAM:

# free
             total       used       free     shared    buffers     cached
Mem:     125903992    1371828  124532164        344      22156     502248
-/+ buffers/cache:     847424  125056568
Swap:            0          0          0

I modified the loop limits in your program to go up to 149 GB.
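
The change is just the loop's upper bound; everything else is as in the question (a sketch):

#include <stdlib.h>
#include <iostream>

int main()
{
   size_t gb_in_bytes = size_t(1) << size_t(30); // 1 GiB in bytes (2^30).
   // Probe single-block allocations from 25 GB up to 149 GB.
   for (size_t i = 25; i < 150; ++i) {
      size_t n = i * gb_in_bytes;
      void *p = ::malloc(n);
      std::cout << "allocation of 1 x " << (n / double(gb_in_bytes))
                << " GB of data. Ok? " << ((p == 0) ? "nope" : "yes") << std::endl;
      ::free(p);
   }
}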

Here's the output. Looks good up to 118 GB.

root@ip-10-203-193-204:/home/ubuntu# ./memtest
allocation of 1 x 25 GB of data. Ok? yes
allocation of 1 x 26 GB of data. Ok? yes
[...]
allocation of 1 x 117 GB of data. Ok? yes
allocation of 1 x 118 GB of data. Ok? yes
allocation of 1 x 119 GB of data. Ok? nope
allocation of 1 x 120 GB of data. Ok? nope
[...]
allocation of 1 x 149 GB of data. Ok? nope

Now, about that US$0.17 I spent on this...

  • Very interesting. Almost as interesting as what the OP plans to do with 128 GB of memory. – Carlton May 11 '15 at 17:41
  • @Carlton Another interesting question is why people charge so much for guaranteed VMs as opposed to the spot VMs that might go away. Is it so hard to make one into the other? – Paul May 11 '15 at 17:43
  • @Paul How do you turn spot into guaranteed? Without gambling. – Yakk - Adam Nevraumont May 11 '15 at 17:48
  • @Yakk Gambling was good enough for the mortgage market and the US Treasury... but yeah: basically rent 2 and fail over; as soon as one gets the shutdown notification, switch to the other and make a new backup. The difference in cost is about 6x, so there is room for something contorted to work. The gamble is that the 2 don't go down at the same time if they are in different zones. – Paul May 11 '15 at 17:50
  • 1
    @Paul sure. And if you want to jump through those hoops, do it. It will have development costs. How many hours of development costs to build fail-over systems is worth saving 1$ per hour per "large" CPU cluster? Depends on how many hours of "large" CPU clusters you are renting. – Yakk - Adam Nevraumont May 11 '15 at 18:20
  • @Paul: Thanks for your help, but I already noted in the original post that this is a ulimit problem. The question was: how do I remove this limit? Sorry for tagging the question wrong originally; I removed the C tag. – cgk May 12 '15 at 18:28
  • @Carlton: It is for storing computation intermediates in a scientific application, in quantum chemistry. If you want to get an impression of what this is about, see http://www.iboview.org, which is the program where I saw this originally. – cgk May 12 '15 at 18:30
  • I suspected it was for modeling and simulation. Looks pretty cool, good luck. – Carlton May 12 '15 at 19:14
  • @Paul: actually, making a copy of /etc of a machine on which this works and comparing it with my setup, as you suggested, may be a viable route. I am just a bit scared of it because I have no idea of what to look for, and my knowledge of linux is, unfortunately, rather limited. All my googling only pointed to limits.conf, but this is definitely not where the ulimits come from on my machine. So I am still hoping someone else has another idea of where to look for virtual memory configuration issues. – cgk May 12 '15 at 20:30
  • 1
    @cgk I note that my fresh install of 64 bit Ubuntu 15.04 has unlimited ulimit. Perhaps you need to pull the primary hard drive(s), set them aside, pop another drive in, and reinstall from DVD or a USB stick. I played with the `ulimit` command, and noticed it is **defined by bash**. `ulimit` is not a command executable in the file system, it is in `bash`, and it also has a manpage as a *deprecated* syscall in C.From shell, `help ulimit` Even as root, I could not set a hard limit with `ulimit -H -v [number of kB]`. I could set and undo soft limits as root. Suggest you try that reinstall. – Paul May 12 '15 at 20:51
  • @cgk My old install of Ubuntu 14.04 LTS on a 1st generation i7 also has unlimited `ulimit -v` – Paul May 12 '15 at 20:58
  • @Paul: hm... I am running Fedora. Both the big computer (the one with the 128 GB of RAM), running freshly installed Fedora 21, and my notebook (with 32 GB of RAM), running a fedup'd Fedora 20, return about 32 GB when asked for "ulimit -v". So that must be a Fedora/Red Hat specific "feature". I think I will also look through Fedora/Red Hat forums, and return here if I find a solution. Thanks for pointing out that ulimit is a bash builtin; that is not what I expected. – cgk May 12 '15 at 21:07
  • @cgk your Linux kernel version `3.19.5-200.fc21.x86_64` is newer than the one used in Ubuntu 15.04. I have `3.19.0-16-generic`. Possibly, someone compiled and installed the latest kernel. I wonder, when they did that, if they used settings fully compatible with the amount of memory or system type. The kernel's compile configuration settings might be found in a file called `/boot/config-3.19.5...` but this is not guaranteed. – Paul May 12 '15 at 21:08
  • @cgk I used to use Red Hat/ Pink Tie. Basically, `yum` becomes `apt-get` and I found Ubuntu easier to use. – Paul May 12 '15 at 21:11
  • 1
    I believe the newer POSIX syscalls to replace `ulimit` are `setrlimit`/`getrlimit`. http://pubs.opengroup.org/onlinepubs/9699919799/functions/getrlimit.html. I don't know where their default settings are configured for Fedora, but maybe that's a lead. – Andrew Janke May 12 '15 at 21:12