
I was messing with `malloc` calls, and I wondered how much memory my OS could give me. I tried:

int main() {
    char *String = 0;
    String = malloc (100000000000000); // This is 10^14
    if (String)
        printf ("Alloc success !\n");
    else
        printf ("Alloc failed !\n");
    return 0;
}

And... it worked. 10^14 bytes is roughly 100 terabytes. Is it even possible for a laptop to have so much memory? If that's not possible, how can this be explained?

Jenkinx
  • It's virtual memory, not real memory. – Barmar Jun 26 '17 at 19:21
  • And just for experiment, try writing it. – Eugene Sh. Jun 26 '17 at 19:21
  • Google "optimistic memory allocation". – Barmar Jun 26 '17 at 19:22
  • `String = malloc (100000000000000);` -- try -->> `String = malloc (100000000000000ull);` or `String = calloc (1000000, 100000000);` , if you like tea ... – wildplasser Jun 26 '17 at 19:22
  • What does this imply? – Jenkinx Jun 26 '17 at 19:23
  • 1
    It means you're headed for a crash when you try to use the space. For some reason, Linux thinks it is fun to let you allocate more space than your program can really have. – Jonathan Leffler Jun 26 '17 at 19:25
  • 1
    An OS might serve the allocation request by virtual memory that is not actually backed by physical memory. Means, your `malloc()` might succeed, but actually *using* that memory fails. – DevSolar Jun 26 '17 at 19:25
  • @DevSolar I see, thanks ! – Jenkinx Jun 26 '17 at 19:26
  • @wildplasser `calloc` would actually _use_ the memory to write zeroes to it, so it's not the same. – Jean-François Fabre Jun 26 '17 at 19:27
  • @Jean-FrançoisFabre :yes, that was my intention. Time for a nice cup of tea! – wildplasser Jun 26 '17 at 19:28
  • 1
    @Jean-FrançoisFabre A system can discard zeroed pages though. – David Schwartz Jun 26 '17 at 19:29
  • @wildplasser I tried the 100000000000ull and it also worked. – Jenkinx Jun 26 '17 at 19:29
  • @DavidSchwartz I think that if the pages are not dirtied, they just keep the COW-bit. – wildplasser Jun 26 '17 at 19:31
  • @Jean-FrançoisFabre `calloc` wouldn't try to write anything. It would likewise request fresh virtual memory with (nonexistent) mappings to pages filled with zeroes. – Antti Haapala -- Слава Україні Jun 26 '17 at 19:32
  • @wildplasser: Adding a suffix shouldn't make any difference. An integer constant is always of a type big enough to hold its value. `100000000000000` will typically be of type `long` or `long long`, and will be converted to `size_t` when passed to `malloc` (assuming a proper declaration of `malloc` is visible, which is not the case in the OP's code). – Keith Thompson Jun 26 '17 at 19:37
  • My bad. I am still in the "all integer literals are int" larval stage. – wildplasser Jun 26 '17 at 19:46
  • @DevSolar Just to be clear: modern systems with paging could write all 18TB in principle if the hard drive was large enough. The limitation isn't physical memory, it's the size of the backing store. Individual pages of the virtual address space will be backed on-demand when necessary- first in physical memory, then swapped to the backing store when physical memory is full. – David Jun 26 '17 at 21:34

1 Answer

A 64-bit OS can generate massive amounts of address space. What would limit it?

Backing of address space with physical memory (RAM) is only done when needed.

All the malloc call has to do is return an address. That address need not refer to physical memory until you try to read from it or write to it.

The downside of this behavior is that failing the malloc call is usually the implementation's only chance to tell you nicely that you can't have the memory you are asking for. After that, about all the system can do is terminate the process when it tries to use more memory than the system can back.

Your implementation almost certainly gives you some way to control this behavior either at system level, for each process, or both.

David Schwartz
  • 1
    Sometimes it is not even backed... – Eugene Sh. Jun 26 '17 at 19:23
  • So, if I understand correctly, with the malloc call I only get addresses to write to. But if I were to actually write to those locations, I wouldn't be able to write to all of them, because I won't have actual memory left, right? – Jenkinx Jun 26 '17 at 19:25
  • @Jenkinx Right. The system has overcommitted. This behavior is configurable on many systems. My personal preference, after decades of experience, is to allocate sufficient paging/swap space and disable overcommittment. That isn't always possible though. – David Schwartz Jun 26 '17 at 19:28
  • It is important to understand, @Jenkinx, that you're asking about implementation details. C does not have a concept of successfully allocating a block of memory that you might not actually be able to use fully. But yes, some implementations do exhibit that behavior under some circumstances. – John Bollinger Jun 26 '17 at 19:30
  • 1
    This answer would benefit from briefly mentioning virtual address space and resident memory. – dlasalle Jun 26 '17 at 19:30