
Why does creating a 1D array larger than my memory fail, while creating a 2D array larger than my memory succeeds? I thought the OS gives you virtual memory and you can request as much as you want; it's not until you start reading from and writing to memory, and it becomes part of the resident set, that the hardware constraints become an issue.

On the small VM with 512MB of memory I tried:

1 × 512 MB array: no issue
1 × 768 MB array: no issue
1 × 879 MB array: no issue
1 × 880 MB array: fails
1 × 1024 MB array: fails
1000 × 512 MB arrays: no issue (at this point
    I've allocated 500 GB of virtual memory,
    well exceeding the physical limits)

On a large VM with 8GB of memory, all the above worked.

For this experiment, I used this code:

#include <stdio.h>      /* printf */
#include <stdlib.h>     /* atoi */
#include <unistd.h>     /* usleep */

int main(int argc, char *argv[]) {
    if(argc < 3) {
        printf("main <mb> <times>\n");
        return -1;
    }

    int megabytes = atoi(argv[1]);
    int times = atoi(argv[2]);

    // megabytes    1024 kilobytes      1024 bytes          1 integer
    // --------   * ---------        *  ----------    *     --------
    //              megabyte            kilobyte            4 bytes
    int sizeOfArray = megabytes*1024*1024/sizeof(int);
    long long bytes = (long long)megabytes*1024*1024;  // widen before multiplying to avoid int overflow
    printf("grabbing memory :%dmb, arrayEntrySize:%d, times:%d bytes:%lld\n",
                    megabytes, sizeOfArray, times, bytes);

    int ** array = new int*[times];
    for( int i = 0; i < times; i++) {
        array[i] = new int[sizeOfArray];
    }

    // Park here so the process can be inspected with htop/free; Ctrl-C to quit.
    while(true) {
        usleep(1*1000000);  // 1 second in microseconds
    }

    // Unreachable while the loop above runs; kept to show the intended cleanup.
    for( int i = 0; i < times; i++) {
        delete [] array[i];
    }
    delete [] array;
}

Commands and outputs of experiments on the small 512MB VM:

free -h
              total        used        free      shared  buff/cache   available
Mem:           488M         66M         17M        5.6M        404M        381M
Swap:          511M         72K        511M

./a.out 512 1
grabbing memory :512mb, arrayEntrySize:134217728, times:1 bytes:536870912
./a.out 768 1
grabbing memory :768mb, arrayEntrySize:201326592, times:1 bytes:805306368

./a.out 1024 1
grabbing memory :1024mb, arrayEntrySize:268435456, times:1 bytes:1073741824
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

./a.out 512 1000
grabbing memory :512mb, arrayEntrySize:134217728, times:1000 bytes:536870912
# htop
  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 2768 root      20   0  500G  4912  2764 S  0.0  1.0  0:00.00 ./a.out 512 1000

Commands and outputs of experiments on the large 8GB VM:

free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G         78M        7.6G        8.8M        159M        7.5G
Swap:          511M          0B        511M

./a.out 512 1
grabbing memory :512mb, arrayEntrySize:134217728, times:1 bytes:536870912
./a.out 768 1
grabbing memory :768mb, arrayEntrySize:201326592, times:1 bytes:805306368
./a.out 1024 1
grabbing memory :1024mb, arrayEntrySize:268435456, times:1 bytes:1073741824
./a.out 512 1000 
grabbing memory :512mb, arrayEntrySize:134217728, times:1000 bytes:536870912
# htop
  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1292 root      20   0  500G  6920  2720 S  0.0  0.1  0:00.00 ./a.out 512 1000
joseph
  • `new` needs to have the allocated memory in a contiguous range – you might not have a free block big enough, while an array of 'n' small blocks can have each small block in a different place – Martin Beckett Jul 27 '18 at 16:27
  • *"I thought the OS gives you virtual memory and you can request as much as you want."* Well, that depends on how the OS has been configured. What is the value of `/proc/sys/vm/overcommit_memory`? – eerorika Jul 27 '18 at 16:33
  • cat /proc/sys/vm/overcommit_memory writes 0 to standard out – joseph Jul 27 '18 at 16:43
  • "_While an array of 'n' small blocks can have each small block in a different place_": I only have 512MB available, so `new` definitely can't find a small block for all 1,000 new requests. So I don't think that mapping of virtual to physical happened yet. – joseph Jul 27 '18 at 16:47
  • @joseph how does the program behave after `echo 1 > /proc/sys/vm/overcommit_memory`? – eerorika Jul 27 '18 at 20:29

1 Answer


This is due to the fact that memory is allocated in a chunk based on how much you ask for.

You are asking for a series of relatively small blocks in the 2D array, and each block isn't necessarily next to the others.

However, the 1D array is massive and requires a single full-sized, contiguous memory block, and you may not have a free block of that size available even if you have that much memory available.

  • This answer was already brought up in the comments. This is not the answer, because I only have 512MB available, so new definitely can't find a small block for all 1,000 new requests: in total it ends up using 500GB!!! So I don't think that mapping of virtual to physical happened yet. – joseph Jul 27 '18 at 18:21
  • I see then. I'll modify my answer once I come up with something else. – lordseanington Jul 27 '18 at 18:44
  • @joseph Do you have overcommit enabled? See https://stackoverflow.com/questions/19148296/linux-memory-overcommit-details – Martin Beckett Jul 28 '18 at 00:21
  • @MartinBeckett : overcommit is 0 – joseph Jul 30 '18 at 14:42