For some purposes it would be efficient to allocate a huge amount of virtual address space and have only the pages that are actually accessed paged in. My understanding is that allocating a large block is essentially instantaneous, because it does not actually grab physical pages:
char* p = new char[1024*1024*1024*256];
OK, the line above was wrong, as pointed out: the size expression is evaluated in 32-bit int arithmetic and overflows.
I expect that new calls malloc, which calls sbrk, and that when I access a location 4 GB beyond the start of the block, the kernel tries to extend the task's memory by that much?
Here is the full program:
#include <cstdint>

int main() {
    constexpr uint64_t GB = 1ULL << 30;
    char* p = new char[256 * GB];  // allocate a large block of virtual space
    p[0] = 1;
    p[1000000000] = 1;
    p[2000000000] = 1;
}
Now I get std::bad_alloc when attempting the huge allocation, so plain new/malloc evidently won't work for this.
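For reference, the same check can be written with nothrow new, which returns a null pointer instead of throwing; this is just a minimal way to observe the refusal, not a fix:

#include <cstdint>
#include <cstdio>
#include <new>

int main() {
    constexpr uint64_t GB = 1ULL << 30;
    // new (std::nothrow) returns nullptr instead of throwing std::bad_alloc
    char* p = new (std::nothrow) char[256 * GB];
    if (!p) {
        std::puts("default allocator refused 256 GB");
        return 1;
    }
    p[0] = 1;
    delete[] p;
}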
I was under the impression that mmap was for mapping files, but since it has been suggested, I am looking into it.
OK, so mmap seems to support allocating big areas of virtual memory, but it appears to require a file descriptor. Creating huge in-memory data structures could be a win, but not if they have to be backed by a file:
The following code uses mmap, even though I don't like the idea of attaching to a file. I did not know what address in virtual memory to request, so I picked 0x8000000000. mmap returns MAP_FAILED ((void*)-1), so obviously I'm doing something wrong:
#include <cstdint>
#include <cstdio>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main() {
    constexpr uint64_t GB = 1ULL << 30;
    void* addr = (void*)0x8000000000ULL;     // arbitrary address hint
    int fd = creat("garbagefile.dat", 0660); // scratch file to back the mapping
    char* p = (char*)mmap(addr, 256 * GB, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {                   // mmap signals failure with MAP_FAILED, not NULL
        perror("mmap");
        return 1;
    }
    p[0] = 1;
    p[1000000000] = 1;
    p[2000000000] = 1;
    close(fd);
}
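Rereading the mmap man page, MAP_ANONYMOUS (perhaps combined with MAP_NORESERVE) looks like it might avoid the file entirely. Below is a sketch of what I think that would look like; I have not verified that it actually behaves as a sparse 256 GB allocation, so treat the flags and the on-demand paging as assumptions on my part:

#include <cstdint>
#include <cstdio>
#include <sys/mman.h>

int main() {
    constexpr uint64_t GB = 1ULL << 30;
    // Anonymous mapping: no file descriptor (fd = -1); MAP_NORESERVE asks the
    // kernel not to reserve swap for the whole range up front.
    char* p = (char*)mmap(nullptr, 256 * GB, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    p[0] = 1;                 // hoping each touched page is faulted in on demand
    p[1000000000] = 1;
    p[2000000000] = 1;
    munmap(p, 256 * GB);
}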
Is there any way to allocate a big chunk of virtual memory and access pages sparsely, or is this not doable?