I have created a virtual RAM class to speed up memory allocation at runtime.
class VRam
{
    unsigned char ***data;
    ...
public:
    VRam(ulli length);
    void *allocate(ulli size);
    ...
};
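(For reference, the idea behind allocate() is roughly the following. This is only a simplified sketch I am adding for clarity, not my actual implementation, which also splits the buffer into blocks; ulli is just a typedef for unsigned long long int.)

typedef unsigned long long int ulli;

// Simplified sketch of the idea: grab one big buffer up front and
// hand out consecutive slices of it, then release everything at once.
class SimpleVRam
{
    unsigned char *buffer;
    ulli capacity;
    ulli used;
public:
    SimpleVRam(ulli length) : buffer(new unsigned char[length]), capacity(length), used(0) {}
    ~SimpleVRam() { delete[] buffer; }

    void *allocate(ulli size)
    {
        if (used + size > capacity)
            return 0;               // out of preallocated space
        void *p = buffer + used;
        used += size;
        return p;
    }

    void clear() { used = 0; }      // "free" everything in one step
};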
In my program, small arrays and variables are allocated in groups, so I allocate one huge array at the beginning, hand out parts of it as needed, and later clear the whole thing at once. The problem is the following: I recently updated the class to support huge arrays by splitting them into smaller blocks. My program worked perfectly after this update, but even when I set the memory limit to 2 GiB (this amount of space is allocated at the beginning), the system monitor showed that only about 50 MiB was used, which is strange. After some testing:
ulli size = 2073741824;
int blocknum = size / VBLOCK_SIZE;
VRam *vr = new VRam(size);
int **arrs = new int*[blocknum];
for (int i = 0; i < blocknum; ++i) {
    //cout<<i<<endl;
    arrs[i] = (int*)vr->allocate(VBLOCK_SIZE);
    arrs[i][0] = i; //<<<<<---------------
    for (int j = 0; j < VBLOCK_SIZE / 4; ++j) {
        //cout<<" "<<j<<endl;
        arrs[i][j] = j+i; //<<<<-------------
    }
}
I got the following results: if I write to all of the allocated memory (the second <<<----), it really is allocated and the program uses the specified amount of space. If I don't write to it at all, or only to the first few bytes of each block (the first <<<---), then the memory is not actually used. I can even specify 16 GiB on my laptop with 8 GiB of RAM and it still works. This looks like some kind of memory optimization, but the whole reason I created this class was to trade memory for speed. Does this cause any speed problems, and if so, how can I get around it without writing to every byte of the memory? I am compiling on Ubuntu 16.04 LTS with qmake (Qt 5.5.1, GCC 5.2.1 20151129, 64-bit).
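I assume the same thing would happen with a plain new[] as well, so it is probably not specific to my class. An untested sketch of what I mean:

#include <iostream>

int main()
{
    // Untested sketch: allocate 2 GiB but do not touch it at first.
    // I expect the system monitor to show almost no used memory
    // until the bytes are actually written.
    const unsigned long long size = 2147483648ULL;
    unsigned char *buf = new unsigned char[size];

    std::cout << "allocated, press enter to write..." << std::endl;
    std::cin.get();

    for (unsigned long long i = 0; i < size; i += 4096) // touch one byte per 4 KiB page
        buf[i] = 1;

    std::cout << "written, press enter to exit..." << std::endl;
    std::cin.get();

    delete[] buf;
    return 0;
}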