-2

I have the following code:

```cpp
int main() {
  int N = 1000000;
  int D = 1;

  double **POINTS = new double *[N];
  for (int i = 0; i < N; ++i) POINTS[i] = new double[D];
  for (int j = 0; j < D; ++j) POINTS[0][j] = 0;

  for (int i = 0; i < N; ++i)
  {
    for (int j = 0; j < D; ++j)
    {
        POINTS[i][j] = 3.14;
    }
  }
}
```

If the size of each pointer is 8 bytes, N = 10^6, and D = 1, the expected size of POINTS is 8 * 10^6 * 1 / 1000 / 1000 = 8 MB, but in fact this program eats 42 MB of memory. If N = 2 * 10^6, the expectation is 16 MB, but the actual usage is 84 MB. Why?

danielleontiev
  • why do you think that the only memory *eaten* by the program is the pointers? you are using `new` with sizes > 0, so there **for sure** is more memory used than simply for declaring pointers – Fureeish Nov 12 '17 at 22:11
  • @Fureeish, how to correctly calculate the size of `POINTS`? – danielleontiev Nov 12 '17 at 22:13
  • @danielleontiev you can't, because there is extra overhead involved on each `new` by the memory manager, and you don't have access to know what that overhead actually is. And BTW, your calculation is wrong. The total size you are allocating is `(sizeof(double*) * 10^6) + (sizeof(double) * 10^6)` = 15MB + overhead – Remy Lebeau Nov 12 '17 at 22:14
  • You basically are just defining double POINTS[N][D]; the size should be 8.6 MB for that structure. What do you mean by “size” in this context? If you are talking about program size it could be many things esp if you use -g option – Philip Brack Nov 12 '17 at 22:16
  • @danielleontiev -- 8 bytes for the pointer, but where in your calculation do you include `sizeof(double)`, since the system has to find a place for the double? Let's say it isn't a `double`, but something like this: `struct foo { int x[100]; };`. – PaulMcKenzie Nov 12 '17 at 22:16
  • @PhilipBrack I mean the memory usage which I can see in the output of `top` command – danielleontiev Nov 12 '17 at 22:18
  • The top will include the allocated heap and stack. That is configured by the runtime initially and not your program code – Philip Brack Nov 12 '17 at 22:19
  • @danielleontiev -- Do things [this way](https://stackoverflow.com/questions/21943621/how-to-create-a-contiguous-2d-array-in-c/21944048#21944048) instead of creating all of that potential memory fragmentation. What do your results yield if you allocate things this way? – PaulMcKenzie Nov 12 '17 at 22:25
  • @PaulMcKenzie, I have to approximately estimate the memory usage of a very old C++ program without the possibility of changing it, so I cannot use your variant of creation, but thanks for the advice – danielleontiev Nov 12 '17 at 22:30

1 Answer

3

There are lots of possible reasons:

  • Every memory allocation probably comes with some overhead so the memory manager can keep track of things. Lots of small allocations (like you have) mean you probably have more tied up in overhead than you do in data.
  • Memory normally comes in "pages". If you dynamically allocate 1 byte, then your size likely grows by the size of 1 page (the first time; not every 1-byte allocation will get you a whole new page).
  • Objects may have padding applied. If you allocate one byte, it probably gets padded out to "word size", so you use more than you think.
  • As you allocate and free objects you can create "holes" (fragmentation). You want 8 bytes but there is only a 4-byte hole in this page? You'll get a whole new page.

In short, there is no simple way to explain why your program is using more memory than you think it should, and unless you are having problems you probably shouldn't care. If you are having problems, `valgrind` and similar tools will help you find them.

Last point: dynamically allocated 2d arrays are one of the easiest ways to create the problems mentioned above.

John3136