
What is the difference between the two allocations below:

for (i = 0; i < 10000000; i++)
    P = new CMyObject;

And

P = new CMyObject[10000000];

Will the first allocation method cause more heap fragmentation and consume more actual memory?

alancc
  • The first makes 10000000 new pointers to CMyObject, which are immediately leaked after each iteration ends. The second makes a pointer to an array of 10000000 CMyObjects. – Cory Kramer Feb 19 '14 at 21:00
  • The first one leaks a lot of memory. Both are asking for trouble, though. Just use a vector [see the sketch below]. – chris Feb 19 '14 at 21:00
  • You cannot access 9999999 of the allocated `CMyObject`s in the first allocation method. – yizzlez Feb 19 '14 at 21:02
  • @awesomeyi I think he knows that; I believe the code is merely an example to make the question clear. – Nowayz Feb 19 '14 at 21:09
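A minimal sketch of the std::vector approach suggested in the comments (my own illustration; CMyObject here is a stand-in for the question's class, assumed default-constructible):

#include <vector>

struct CMyObject { int value = 0; };  // stand-in for the question's class

int main()
{
    // One contiguous allocation; the memory is freed automatically
    // when v goes out of scope, so no delete of either form is needed.
    std::vector<CMyObject> v(10000000);
    return 0;
}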

4 Answers


The first allocates 10000000 elements independently; the objects could in principle be scattered all over the virtual memory space. The second allocates a single array of 10000000 contiguous elements.

In the first case, you have to call delete on each instance separately (which you can't do here, because P is overwritten on every iteration, so you have a memory leak). In the second case, you need to call delete [] on P to deallocate the whole array.
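To make the two cleanup forms concrete, here is a minimal sketch (my own illustration, assuming a trivial, default-constructible CMyObject; the per-object version has to store every pointer so that each one can actually be deleted):

#include <vector>

struct CMyObject { int value = 0; };

int main()
{
    const int N = 10000000;

    // Per-object allocation: each new needs its own delete,
    // so every pointer must be kept.
    std::vector<CMyObject*> objects;
    objects.reserve(N);
    for (int i = 0; i < N; ++i)
        objects.push_back(new CMyObject);
    for (CMyObject* p : objects)
        delete p;          // one delete per new

    // Single contiguous array: one new [], one matching delete [].
    CMyObject* arr = new CMyObject[N];
    delete [] arr;
    return 0;
}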

juanchopanza
  • This is a good answer, but I don't think it really addresses whether the memory becomes fragmented if these objects make further allocations of their own after the base structure is created; it's hard to say without knowing what the object is. – Nowayz Feb 19 '14 at 21:08

Yes.

The overhead associated with each memory allocation depends on the OS and whether the code is built with or without debugging symbols.

Regardless, there is positive overhead per allocation. Hence, the overhead of allocating N objects in one call is substantially less than that of allocating one object N times, especially when N is 10000000.

Take a look at the following code:

#include <cstdlib>
#include <iostream>

struct Object
{
   Object() : i(0) {}
   int i;
};

int N = 1000000;

void test1()
{
   // One allocation for all N objects. Leaked deliberately so the
   // process's memory footprint can be inspected before it exits.
   Object* p = new Object[N];
   (void)p;
}

void test2()
{
   // N separate allocations, one per object. Also leaked deliberately.
   for (int i = 0; i != N; ++i)
      new Object;
}

int main(int argc, char** argv)
{
   if (argc < 2)
      return 1;

   int i = std::atoi(argv[1]);
   if (i == 1)
      test1();
   else
      test2();

   // Pause so the memory use can be read from the task manager.
   std::cout << "Enter a number: ";
   std::cin >> i;
   return 0;
}

Platform: cygwin32, Compiler: g++ without debugging symbols

  Memory used for test1:  4,760K
  Memory used for test2: 16,492K

Platform: Windows 7, 64 bit, Compiler: Visual Studio 2008 without debugging symbols

  Memory used for test1:  4,936K
  Memory used for test2: 16,712K

Platform: Windows 7, 64 bit, Compiler: Visual Studio 2008 with debugging symbols

  Memory used for test1:  5,016K
  Memory used for test2: 48,132K

There's also the extra bookkeeping that has to be done to make sure that the allocated memory is deallocated. The point of this exercise was just to demonstrate the overhead costs associated with the two ways of allocating memory.
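As a rough sanity check on those numbers (a back-of-the-envelope estimate of mine, not part of the original measurement): if each call to new carries roughly X bytes of bookkeeping, the loop pays that cost N times while the single new[] pays it once. With N = 1,000,000 and sizeof(Object) = 4:

  test1: X + 4 * 1,000,000   bytes ≈  4,000 K
  test2: (X + 4) * 1,000,000 bytes ≈ 16,000 K  when X ≈ 12

which is consistent with the release-build figures above.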

R Sahu
  • No it's not; provide a benchmark to substantiate this. – Nowayz Feb 19 '14 at 21:06
  • @Nowayz, I just added some preliminary benchmarks. Do you have any reason to believe my conclusions do not apply to some use cases? – R Sahu Feb 19 '14 at 21:35
Why would the second allocation consume more memory? After the first CMyObject is allocated, the second is allocated right after it on the next iteration, so I would think the second CMyObject is placed contiguously after the first, and likewise for the third. – alancc Feb 20 '14 at 00:31
  • @user2704265, if overhead for each call to new is X bytes, the second allocation will still require that overhead. Same with the third allocation, the fourth allocation, and so forth. In the first case, the total memory used by the OS is X+sizeof(Object)*N. In the second case, the total memory used by the OS is (X+sizeof(Object))*N. – R Sahu Feb 20 '14 at 02:26
I up-voted this because of the good benchmarks, but my point remains: assuming there is some attempt at memory management at all, the additional memory consumption is just what is required to store the pointers to your objects. The excess memory measured is just a result of the section allocation, or the storage of pointers. (malloc cannot allocate less than 4096 bytes, after all.) – Nowayz Feb 22 '14 at 08:33
In the case of 10,000,000 objects on a 32-bit OS, this would take 40,000,000 bytes of (4-byte) pointers, which is 39,062.5 KB, in addition to whatever slight cost there is of dynamically allocating a section. I just want to make it clear that the cost of doing this should not be exaggerated, because adding more objects has a very predictable and generally negligible cost. – Nowayz Feb 22 '14 at 08:40
  • @Nowayz, thanks for the up-vote. The cost of storing pointers is negligible, as you correctly pointed out. The significant cost is associated with what the underlying OS has to do to manage memory. Again, your point about malloc not being able to allocate less than 4096 bytes is very good. When you call malloc 10,000,000 times, the cost, in terms of total memory used, is significant. – R Sahu Feb 22 '14 at 21:41

In the first case you are allocating 10000000 objects, but only the last one remains available, because you overwrite the previously allocated pointer on each iteration. ---> Memory leak

In the second case you allocate an array of 10000000 objects. You can delete those with

delete [] P;
INS
  • Very true (and well spotted), but I'm sure the user meant P[i] = new CMyObject; for the purposes of this question. – Tim Bergel Feb 19 '14 at 21:18

Each allocation consumes a bit of time and uses a bit of extra memory (I would assume; this may be avoidable). So method 1 is certainly going to be slower, will very probably use more memory, and would probably cause more fragmentation.
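A minimal timing sketch of that claim (my own illustration, assuming a trivial CMyObject; absolute numbers will vary by platform and allocator):

#include <chrono>
#include <iostream>
#include <vector>

struct CMyObject { int value = 0; };

int main()
{
    const int N = 1000000;
    using clock = std::chrono::steady_clock;

    // Method 1: one heap call per object.
    auto t0 = clock::now();
    std::vector<CMyObject*> ptrs;
    ptrs.reserve(N);
    for (int i = 0; i < N; ++i)
        ptrs.push_back(new CMyObject);
    auto t1 = clock::now();

    // Method 2: a single heap call for the whole array.
    CMyObject* arr = new CMyObject[N];
    auto t2 = clock::now();

    using us = std::chrono::microseconds;
    std::cout << "loop : " << std::chrono::duration_cast<us>(t1 - t0).count() << " us\n"
              << "array: " << std::chrono::duration_cast<us>(t2 - t1).count() << " us\n";

    for (CMyObject* p : ptrs)
        delete p;
    delete [] arr;
    return 0;
}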

Tim Bergel