
Is there anything wrong with the optimization of overloading the global `operator new` to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption. But does the OS (or the runtime allocator) already do something equivalent, making this technique redundant, or does it do its best to conserve memory?

Basically, given that memory usage isn't as much of an issue as performance, should I do this?

– Clark Gaebel
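For concreteness, a minimal sketch of the kind of overload being asked about might look like this (the malloc/free plumbing and the rounding helper are illustrative only, not a recommendation):

#include <cstddef>
#include <cstdlib>
#include <new>

// Round n up to the next power of two (assumes n <= SIZE_MAX/2 + 1).
static std::size_t next_pow2(std::size_t n) {
    std::size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

void* operator new(std::size_t size) {
    // operator new must return a distinct pointer even for size 0
    if (void* p = std::malloc(next_pow2(size ? size : 1)))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}

// A complete replacement would also cover operator new[]/delete[]
// and the std::nothrow variants.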
  • Just making sizes a power of 2 doesn't force `new` to use buddy-system allocation, so you'll still get fragmentation. Also, when `new` allocates a block, I think it actually reserves several bytes at negative offsets from the pointer you get, where it tucks its own bookkeeping information, depending on the build. BTW, the buddy system is not especially efficient in either speed or memory. – Mike Dunlavey Jun 01 '10 at 21:33

4 Answers


The default memory allocator is probably quite smart and will deal well with large numbers of small- to medium-sized objects, as this is the most common case. For all allocators, the number of bytes requested is not always the amount allocated. For example, if you say:

char * p = new char[3];

the allocator almost certainly does something like:

char * p = new char[16];   // or some minimum power of 2 block size

Unless you can demonstrate that you have an actual problem with allocations, you should not consider writing your own version of new.
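On glibc you can actually observe this over-allocation with the non-portable malloc_usable_size extension; the exact numbers are allocator-dependent:

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // malloc_usable_size is a glibc extension

int main() {
    void* p = std::malloc(3);
    // Prints the size of the block the allocator actually reserved;
    // on a typical 64-bit glibc this is larger than 3 (e.g. 24 bytes).
    std::printf("requested 3, usable %zu\n", malloc_usable_size(p));
    std::free(p);
    return 0;
}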

  • Some memory allocators have different pools for various block sizes. For small allocations, a pool of small blocks is more efficient than a pool of large blocks. – Thomas Matthews Jun 01 '10 at 20:15
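As a rough illustration of such size-segregated pools, here is a toy sketch (single-threaded, power-of-two classes from 8 bytes to 1KB, no error handling):

#include <cassert>
#include <cstddef>
#include <cstdlib>

// One intrusive free list per power-of-two size class.
struct Node { Node* next; };

constexpr int kClasses = 8;                 // 8, 16, ..., 1024 bytes
Node* free_list[kClasses] = {};

std::size_t class_size(int c) { return std::size_t(8) << c; }

int class_for(std::size_t n) {
    assert(n <= 1024);                      // toy limit of this sketch
    int c = 0;
    while (class_size(c) < n) ++c;
    return c;
}

void* pool_alloc(std::size_t n) {
    int c = class_for(n);
    if (Node* node = free_list[c]) {        // reuse a freed block of this class
        free_list[c] = node->next;
        return node;
    }
    return std::malloc(class_size(c));      // fall back to the system allocator
}

void pool_free(void* p, std::size_t n) {    // caller passes the original size
    auto* node = static_cast<Node*>(p);
    node->next = free_list[class_for(n)];
    free_list[class_for(n)] = node;
}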

You should try implementing it for fun. As soon as it works, throw it away.

– fredoverflow

Should you do this? No.

Two reasons:

  • Overloading the global new operator will inevitably cause you pain, especially when external libraries take a dependency on the stock versions.
  • Modern OS heap implementations already take fragmentation into account. If you're on Windows, you can look into the "Low Fragmentation Heap" if you have a special need; a sketch of enabling it follows this list.
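For reference, opting a heap into the Low Fragmentation Heap looks roughly like this (on Windows Vista and later the LFH is already enabled by default):

#include <windows.h>

int main() {
    // 2 == LFH; see HeapSetInformation / HeapCompatibilityInformation docs.
    ULONG lfh = 2;
    HeapSetInformation(GetProcessHeap(),
                       HeapCompatibilityInformation,
                       &lfh, sizeof(lfh));
    return 0;
}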

To summarize, don't mess with it unless you can prove (by profiling) that it is a problem to begin with. Don't optimize prematurely.

– Alienfluid

I agree with Neil, Alienfluid and Fredoverflow that in most cases you don't want to write your own memory allocator. Still, I wrote my own memory allocator about 15 years ago and have refined it over the years (the first version redefined malloc/free; later versions use the global new/delete operators), and in my experience the advantages can be enormous:

  • Memory-leak tracing can be built into your application; no need to run external tools that slow your application down.
  • If you implement different strategies, you can sometimes pin down difficult problems just by switching to a different memory-allocation strategy.
  • To find difficult memory-related bugs, you can easily add logging to your memory allocator and refine it further (e.g. log all news and deletes for blocks of exactly N bytes).
  • You can use page-allocation strategies, where you allocate a complete 4KB page and set the page protection so that buffer overflows are caught immediately.
  • You can add logic to delete to report when memory is freed twice.
  • It's easy to add a red zone to memory allocations (a checksum before the allocated block and one after it) to find buffer overflows/underflows more quickly; a toy sketch follows this list.
  • ...
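As a toy illustration of the red-zone idea from the last bullet, here is a sketch that plants a canary before and after each block and checks both on free (the layout and canary value are illustrative only):

#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

static const std::uint32_t kCanary = 0xDEADBEEF;

// Layout: [front canary][stored size][user bytes ...][back canary]
void* debug_alloc(std::size_t n) {
    char* raw = static_cast<char*>(
        std::malloc(n + 2 * sizeof kCanary + sizeof n));
    std::memcpy(raw, &kCanary, sizeof kCanary);
    std::memcpy(raw + sizeof kCanary, &n, sizeof n);
    char* user = raw + sizeof kCanary + sizeof n;
    std::memcpy(user + n, &kCanary, sizeof kCanary);
    return user;
}

void debug_free(void* p) {
    char* user = static_cast<char*>(p);
    char* raw  = user - sizeof kCanary - sizeof(std::size_t);
    std::size_t n;
    std::uint32_t front, back;
    std::memcpy(&front, raw, sizeof front);
    std::memcpy(&n, raw + sizeof front, sizeof n);
    std::memcpy(&back, user + n, sizeof back);
    assert(front == kCanary && "buffer underflow detected");
    assert(back  == kCanary && "buffer overflow detected");
    std::free(raw);
}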
– Patrick
  • Each one of the things you mentioned (except for maybe the second point) can be achieved using Application Verifier (http://www.microsoft.com/downloads/details.aspx?familyid=c4a25ab9-649d-4a1b-b4a7-c9d8b095df18&displaylang=en) or UMDH (http://msdn.microsoft.com/en-us/library/ff558947(VS.85).aspx). Seriously, writing your own heap manager is really not worth the risk in terms of increased code size, increased security attack surface, increased risk of bugs etc. – Alienfluid Jun 01 '10 at 20:44
  • I tried all the Microsoft utilities (Application Verifier, UMDH, Gflags, ...) and I couldn't achieve the same results as with my own memory allocator. For example, the Microsoft tools can only take memory snapshots and then compare two snapshots, and not report the leaks at the end of the application including the call stack, as I can do. Additional factors to take into account: my applications can take easily 500MB memory, up to several GB of memory (in a 64-bit env). Working with memory snapshots is then simply impossible. – Patrick Jun 01 '10 at 20:55
  • I'm using gcc, and what you're saying is done by Valgrind. I think it was a fun little project overloading the global new/delete, but impractical. Therefore, I'll go with trusting the system allocator. – Clark Gaebel Jun 01 '10 at 21:25
  • Patrick - Application Verifier will trace all allocations (include stack trace for alloc/free) and break into the debugger if any memory was either not freed or double freed on application exit. Give it a shot again :). – Alienfluid Jun 01 '10 at 21:59
  • @Alienfluid, I tried Application Verifier, but I couldn't get it to report memory leaks. I posted a question (http://stackoverflow.com/questions/2955858/how-to-use-application-verifier-to-find-memory-leaks) but I got no real answer (only answers to use other leak-finding mechanisms, or not to write leaks at all). Tips? – Patrick Jun 03 '10 at 06:44