
Context: I'm working on a project where a client needs us to use custom dynamic memory allocation instead of allocating objects on the stack. Note that the objects in question have sizes known at compile time and don't even require dynamic allocation, which makes me wonder:

What are some contexts where custom dynamic memory allocation of objects can be better than allocating objects on the stack? (where the size is known at compile time)


An example: if Dog is a class, then instead of just declaring Dog puppy; they want us to do

Dog* puppy = nullptr; 
custom_alloc(puppy);
new(puppy) Dog(); // the constructor
// do stuff
puppy->~Dog(); // the destructor
custom_free(puppy);

The real custom_alloc function is not known to us; to make the program run, the custom_alloc we were given is a wrapper around malloc, and custom_free is a wrapper around free.

I do not like this approach and was wondering when this can actually be useful, or what they are really trying to solve by doing this.

  • 1
    one reason is lifetime, the other is that stack size is limited – Oblivion Jul 11 '19 at 19:28
  • Driver dev (stack severely limited). – Michael Chourdakis Jul 11 '19 at 19:29
  • If allocations/de-allocations align with the normal function scope nesting, then by all means use locals, but often that is not the case. – 500 - Internal Server Error Jul 11 '19 at 19:30
  • 3
    You left out the ugliest part of that code; having to manually fire the destructor. Placement-new has its places (no pun intended), but rarely are those places in normal, everyday code. If this is from some microcontroller environment or some other situationally specific edge case, that's another matter. You said *"they want us to do..."* - did you ask *them* (whoever 'they' are) why? Chances are they either (a) have some reason they consider important, or (b) no one has a clue but that's just the way it's always been for the people still there. – WhozCraig Jul 11 '19 at 19:31
  • This is a general-purpose CLI application. Typically not run on a microcontroller. But could be a possible reason. –  Jul 11 '19 at 19:43
  • @500-InternalServerError The allocs and deallocs do happen in a normal function scope. Nothing too complex. –  Jul 11 '19 at 19:54
  • 1
    I'm wondering why your question is about "stack vs custom allocator" and not "stack vs heap" or "default allocator vs custom allocator"? It seems that there's a missing link in this question which is a default dynamic allocator used by `new/delete`. What I mean is: are you asking "why use custom dynamic allocator" or "why use dynamic allocation at all instead of stack"? – r3mus n0x Jul 11 '19 at 19:58
  • Neither of those. My question is about replacing an object on the stack with an object allocated dynamically by a custom allocator. Is this useful in any cases? –  Jul 11 '19 at 20:26

1 Answer


Possible reasons:

  1. Stack size is limited; while typical thread libraries allocate 1-10 MB for each thread's stack, it's not uncommon for the limit to be set lower for applications where hundreds or thousands of threads are expected to be launched concurrently (e.g. high traffic webservers; Microsoft IIS used to use a 256 KB limit, and only upped it to 512 KB for 64 bit setups).

  2. You may want to keep an object around after the function has returned (without using globals). While NRVO and/or move semantics do mean it's often relatively cheap to return the object by value, when NRVO doesn't apply, copying around a single pointer is cheaper than just about anything else.

  3. Auditing/tracing: They may want to use their custom function for specific types to keep track of memory allocation patterns.

  4. Persistent storage: The allocator may be backed by a memory-mapped file; for structured data, that file may double as long-term storage.

  5. Performance: Custom allocators (e.g. Intel's TBB) have been known to dramatically reduce runtime in certain circumstances. This is more a justification for using a custom allocator instead of the default allocator; custom allocators generally won't beat stack storage (except in really niche cases where memory locality might be improved by removing large objects from the stack and putting them in their own dedicated storage).

  6. (Likely a terrible idea) Avoiding exception handling cleanup overhead. If your classes are RAII, then code has to be generated to clean them up along various code paths in case of an exception. Raw pointers don't generate any such code. Of course, if you don't take measures to perform the cleanup on exception yourself, this means memory leaks, but in rare cases (e.g. when you expect the program to exit completely, and you want the OS to handle memory cleanup) this might provide a minor "benefit".

  7. A combination of the above: They may want to be able to swap between a tracing allocator and a performance allocator by linking different runtime libraries to provide custom_alloc.

All that said, their approach to do this is pretty awful; requiring manual placement new and destructor invocation is unpleasant (std::unique_ptr/std::shared_ptr could help a bit by providing custom deleter functors that do this work for you, but it's ugly even so). Typically if you need a custom allocator, you'd define appropriate overloads for operator new/operator delete. That way, avoiding stack allocation (for whatever reason) isn't nearly so unpleasant; you just replace logically stack allocated variables with std::unique_ptrs (created via std::make_unique), and your code remains fairly simple.

ShadowRanger