Lately, I learned that there is a consensus among C++ programmers that the `new`, `delete` and `delete[]` operators should be avoided whenever possible, as already discussed here, here or here. While searching, I even stumbled upon an April Fools' joke stating that these operators would become deprecated in C++20.
I happen to write and maintain a C/C++ program, written in those languages so that it can reuse useful libraries and classes written by other programmers. Since it must run in quite limited environments (i.e., old Linux distributions with the bare minimum in terms of installed programs), I cannot rely on features brought by C++11 and later versions (such as smart pointers), and so far I have stuck to a mix of C and Java programming habits while expanding my program. Among other things, I have quite often used dynamic allocation with `new` and `delete`, which of course sounds like a problem.
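To illustrate the kind of pattern I mean (the class and names are made up for the example):

```cpp
#include <string>

// Hypothetical example of my current, Java-like pattern: a long-lived
// object allocated manually with new and released manually with delete.
class Record {
public:
    explicit Record(const std::string& name) : name_(name) {}
private:
    std::string name_;
};

int main() {
    Record* record = new Record("example"); // manual allocation
    // ... pass the raw pointer around, use it for a long time ...
    delete record;                          // manual deallocation
    return 0;
}
```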
To ease the maintenance of my code by future programmer(s), I would like to minimize dynamic allocation with these keywords. The problem is that my program has to manage some quite large data structures used for (almost) the entire execution. As a consequence, I struggle to figure out why I should avoid dynamic allocation in these situations.
To simplify, suppose I have a data structure (modeled as an object) weighing 10 megabytes which is used for the entire execution of the program and whose size in memory can increase over time. My questions are the following:
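Concretely, think of something along these lines (the class and the numbers are hypothetical, chosen only for illustration):

```cpp
#include <vector>

// Hypothetical stand-in for my real structure: it lives for the
// whole run of the program and can keep growing over time.
class BigTable {
public:
    BigTable() { data_.reserve(10 * 1024 * 1024); }   // roughly 10 MB upfront
    void append(char byte) { data_.push_back(byte); } // may grow beyond that
private:
    std::vector<char> data_;
};
```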
1. Is dynamic allocation of the object with `new` still a bad practice in this particular context? What are the better alternatives?
2. Suppose now that I instantiate somewhere in my program a new object without using `new`, do some operations on it which could slightly change its size, then use a method to insert it into my data structure. How does it work, memory-wise, if automatic allocation (as mentioned here) is used? See the sketch after this list for what I have in mind.
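To make the second question concrete, here is the kind of code I mean (simplified, with hypothetical names):

```cpp
#include <vector>

// An element whose memory footprint can change after construction:
// sizeof(Item) stays fixed, but the vector inside it allocates on the heap.
struct Item {
    std::vector<int> values;
};

// Hypothetical container standing in for my large data structure.
class ItemStore {
public:
    void insert(const Item& item) { items_.push_back(item); } // copies the item
private:
    std::vector<Item> items_;
};

int main() {
    ItemStore store;           // automatic allocation, no new
    Item item;                 // automatic allocation, no new
    item.values.push_back(42); // operations that change how much memory it uses
    store.insert(item);        // what exactly happens memory-wise here?
    return 0;
}
```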
Many thanks in advance.