Consider a class like
class Element {
public:
    // approx. 2000 bytes
    BigStruct aLargeMember;

    // some method that does not depend on aLargeMember
    void someMethod();
};
Now assume that many instances of Element are created at runtime (e.g., 100,000,000 over the program's lifetime, with roughly 50,000 existing at the same time), and that very often only someMethod() is called, without any need to allocate memory for aLargeMember. (This illustrative example is derived from a nonlinear finite element code; class Element actually represents a finite element.)
Now my question: since aLargeMember is not required very often, and considering the large number of instances of Element, would it be advantageous to allocate aLargeMember dynamically? For instance:
class Element {
public:
    // BigStruct is still approx. 2000 bytes; Element itself now only stores a pointer
    std::unique_ptr<BigStruct> aLargeMember;

    // only called when aLargeMember is needed
    void initializeALargeMember() {
        aLargeMember = std::make_unique<BigStruct>();
    }

    // some method that does not depend on aLargeMember
    void someMethod();
};
Basically, this corresponds to recommendation 4 given in https://stackoverflow.com/a/36646563/4859499:
Only use new if there's a clear need, such as:
an especially large allocation that would eat up much of the stack (your OS/process will have "negotiated" a limit, usually in the 1-8+ megabyte range)
- if this is the only reason you're using dynamic allocation, and you do want the lifetime of the object tied to a scope in your function, you should use a local std::unique_ptr<> to manage the dynamic memory, and ensure it is released no matter how you leave the scope: by return, throw, break etc.. (You may also use a std::unique_ptr<> data member in a class/struct to manage any memory the object owns.)
So, my question is: is the heap approach considered bad practice in the present case? Or are there good arguments against using the heap here? Thank you in advance!