So my code is using a tr1::unordered_map<int, ContainerA>, where ContainerA is something like this:
class ContainerA
{
    int*         _M_type;
    unsigned int _M_check;
    int          _M_test;
    unsigned int _M_any;
    index        _M_index;      // 'index' is a user-defined type
    index        _M_newindex;
    objectB      _M_cells[10];  // 'objectB' is a user-defined type
    vector<int>  _M_mapping[10];
};
The number of entries in my unordered_map is on the order of millions, and at every timestep tens of thousands (if not hundreds of thousands) of entries get deleted and added. It seems that much of my computing time is spent allocating each ContainerA instance individually.
A solution I propose for speeding things up is to allocate an estimated number of ContainerA objects up front, and to replace the unordered_map of ContainerA objects with an unordered_map of pointers, tr1::unordered_map<int, ContainerA*>.
This seems like a good idea to me; however, after doing some research, this approach does not seem to be common, which is why I am wondering if I am missing something.
Are there any drawbacks to this approach apart from the additional memory overhead of the pointers?
Edit:
As pointed out in the comments below, the problem might lie in my memory allocation. I allocate my memory as follows :
I construct a ContainerA object using its constructor, ContainerA obja(int* type, int test). In this phase, the ContainerA constructor lets the objectB array and the vector<int> array get default constructed.
After constructing my ContainerA, all of my vector<int>s and objectBs need to allocate some memory on the heap. This is probably the time-consuming phase.
It would probably be possible to avoid deallocating and reallocating memory without switching to an unordered_map of pointers, by simply swapping memory contents as suggested. However, I would then need some way of keeping track of which objects are no longer in use, and that sounds more tedious than switching to a map of pointers.