Firstly, having to downcast in such a situation is a clear sign of bad design that violates the Liskov substitution principle. To fix it within the OOP domain, instead of downcasting you should call a virtual function on each of your `IEntity` objects, which would then do one thing for a `Node` and another for a `Spatial`.
Secondly, if you aim for performance, you should abandon OOP in the performance-critical code, i.e. no calls to virtual functions. You should also try to use the memory cache as effectively as possible, because access to non-cached memory is usually the bottleneck of most performance-critical code. How to do that? Google for "data-oriented design" or "data-oriented programming". This is often implemented with something called an "entity component system", typically in computer games. In your case you will probably need to keep your `Node`s and `Spatial`s in separate vectors, stored by value rather than by reference or pointer to `IEntity` (use `vector<Node>` and `vector<Spatial>` instead of `vector<IEntity*>`), and then traverse them linearly or in a predictable/regular order so the hardware data prefetcher can help you, as sketched below. This will give you most of the speed you can achieve. Probably only after that should you start fine-tuning the algorithms themselves... This is the typical scenario, so I assume it is also your case.
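Here is a minimal sketch of that by-value layout, assuming simplified, hypothetical fields for `Node` and `Spatial` (the real members come from your code):

```cpp
#include <vector>

// Plain-data stand-ins for the question's Node and Spatial:
// no virtual functions, stored contiguously by value.
struct Node    { int   id;      /* ... */ };
struct Spatial { float x, y, z; /* ... */ };

struct Scene {
    std::vector<Node>    nodes;     // vector<Node>    instead of vector<IEntity*>
    std::vector<Spatial> spatials;  // vector<Spatial> instead of vector<IEntity*>
};

void updateScene(Scene& scene) {
    // Linear, predictable traversal: each loop walks one contiguous array,
    // so the hardware prefetcher can stream the data into the cache.
    for (Node& n : scene.nodes) {
        // Node-specific work, no virtual dispatch
    }
    for (Spatial& s : scene.spatials) {
        // Spatial-specific work, no virtual dispatch
    }
}
```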
Btw., performance concerns (as in your case) aside, there is nothing wrong with storing pointers in STL containers. Or is there?
UPDATE: As you mentioned below, you probably need to keep the objects in a tree structure for algorithmic reasons. In that case my advice about keeping objects in containers by value might not be that useful or straightforward to implement. Anyway, and in any case, you must profile your code first and only then try to optimize and fiddle with caching etc. However, my first note about the bad design indicated by the need to downcast is still valid. :)