
Lately I have noticed speed differences between C and C++, and sometimes it is hard to predict which of the two will be faster in a given situation.

I know the STL has simplified a lot of things since its introduction, but does that convenience come at a cost in speed or memory?

For example, there are various ways to define stacks, queues, binary trees, graphs, etc. by hand using structs with pointers, though those implementations are a bit more fiddly. Another way is simply to use an STL vector, which can grow and shrink in size as needed, and there are also ready-made templates for maps, queues, and so on.
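Roughly, the two approaches I have in mind look like this (just a sketch to illustrate what I mean, not real code from my project):

```cpp
#include <vector>

// Hand-rolled C-style stack: each node is allocated separately.
struct Node {
    int value;
    Node* next;
};

void push(Node*& top, int value) {
    top = new Node{value, top};   // one heap allocation per element
}

int pop(Node*& top) {
    Node* old = top;
    int value = old->value;
    top = old->next;
    delete old;
    return value;
}

// STL equivalent: std::vector grows its buffer geometrically,
// so most pushes do not touch the heap at all.
void stl_version() {
    std::vector<int> stack;
    stack.push_back(42);      // push
    int v = stack.back();     // top
    stack.pop_back();         // pop
    (void)v;
}
```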

My question is, which implementation do you think is more efficient in terms of time and memory complexity and why?

Mihai Jiplea
  • You are going to have to be a _lot_ more specific. – Nemo Dec 27 '12 at 17:05
  • It's impossible to answer this. It depends on the implementation and the data structure in question. But on most mainstream implementations, STL data structures are very efficient. There's some room for debate (for example, std::vector might sometimes not perform as well as a custom C dynamic array which uses `realloc` for resizing), but in general it's pretty hard to beat the STL in terms of efficiency. – Charles Salvia Dec 27 '12 at 17:08
  • Stdlib data structures vs "Roll Your Own" is what I read: not quite a duplicate, but see http://stackoverflow.com/questions/2253690/what-makes-stl-fast – Caribou Dec 27 '12 at 17:09
  • Sounds like homework. Have a code sample? – brian beuning Dec 27 '12 at 17:29

2 Answers


The C++ standard library has been developed, tuned for performance, and debugged over many years, and with compiler optimizations steadily improving you can get very good performance out of the box from the standard containers. Probably the biggest performance cost is going to the heap for memory, which can be reduced in some cases, for example by calling reserve on a vector before filling it.
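For instance, if the final size is known up front, a single reserve call avoids the repeated reallocations that would otherwise happen as the vector grows (a minimal sketch):

```cpp
#include <cstddef>
#include <vector>

std::vector<int> make_squares(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                 // one allocation up front instead of several during growth
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<int>(i * i));
    return v;
}
```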

If you're writing C++, just use the containers and algorithms available to you, as dictated by the needs of your software. Then, if you have performance problems, profile your code (and most likely the bottleneck still won't be the standard library).

Mark B

The key here is whether you use an "intrusive" solution, where the links for a linked list are stored inside each node, or a separate node structure that holds a pointer/reference to the data stored somewhere else. There will be some performance difference between the two, and it can go either way: the intrusive solution is not necessarily always best.
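A rough sketch of the two layouts (illustrative names only):

```cpp
// Intrusive: the links live inside the object itself, so the node and
// its data share one allocation.
struct Task {
    Task* next = nullptr;
    Task* prev = nullptr;
    int   payload = 0;
};

// Non-intrusive (std::list-style): a separate node structure holds the
// links plus the data, or a pointer to data that lives elsewhere.
template <typename T>
struct ListNode {
    ListNode* next = nullptr;
    ListNode* prev = nullptr;
    T         data;               // or T* if the data is stored elsewhere
};
```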

The TYPICAL case where people get it wrong is when they compare storing a separately allocated object in a container of some sort against storing a copy of the object directly in the container. If the container has to "grow" by reallocating, storing large copies will cause performance problems. But that is just the wrong choice of thing to store, or the wrong kind of container for that type of object.
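For instance (a sketch with a made-up Record type), compare storing copies against storing handles in a container that may reallocate:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Record {
    std::string name;
    std::vector<double> samples;   // relatively expensive to copy or move around
};

// Stores full objects: every reallocation has to move or copy each
// Record, which can dominate the cost for large elements.
std::vector<Record> by_value;

// Stores handles: reallocation only shuffles pointers, and the Records
// themselves never move.
std::vector<std::unique_ptr<Record>> by_pointer;
```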

In most cases, the choice of HOW you do something (e.g. whether it is a tree or a vector) is where you get the 100x improvement. Writing something specific for this particular kind of storage will give you a 5-20% improvement, depending on how well the original code and the optimized code are written. Now, assuming it's not part of the inner loop of your rendering code in a game, or the scheduler or memory-allocation code in an OS kernel, 5% is not worth having, and for the vast majority of code a 20% improvement is probably not much value either [for one part of the system's functionality - if you are storing stuff in a container, hopefully you do more with it than just store it and walk back and forth over the container's contents, right?]

Before spending any time optimizing code, there should be some decent analysis of where the code is spending its time. Then you can attack making it better. The first step is to ask "what is the behaviour of the algorithm here?" Are we spending O(n^2) on sorting something when we could use a different method that is much faster? Are we inserting/removing things in the middle of a large array when a list, stack or queue would be a better choice? Are we writing many small chunks of data to disk when we could write a smaller number of large blocks?
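As a toy illustration of that kind of algorithmic choice (a hypothetical example, not taken from the question): checking for duplicates with nested loops versus sorting first:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// O(n^2): compare every pair of elements.
bool has_duplicate_quadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n log n): sort a copy, then scan adjacent elements once.
bool has_duplicate_sorted(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}
```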

These sorts of questions are what give you the 5x, 10x, 100x improvements that make a REAL difference. Of course, shaving 5% off something can be worth doing. Sometimes shaving 0.5% off is worth doing, if that is what makes your game run at 200fps instead of 100fps, because it is that half a percent that tips you over the frame-rate boundary. But in general, if a change doesn't at least double the speed, you should probably spend the time elsewhere.

Mats Petersson