C++ is strongly performance oriented. So wherever there is a use case in which you can gain performance, C++ will allow you to do it. Consider std::vector: sure, there is the safe element access via the at function, which does range checking for you. But if you know that your indices are in range (e.g. in a for loop), these range checks are just dead weight. So you additionally get the (less safe) operator[], which simply omits these checks.
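For illustration, a minimal sketch of the two access paths (the vector contents are just example values):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v{10, 20, 30};

    std::cout << v.at(1) << '\n'; // bounds-checked: throws std::out_of_range for a bad index
    std::cout << v[1] << '\n';    // unchecked: an out-of-range index is undefined behaviour

    // Inside this loop the index is known to be valid, so the check in at() would be dead weight:
    for (std::size_t i = 0; i < v.size(); ++i)
        std::cout << v[i] << ' ';
}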
Similarly, if you have a pointer of type Base, it could, in reality, point to an object of type Derived. If in doubt, you would dynamic_cast from Base* to Derived*, but that comes with some overhead. If you know 100% for sure (by whatever means...) what the subclass actually is, would you want that overhead? As there is a natural (even implicit!) conversion from Derived* to Base*, we want some low-cost way back.
On the other hand, there is no such natural cast between pointers of totally unrelated types (such as char and int, or two unrelated classes) and thus no such low-cost way back (compared to dynamic_cast, which isn't available either, of course). The only way to convert between them is reinterpret_cast.
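For example (a sketch; inspecting an object's bytes through unsigned char* is one of the few well-defined uses of such a cast):

#include <cstddef>
#include <cstdio>

int main()
{
    int i = 0x12345678;

    // Neither static_cast nor dynamic_cast exists between these unrelated pointer types:
    unsigned char* bytes = reinterpret_cast<unsigned char*>(&i);

    for (std::size_t k = 0; k < sizeof i; ++k) // print the object representation byte by byte
        std::printf("%02x ", bytes[k]);
}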
Actually, reinterpret_cast comes at no cost either; it just reinterprets the pointer as a different type – with all the risks! And a reinterpret_cast can even fail where a static_cast would have been required (which is exactly the answer to "why not just always use ..."):
class A { int a; };
class B { };
class C : public A, public B { };

B* b = new C();                  // implicit upcast: the pointer is adjusted to the B subobject
C* c = reinterpret_cast<C*>(b);  // FAILING!!! the pointer value is reused as-is, with no adjustment back
In terms of memory layout, C looks like this (even if that is hidden away from you):
class C
{
A baseA;
B baseB; // this is what pointer b will point to!
};
Obviously, we get an offset when casting between C* and B* (in either direction), which is accounted for by both static_cast and dynamic_cast, but not by reinterpret_cast...
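A sketch making the adjustment visible; B is given a data member here so its subobject is guaranteed to sit at a non-zero offset (the printed addresses themselves are implementation dependent):

#include <iostream>

class A { int a = 0; };
class B { public: int b = 0; };
class C : public A, public B { };

int main()
{
    C  c;
    C* pc = &c;

    B* viaStatic      = static_cast<B*>(pc);      // adjusted: points to the B subobject inside c
    B* viaReinterpret = reinterpret_cast<B*>(pc); // unadjusted: same address as pc, i.e. the A subobject

    std::cout << static_cast<void*>(pc) << '\n'
              << static_cast<void*>(viaStatic) << '\n'       // typically pc plus sizeof(A)
              << static_cast<void*>(viaReinterpret) << '\n'; // same value as pc

    C* roundTrip = static_cast<C*>(viaStatic); // static_cast applies the offset in the other direction, too
    std::cout << (roundTrip == pc) << '\n';    // prints 1
}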