I am converting a bunch of code over to use C++-style casts (with the help of -Wold-style-cast). I'm not entirely sold on their use for primitive variables, but I'm new to C++-style casts in general.
One issue occurs in some endian-converting code. The current code looks like this:
#define REINTERPRET_VARIABLE(VAR,TYPE) (*((TYPE*)(&VAR)))
//...
uint16_t reverse(uint16_t val) { /*stuff to reverse uint16_t*/ }
int16_t reverse( int16_t val) {
    uint16_t temp = reverse(REINTERPRET_VARIABLE(val,uint16_t));
    return REINTERPRET_VARIABLE(temp,int16_t);
}
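(For concreteness, the uint16_t overload is just a byte swap. A minimal sketch of what its body might look like follows, assuming a plain shift-and-or swap; the exact body isn't what the question is about.)
#include <cstdint> // for uint16_t

// Illustrative only: one possible byte-swap body for the uint16_t overload.
uint16_t reverse(uint16_t val) {
    return static_cast<uint16_t>((val << 8) | (val >> 8));
}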
Now, endianness doesn't care about signedness. Therefore, to reverse an int16_t, we can treat it exactly like a uint16_t for the purposes of the reversal. This suggests code like this:
int16_t reverse( int16_t val) {
    return reinterpret_cast<int16_t>(reverse(reinterpret_cast<uint16_t>(val)));
}
However, as described in this and in particular this question, reinterpret_cast requires a reference or a pointer (unless it's casting to itself). This suggests:
int16_t reverse( int16_t val) {
    return reinterpret_cast<int16_t&>(reverse(reinterpret_cast<uint16_t&>(val)));
}
This doesn't work because, as my compiler tells me, the outer cast wants an lvalue. To fix this, you'd need to do something like:
int16_t reverse( int16_t val) {
    uint16_t temp = reverse(reinterpret_cast<uint16_t&>(val));
    return reinterpret_cast<int16_t&>(temp);
}
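That last version does compile, and as a quick sanity check (purely illustrative; the value is arbitrary), reversing twice gets the original value back:
#include <cassert>
#include <cstdint>

int main() {
    int16_t original = -2;                // bit pattern 0xFFFE
    int16_t swapped  = reverse(original); // bytes swapped: 0xFEFF
    assert(reverse(swapped) == original); // swapping twice restores the value
    return 0;
}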
This is not much different from the original code, and indeed the temporary variable exists for the same reason, but it raises four questions for me:
- Why is a temporary even necessary for a reinterpret_cast? I can understand a dumb compiler needing a temporary to support the pointer nastiness of REINTERPRET_VARIABLE, but reinterpret_cast is supposed to just reinterpret bits. Is this clashing with RVO or something?
- Will requiring that temporary incur a performance penalty, or is it likely that the compiler can figure out that the temporary really should just be the return value?
- The second reinterpret_cast looks like it's returning a reference. Since the function's return type isn't a reference, I'm pretty sure this is okay; the return value will be a copy, not a reference. However, I would still like to know what casting to a reference really even means. It is appropriate in this case, right?
- Are there any other performance implications I should be aware of? I'd guess that reinterpret_cast would be, if anything, faster, since the compiler doesn't need to figure out that the bits should be reinterpreted; I just tell it that they should.