I would argue that "converting between types" is what we should be looking at, not whether there is a cast or not. For example:
int a = 10;
float b = a;
will generate exactly the same code as:
int a = 10;
float b = (float)a;
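You can check this yourself by compiling and comparing the generated assembly. A minimal sketch (the function names are mine, just for illustration):

/* Compile with e.g. "gcc -O2 -S convert.c" and compare the output:
   both functions typically produce the same int-to-float conversion
   instruction (cvtsi2ss on x86-64), cast or no cast. */
float implicit_conv(int a) { return a; }        /* no cast           */
float explicit_conv(int a) { return (float)a; } /* cast, same result */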
This also applies to changing the size of a type, e.g.
char c = 'a';
int b = c;
this will "extend c
into an int
size from a single byte [using byte in the C sense, not 8-bit sense]", which will potentially add an extra instruction (or extra clockcycle(s) to the instruction used) above and beyond the datamovement itself.
Note that sometimes these conversions aren't at all obvious. On x86-64, a typical example is using int instead of unsigned int for array indices. Since pointers are 64-bit, the index needs to be converted to 64-bit. For an unsigned value that's trivial: just use the 64-bit version of the register the value is already in, since a 32-bit load operation zero-fills the top part of the register. But an int could be negative, so the compiler has to use the "sign extend this to 64 bits" instruction (movsxd on x86-64). This is typically not an issue where the index is calculated in a fixed loop and all values are known to be positive, but if you call a function where it is not clear whether the parameter is positive or negative, the compiler will definitely have to extend the value. Likewise if a function returns a value that is used as an index.
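As an illustration (my own sketch, not exhaustive), compare these two functions; with a typical x86-64 compiler the unsigned version can often use the incoming register value directly, while the int version needs the extra movsxd before the address calculation:

/* The index arrives as a 32-bit value; the pointer arithmetic is 64-bit. */
int load_u(const int *arr, unsigned int i) { return arr[i]; } /* upper bits zero, may use the register as-is */
int load_s(const int *arr, int i)          { return arr[i]; } /* could be negative, needs movsxd first       */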
However, any reasonably competent compiler will not mindlessly add instructions to convert something from its own type to itself. (With optimization turned off it might, but even minimal optimization should see that "we're converting from type X to type X, that doesn't mean anything, let's take it away".)
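For example (again my own sketch), a redundant same-type cast produces no extra instructions with any optimizing compiler I'm aware of:

/* int-to-int is a no-op "conversion"; with optimization on, both
   functions compile to an identical plain register move and return. */
int no_cast(int x)   { return x; }
int same_cast(int x) { return (int)x; } /* cast removed by the compiler */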
So, in short, the first example above does not add any extra penalty, but there are certainly cases where converting data from one type to another does add extra instructions and/or clock cycles to the code.