It came to my attention that for simple operations on 8-bit variables, the variables get converted to 32-bit ints before the operation is performed, and then converted back to 8-bit variables.
As an example, consider this C++ program:
#include <iostream>
#include <cstdint>

int main( void )
{
    uint8_t a = 1;
    uint8_t b = 2;
    std::cout << "sizeof(a) = " << sizeof( a ) << std::endl;
    std::cout << "sizeof(b) = " << sizeof( b ) << std::endl;
    std::cout << "sizeof(a+b) = " << sizeof( a+b ) << std::endl;
    return 0;
}
It produces the following output:
sizeof(a) = 1
sizeof(b) = 1
sizeof(a+b) = 4
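If I understand correctly, the same promotion should apply to any integer type narrower than int; here is a quick (untested) variation to check, assuming a platform where int is 32 bits:

#include <iostream>
#include <cstdint>

int main( void )
{
    uint16_t x = 1;
    uint16_t y = 2;
    // Presumably both operands are promoted to int before the
    // addition, so this should print 4 as well.
    std::cout << "sizeof(x+y) = " << sizeof( x+y ) << std::endl;
    return 0;
}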
So, for the uint8_t case, we can infer that what actually happens is:
uint8_t c = (uint8_t)((int)(a) + (int)(b));
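Consistent with this, if my reading of the C++11 narrowing rules is right, brace-initialization should even reject the result, since an int is being squeezed back into a uint8_t:

#include <cstdint>

int main( void )
{
    uint8_t a = 1;
    uint8_t b = 2;
    uint8_t c{ a + b };  // should be rejected: narrowing 'int' -> 'uint8_t'
    uint8_t d = a + b;   // compiles: implicit conversion back to uint8_t
    return 0;
}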
Apparently, this behavior is part of the C specification, as mentioned in this forum.
Additionally, in Visual Studio 2013, writing
auto c = a + b;
and hovering the mouse pointer over c indicates that the type is int.
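To double-check this outside the IDE, here is a compile-time check I would expect to pass (assuming the promotion to int is indeed what happens):

#include <cstdint>
#include <type_traits>

int main( void )
{
    uint8_t a = 1;
    uint8_t b = 2;
    // The hover-tooltip observation, expressed as a compile-time assertion:
    static_assert( std::is_same<decltype( a + b ), int>::value,
                   "a + b should have type int after integer promotion" );
    return 0;
}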
The questions are:
- Why is the conversion needed, and why is it in the spec? (If possible, I'd also like external sources of information to read more on the subject, such as MSDN or cppreference.)
- How does this impact performance? (A minimal function to inspect is sketched below.)
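For the performance question, I imagine one way to investigate is to compile a minimal function and inspect the generated assembly (e.g. with g++ -O2 -S); something like:

#include <cstdint>

// Hypothetical candidate for inspecting the generated code:
uint8_t add( uint8_t a, uint8_t b )
{
    return static_cast<uint8_t>( a + b );  // promotion to int, then truncation back
}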