I tried coming up with a small, simple program to understand how different kinds of assignments work in C/C++ and what is really happening electronically. Consider the program below:
#include <iostream>

int main() {
    int i = 32769;
    short s = i;
    std::cout << s << std::endl;
    return 0;
}
When the above program is executed, it prints -32767. I am trying to reason about why the program outputs this number, and I am not sure if my reasoning is correct, so I am looking for some confirmation here, and also a more elaborate understanding of why this happens:
An int in C/C++ is typically allocated 4 bytes of storage in memory (the standard only guarantees at least 16 bits), and we are assigning the integer variable the value 32769, which in binary is 2^15 + 2^0, i.e. 1000 0000 0000 0001.
A short in C/C++ is typically allocated 2 bytes of storage in memory, and we assign the value of the integer variable to the short. The interesting thing here is that bit 15 of the integer representation is set; when the assignment happens, the 2 most significant bytes of the integer are discarded, and the runtime system thinks it is trying to store a -1 in the short variable.
As the runtime system thinks it is trying to store a -1 (because the most significant bit of the short variable is set), it tries to store the 2's complement notation of the number -1 instead, which is 1111 1111 1111 1111 (the 16th bit being the sign bit), and this number evaluates to -(2^15 - 1).
I am just trying to see whether my understanding of what is going on electronically is correct, and to gain a better understanding if I am not!