int main() {
    int a = 100000;
    char b;
    b = a;
}
I assumed that this code wouldn't compile, but it does. Could someone give me a solid explanation of how C handles implicit type conversion?
C is not type safe. It relies heavily on the programmer and assumes they know what they are doing.
This assignment is implementation-defined and will typically assign the least significant byte of the `int` to the `char` variable.
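A minimal sketch of what that usually looks like in practice (the stored value is implementation-defined; the comments assume a two's-complement platform with an 8-bit signed `char`):

#include <stdio.h>

int main(void) {
    int a = 100000;   /* 0x000186A0 in hex */
    char b = a;       /* implementation-defined: commonly keeps the low byte, 0xA0 */

    /* With a signed 8-bit char, the bit pattern 0xA0 (160) is
       reinterpreted as 160 - 256 = -96 on a typical platform. */
    printf("a = %d (0x%X)\n", a, (unsigned int)a);
    printf("b = %d\n", b);   /* commonly prints -96 */
    return 0;
}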
Assuming an `int` on your host architecture is 32 bits wide, and that your `int` is signed by default, the range an `int` can store is

[-2^31, 2^31 - 1] = [-2147483648, 2147483647]
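You can check the actual limits on your platform with the standard macros from `<limits.h>` (the printed values depend on your implementation; with a 32-bit `int` they match the range above):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* INT_MIN and INT_MAX describe the implementation's actual int range. */
    printf("int range: [%d, %d]\n", INT_MIN, INT_MAX);
    return 0;
}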
Compare that to the range an `unsigned char` can store (it depends on the host architecture, but a `char` is usually 8 bits long):

[0, 255]
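The corresponding `char` limits are also in `<limits.h>`; note that whether plain `char` is signed or unsigned is itself implementation-defined, which is why `CHAR_MIN` may be 0 on one platform and -128 on another:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Plain char: signed or unsigned is implementation-defined. */
    printf("char range:          [%d, %d]\n", CHAR_MIN, CHAR_MAX);
    /* unsigned char: always [0, UCHAR_MAX], typically [0, 255]. */
    printf("unsigned char range: [0, %u]\n", (unsigned int)UCHAR_MAX);
    return 0;
}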
You can think of `char` as an integer type limited to that range. This can be confusing, since `char` is most commonly used to store characters.
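A small example of treating `char` as a tiny integer; the character constants assume an ASCII-compatible encoding:

#include <stdio.h>

int main(void) {
    char c = 'A';    /* 'A' is just the integer 65 in ASCII */
    char d = c + 1;  /* arithmetic works as with any integer type */

    printf("%c has value %d\n", c, c);   /* A has value 65 */
    printf("%c has value %d\n", d, d);   /* B has value 66 */
    return 0;
}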