This code prints -56 and I'm not sure why:

#include <stdio.h>

int main(void)
{
    printf("%d\n", ((int)(char)200));
}
Because char implicitly is signed char on your platform. Do

printf("%d\n", ((int)(unsigned char)200));

to get 200.
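For completeness, here is a minimal self-contained sketch contrasting the two casts. The -56 output assumes a platform where plain char is a signed 8-bit type, as on your system:

#include <stdio.h>

int main(void)
{
    /* Plain char is signed here, so 200 wraps to -56 before widening to int. */
    printf("%d\n", (int)(char)200);          /* prints -56 on such a platform */

    /* unsigned char keeps the value 200, which int represents exactly. */
    printf("%d\n", (int)(unsigned char)200); /* prints 200 */

    return 0;
}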
The type char is inconsistent with the rest of the C language, since char can be either signed or unsigned. The C standard makes this implementation-defined, meaning that the compiler may implement it either way. Your particular compiler apparently implements char as equal to signed char.

Because of the above, char should never be used for anything but character strings. Using it for arithmetic is bad practice, because such code relies on implementation-defined behavior.
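If you are unsure which choice your compiler made, a small sketch using the standard <limits.h> macros can tell you:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed. */
    if (CHAR_MIN < 0)
        printf("plain char is signed, range %d to %d\n", CHAR_MIN, CHAR_MAX);
    else
        printf("plain char is unsigned, range %d to %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}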
The type int is always equal to signed int; this is required by the C standard. Thus, on your specific system, the code you have written is equivalent to (signed int)(signed char)200.
When you attempt to store the value 200 in a variable of type signed char, it does not fit and gets interpreted as -56 on a two's complement system (200 - 256 = -56). When you then cast a signed char containing -56 to signed int, you still get -56.
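A sketch spelling out those two steps with named intermediates; the -56 value assumes an 8-bit, two's complement signed char, where the out-of-range conversion wraps (the standard only says the result is implementation-defined):

#include <stdio.h>

int main(void)
{
    signed char c = (signed char)200; /* does not fit: 200 - 256 = -56 on this kind of system */
    signed int  i = c;                /* value-preserving widening, still -56 */
    printf("%d\n", i);                /* prints -56 */
    return 0;
}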
The number 200 is out of the value range for signed char. It wraps around into the negative value range in a two's complement way.

Obviously, the (implicit) signedness of char is defined to be signed on your platform. Note that this is not mandated by the standard; if it were unsigned, your program would print 200. For further reading on this topic:
Obviously, the size of the char type is 8 bits on your platform. This, too, is not fully fixed by the language specification; the standard only requires a char to be at least 8 bits wide (CHAR_BIT >= 8). If it were larger, the program might very well print 200, just as you expected. For further reading on this topic:
Also note that the standard does not require integer types to use a two's complement binary representation. On a different platform, the cast might even produce a completely different result. For further reading:
char is signed, and int is signed (by default on your platform).
(signed char)11001000 = -56 decimal
(signed int)0000000011001000 = 200 decimal
Have a look at Signed number representations
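As an illustration of the bit pattern involved, here is a small sketch that prints the 8 bits of the char and the values they represent; it assumes an 8-bit, two's complement char as on your platform:

#include <stdio.h>

int main(void)
{
    signed char c = (signed char)200;      /* bit pattern 11001000, value -56 here */
    unsigned char bits = (unsigned char)c; /* same bits, value 200 */
    int b;

    /* Print the 8-bit pattern, most significant bit first. */
    for (b = 7; b >= 0; b--)
        putchar(((bits >> b) & 1) ? '1' : '0');
    printf(" = %d as signed char, %u as unsigned char\n", c, (unsigned int)bits);
    return 0;
}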