
This code prints -56 and I'm not sure why

#include <stdio.h>

int main(void)
{
    printf("%d\n", ((int)(char)200));
    return 0;
}
reto

5 Answers

4

Because char is implicitly signed char on your platform.

Do

printf("%d\n", ((int)(unsigned char)200));

to get

200
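
For comparison, here is a minimal sketch that runs both casts side by side (assuming an 8-bit char that happens to be signed on this platform):

#include <stdio.h>

int main(void)
{
    /* On a platform where plain char is signed and 8 bits wide,
       (char)200 wraps to -56; (unsigned char)200 is always 200. */
    printf("%d\n", (int)(char)200);          /* likely -56 here */
    printf("%d\n", (int)(unsigned char)200); /* 200 */
    return 0;
}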
alk
3
  • The type char is inconsistent with the rest of the C language, since char can be either signed or unsigned. The C standard leaves this implementation-defined, meaning the compiler may implement it either way. Your particular compiler seems to implement char as equal to signed char (a small check of this is sketched after this list).

  • Because of the above, char should never be used for anything but strings. It is bad practice to use it for arithmetic; such code relies on implementation-defined behavior.

  • The type int is always equal to signed int. This is required by the C standard.

  • Thus on your specific system, the code you have written is equivalent to (signed int)(signed char)200.

  • When you attempt to store the value 200 in a variable equivalent to signed char, it will overflow and get interpreted as -56 (on a two's complement system).

  • When you cast a signed char containing -56 to a signed int, you get -56.
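
As a small check of the points above, the sketch below uses CHAR_MIN from <limits.h> to report whether plain char is signed on a given implementation (the output is implementation-defined by design):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is negative when plain char is signed, 0 when it is unsigned. */
    if (CHAR_MIN < 0)
        printf("plain char is signed here\n");
    else
        printf("plain char is unsigned here\n");

    /* With a signed 8-bit char on a two's complement system,
       200 does not fit and comes out as 200 - 256 = -56. */
    printf("(int)(char)200 = %d\n", (int)(char)200);
    return 0;
}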

Lundin
1

The number 200 is out of the value range for signed char. It wraps around into the negative value range in a two's complement way.

Obviously the (implicit) signedness of char is defined to be signed on your platform. Note that this is not mandated by the standard. If it were unsigned, your program would print 200. For further reading on this topic:

Obviously, the size of the char type is 8 bits on your platform. This size is also not fixed by the language specification; it is implementation-defined, with a required minimum of 8 bits. If it were larger, the program might very well print 200, just as you expected. For further reading on this topic:

Also note that it is not specified by the standard that integer types must use a two's complement binary representation. On a different platform, the cast may even produce a completely different result. For further reading:
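
To see these implementation-defined properties on a given platform, here is a short sketch using the <limits.h> constants:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Width and value range of plain char on this implementation. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);

    /* If CHAR_MAX were at least 200, the program in the question would print 200. */
    printf("(int)(char)200 = %d\n", (int)(char)200);
    return 0;
}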

moooeeeep
    But the OP is using `char`, not `signed char`. The signedness of `char` is implementation-defined, so this answer isn't complete. – Lundin Dec 12 '13 at 12:34
  • I do not think that http://stackoverflow.com/q/17045231/1025391 should be quoted for the explanation “the cast may even produce a completely different result”. The linked question is about representations. The semantics of the cast to `T1` of a value of type `T2` do not have to have anything to do with the representation of either `T1` or `T2`. – Pascal Cuoq Dec 12 '13 at 15:48
  • @PascalCuoq I found a better reference now, I think. – moooeeeep Dec 12 '13 at 18:58
1

char is signed, and int is signed (by default on your platform).

(signed char)11001000 = -56 decimal

(signed int)0000000011001000 = 200 decimal

Have a look at Signed number representations

parrowdice
    Not at all. char is signed, so when the compiler casts to an int, it perform sign extension, so 11001000 is converted to 11111111111111111111111111001000 (for a 32-bit int) – mcleod_ideafix Dec 12 '13 at 12:34
    This is still incorrect. `char` is signed on his platform and so is `int`. – NaotaSang Dec 12 '13 at 12:36
  • Also, the hex codes are complete nonsense. Are you trying to write BCD or something? Or "BCH" I suppose.. binary-coded hexadecimal? – Lundin Dec 12 '13 at 12:48
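
A quick way to observe the sign extension described in the comments above (a sketch, assuming a signed 8-bit char, a 32-bit int and two's complement):

#include <stdio.h>

int main(void)
{
    char c = (char)200;  /* bit pattern 11001000, value -56 on such a platform */
    int  i = c;          /* sign-extended: all upper bits become 1             */
    printf("c = %d, i = %d\n", c, i);         /* -56, -56   */
    printf("i as hex: 0x%X\n", (unsigned)i);  /* 0xFFFFFFC8 */
    return 0;
}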
0

char is 8 bits and signed on your platform, with a range of -128 to 127, so 200 is out of that range.
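
A minimal illustration of that wraparound, assuming the usual two's complement behavior:

#include <stdio.h>

int main(void)
{
    /* 200 exceeds the 127 maximum of a signed 8-bit char;
       with two's complement wraparound it becomes 200 - 256 = -56. */
    printf("%d\n", (int)(char)200);  /* -56 here */
    printf("%d\n", 200 - 256);       /* -56      */
    return 0;
}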