I don't understand what happens when I multiply a char variable and an int variable and put the result into a char.
For example:
char p = (98 * 'b');
printf("%d", p);
Why is it -124? I expected an overflow.
It is because char is signed on your system. A signed char has a range of -128 to 127.
So, when you get the value 0x2584, the most significant byte is discarded and you actually get the value 0x84.
Now, 0x84 in binary is:
1000 | 0100 = 0x84
The most significant bit is set, so the value is treated as negative.
So, if the Most Significant Bit is set we have:
1000 | 0000 = 0x80 = -128
1000 | 0001 = 0x81 = -127
1000 | 0010 = 0x82 = -126
1000 | 0011 = 0x83 = -125
1000 | 0100 = 0x84 = -124
Which is what you get.
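To see this truncation directly, here is a minimal sketch (assuming, as on your platform, that plain char is signed and 8 bits wide; the variable name full is just illustrative):

#include <stdio.h>

int main(void)
{
    int full = 98 * 'b';  /* the arithmetic happens in int: 9604 = 0x2584 */
    char p = (char)full;  /* only the low byte 0x84 survives the conversion */

    printf("full = %d (0x%X)\n", full, full);   /* 9604 (0x2584) */
    printf("p    = %d (0x%X)\n", p, p & 0xFF);  /* -124 (0x84) */
    return 0;
}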
If you are looking for the value 0x2584 then you need at least a 16-bit datatype, which would probably be a regular int on your platform.
If you are looking for the actual value of 0x84 then you need to use an unsigned char datatype. Something like:
unsigned char p = (98 * 'b');
printf("%d", p);
What happens "between the lines":

(98*'b')

The parentheses add nothing. 98 and 'b' are both of type int, because character constants have type int in C, not char. The multiplication is therefore carried out on type int and the result is of type int: 9604 decimal or 0x2584 hex.
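You can verify that character constants are int in C (in C++ they are char instead) with a quick sketch:

#include <stdio.h>

int main(void)
{
    /* In C, a character constant such as 'b' has type int */
    printf("sizeof 'b'  = %zu\n", sizeof 'b');  /* same as sizeof(int) in C */
    printf("sizeof(int) = %zu\n", sizeof(int));
    printf("98 * 'b'    = %d\n", 98 * 'b');     /* 9604, computed in int */
    return 0;
}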
You store the result in a char, which cannot hold such a large value. Further problematic is that the char type has implementation-defined signedness and could be either signed or unsigned; it is a non-portable type when it comes to storing values. See: Is char signed or unsigned by default?

In case char is unsigned, the conversion is well-defined and you get the value 0x84 hex / 132 dec.
In case char is signed, the conversion is implementation-defined. Most mainstream systems with two's complement will, however, do the equivalent of just translating 0x84 to a 1-byte two's complement number, meaning -124.
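If you are not sure which flavour your implementation uses, a minimal sketch that inspects it via limits.h:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed */
    if (CHAR_MIN < 0)
        printf("char is signed, range %d to %d\n", CHAR_MIN, CHAR_MAX);
    else
        printf("char is unsigned, range %d to %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}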
Passing a char to printf("%d", ...) is strictly speaking not well-defined, since %d means it expects an int. This happens to work "by accident" because parameters passed to variadic functions undergo an implicit type promotion ("default argument promotions"), and in the case of char it gets promoted to int. To print signed char integer values reliably, use %hhd.

Best practices:
- Avoid using char for anything else but to store characters. In particular, avoid using it for arithmetic and bitwise operations.
- If you need a small integer type, use uint8_t from stdint.h instead.
- Use the correct format specifiers for printf.
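Putting those recommendations together, a minimal sketch (assuming C99 or later for stdint.h; the variable names are just for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    signed char sc = (signed char)(98 * 'b'); /* implementation-defined result, typically -124 */
    uint8_t     u8 = (uint8_t)(98 * 'b');     /* well-defined: 9604 % 256 = 132 */

    printf("%hhd\n", sc); /* matching specifier for signed char */
    printf("%hhu\n", u8); /* matching specifier for unsigned char / uint8_t */
    return 0;
}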
> I don't understand what is happening when I multiply a char variable and an int variable and put the result into a char. Why is it -124?
Well, you got an overflow. The result of 98 * 'b' (which is the same as 98 * 98) is 9604, but a char must hold a number between -128 and 127 (when it is a signed char). The value overflowed the destination type, so you got something other than what you expected :) Strictly speaking, converting an out-of-range value to a signed char gives an implementation-defined result, which can be anything, even something that looks correct, but not this time.

In C, integer arithmetic and conversions can overflow without you being notified: no error and no exception is raised when it happens. You must be aware of the types you use, and you should be aware that 98 * 98 does not fit as a positive number in the range 0..127 (the positive part of a signed char's range). So you experienced an overflow and the compiler said nothing about it, which is exactly how C behaves.
Up to here, that is the C standard's perspective. The most probable thing to happen in practice is what you observed: the bits of the number that do not fit in a char are cut off. 9604 mod 256 = 132, and 132 is the unsigned representation of the two's complement signed char value -124 (256 - 124 = 132).