Code
char a;
a = 0xf1;
printf("%x\n", a);
Output
fffffff1
printf() shows 4 bytes, even though a holds exactly one byte.
What is the reason for this misbehavior?
How can I correct it?
printf is a variable argument function, so the compiler does its best but cannot check strict compliance between format specifier and argument type.
Here you're passing a char with a %x (integer, hex) format specifier. So the value is promoted to a signed integer; since char is signed on most systems (and certainly on yours), 0xf1 is greater than 127 and therefore a negative char, and the promotion sign-extends it.
Either:
- change a to int (simplest)
- change a to unsigned char (as suggested by BLUEPIXY); that takes care of the sign in the promotion
- use %hhx as stated in the various docs (note that on my gcc 6.2.1 compiler hhx is not recognized, even if hx is)
All three fixes are sketched after the compiler warning below.
Note that the compiler warns you before reaching printf that you have a problem:
gcc -Wall -Wpedantic test.c
test.c: In function 'main':
test.c:6:5: warning: overflow in implicit constant conversion [-Woverflow]
a = 0xf1;
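A minimal sketch of the three fixes, assuming a signed 8-bit char and a C99 compiler (hhx is a C99 length modifier, so very old compilers may reject it):
#include <stdio.h>

int main(void)
{
    int a_int = 0xf1;           /* fix 1: 0xf1 fits in an int with no conversion */
    unsigned char a_uc = 0xf1;  /* fix 2: unsigned char keeps the value 241 */
    char a = 0xf1;              /* original: still draws the -Woverflow warning on gcc */

    printf("%x\n", a_int);      /* prints f1 */
    printf("%x\n", a_uc);       /* promoted to int 241, prints f1 */
    printf("%hhx\n", a);        /* fix 3: prints f1 where hhx is supported */
    return 0;
}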
What is the reason for this misbehavior?
This question looks strangely similar to another I have answered; it even contains a similar value (0xfffffff1). In that answer, I provide some information required to understand what conversion happens when you pass a small value (such as a char) to a variadic function such as printf. There's no point repeating that information here.
If you inspect CHAR_MIN and CHAR_MAX from <limits.h>, you're likely to find that your char type is signed, and so 0xf1 does not fit as an integer value inside a char.
Instead, it ends up being converted in an implementation-defined manner, which for the majority of us means one of the high-order bits becomes the sign bit. When such a value is promoted to int (in order to be passed to printf), sign extension occurs to preserve the value: just as a char with a value of -1 is converted to an int with a value of -1, the underlying representation in your example is likely transformed from 0xf1 to 0xfffffff1.
printf("CHAR_MIN .. CHAR_MAX: %d .. %d\n", CHAR_MIN, CHAR_MAX);
printf("Does %d fit? %s\n", '\xFF', '\xFF' >= CHAR_MIN && '\xFF' <= CHAR_MAX ? "Yes!"
: "No!");
printf("%d %X\n", (char) -1, (char) -1); // Both of these get converted to int
printf("%d %X\n", -1, -1); // ... and so are equivalent to these
How can I correct it?
Declare a with a type that can fit the value 0xf1, for example int or unsigned char.
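For example, a minimal sketch of the unsigned char option (assuming an 8-bit char, so 0xf1 is the value 241):
#include <stdio.h>

int main(void)
{
    unsigned char a = 0xf1;  /* 241 fits; no sign bit involved */
    printf("%x\n", a);       /* promoted to int 241, prints f1 */
    return 0;
}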
You should use int a instead of char a. On your system char is signed and only 1 byte, so it can store values from -128 to 127, and the value 0xf1 (241) does not fit. int is at least 2 (usually 4) bytes, so it can hold 0xf1 without any conversion, which makes int the better choice here.