A `char` is 8 bits. This means it can represent 2^8 = 256 unique values. An `unsigned char` represents 0 to 255, and a `signed char` represents -128 to 127 (the standard leaves some latitude here, but this is the typical two's-complement platform implementation). Thus, assigning 130 to a `char` is out of range by 2, and the value overflows and wraps to -126 when it is interpreted as a `signed char`. The compiler sees 130 as an integer and makes an implicit conversion from `int` to `char`. On most platforms an `int` is 32 bits and the sign bit is the MSB; the value 130 easily fits into the low 8 bits, but the compiler then has to chop off 24 bits to squeeze it into a `char`. When this happens, and you've told the compiler you want a signed `char`, the MSB of those 8 bits actually represents -128. Uh oh! You now have `1000 0010` in memory, which when interpreted as a `signed char` is -128 + 2 = -126. My linter on my platform screams about this.
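A minimal sketch of that wrap, assuming a typical two's-complement platform where plain `char` is signed and 8 bits (the variable name `c1` mirrors the question):

```c
#include <stdio.h>

int main(void) {
    /* 130 is an int; it does not fit in a signed 8-bit char, so on a
       typical two's-complement platform the stored value wraps around. */
    char c1 = 130;                             /* implementation-defined: usually -126 */

    printf("value: %d\n", c1);                 /* -126                 */
    printf("bits:  %#x\n", (unsigned char)c1); /* 0x82, i.e. 1000 0010 */
    return 0;
}
```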

I make that important point about interpretation because in memory both values are identical. You can confirm this by casting the value in the `printf` statements, i.e. `printf("3: %+d\n", (unsigned char)c1);`, and you'll see 130 again.
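To make the "identical in memory" point concrete, here is a small sketch (the -126/130 pair comes from the discussion above; the rest is illustrative):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    signed char   s = -126;
    unsigned char u = 130;

    /* Both objects hold exactly the same bit pattern (1000 0010)... */
    printf("same bits:   %s\n", memcmp(&s, &u, 1) == 0 ? "yes" : "no"); /* yes  */

    /* ...only the interpretation of that byte differs. */
    printf("as signed:   %d\n", s);                                     /* -126 */
    printf("as unsigned: %d\n", u);                                     /*  130 */
    return 0;
}
```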
The reason you see the large value in your first `printf` statement is that you are converting a `signed char` to an `unsigned int`, where the `char` has already overflowed. The machine interprets the `char` as -126 first, and then converts it to `unsigned int`, which cannot represent that negative value, so the value wraps around modulo 2^32 (in effect, 2^32 is added to -126).
2^32 - 126 = 4294967170 . . bingo
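The same arithmetic in code, assuming a 32-bit `unsigned int`:

```c
#include <stdio.h>

int main(void) {
    signed char c1 = -126;           /* what 130 became after the wrap */

    /* Conversion to unsigned int wraps modulo 2^32:
       -126 + 4294967296 = 4294967170                                  */
    unsigned int u = (unsigned int)c1;
    printf("%u\n", u);               /* 4294967170                     */
    return 0;
}
```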
In `printf` statement 2, all the machine has to do is pad with 24 zero bits to reach 32 bits, and then interpret the value as an `int`. In statement 1, you've told it that you have a signed value, so it first turns that into a 32-bit -126, and then interprets that negative integer as an unsigned integer. Again, it flips how it interprets the most significant bit. There are 2 steps (see the sketch after this list):
- The `signed char` is promoted to a `signed int`, because you want to work with `int`s. The `char` is (probably copied and) widened by 24 bits. Because the value is negative, the promotion sign-extends, filling those upper bits with 1s, so the memory here looks quite different.
- The new `signed int` memory is interpreted as unsigned, so the machine looks at the MSB and reads it as +2^31 instead of the -2^31 it meant after the promotion.
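A sketch of those two steps, printing the bit pattern at each stage (again assuming a 32-bit `int` on a two's-complement machine):

```c
#include <stdio.h>

int main(void) {
    signed char c1 = -126;                   /* single byte: 0x82 */

    /* Step 1: promotion to int sign-extends, copying the sign bit into the
       upper 24 bits: 0x82 -> 0xffffff82 (still -126 as a signed int).       */
    int promoted = c1;
    printf("bits after promotion: %08x\n", (unsigned int)promoted); /* ffffff82   */

    /* Step 2: the same 32 bits read as unsigned: the MSB now contributes +2^31
       instead of -2^31, so the value comes out as 4294967170.                */
    printf("read as unsigned:     %u\n", (unsigned int)promoted);   /* 4294967170 */
    return 0;
}
```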
An interesting bit of trivia: you can suppress the clang-tidy linter warning if you write `char c1 = 130u;`, but you still get the same garbage based on the above logic (i.e. the implicit conversion throws away the upper 24 bits, and the sign bit was zero anyhow). I have submitted an LLVM clang-tidy missing-functionality report based on exploring this question (issue 42137 if you really wanna follow it).