#include <stdio.h>
int main() {
    int i = 3771717;
    printf ("%c", i);
    return 0;
}

The output is E. Isn't 69 the ASCII for E?

Anagh Basak
1 Answer

The %c format specifier expects an int argument, which is then converted to an unsigned char and printed as a character.

The int value 3771717 gets converted to the unsigned char value 69 as per the rules for converting a signed integer to an unsigned integer type. The C standard specifies this is done by repeatedly adding or subtracting one more than the maximum value of unsigned char (i.e. 256) until the value is in range. In practice, that means truncating the value to its low-order byte: 3771717 decimal is 0x398D45, so we are left with 0x45, which is 69 decimal.

Then the character with code 69, i.e. 'E', is printed.

dbush
  • Off-topic question: is this how every conversion from a wider operand to a narrower operand is done, i.e. by repeated subtraction? – Anagh Basak Oct 15 '20 at 14:33
  • 2
    @AnaghBasak If the new type is unsigned yes, if signed then it's implementation defined. Keep in mind this is how the C standard describes what the result should be, not how it's actually implemented. Most two's complement machines will just mask out the higher bytes. – dbush Oct 15 '20 at 14:36
  • 2
    @AnaghBasak you might notice that "one more than the maximum" is `0x100` for `unsigned char`, so the result is the same as just chopping off all bytes except the least significant one. – Gerhardh Oct 15 '20 at 14:36
  • 1
    @AnaghBasak Essentially yes. Converting a 64 bit `long` to a 32 bit `int` (assuming two's complement) means taking the 4 lowest bytes (i.e. the 8 lowest hex digits). – dbush Oct 15 '20 at 14:41
  • So my conclusion is: to convert type x to type y, provided x is wider than y, we can just take the last `2 * sizeof (y)` digits of the hex value of x. – Anagh Basak Oct 15 '20 at 14:46