#include <stdio.h>

int main() {
    unsigned int a = -10;
    printf("a=%d\n", a);

    return 0;
}

The above code prints -10 even though `a` is an unsigned int, just as it would for a signed int. If both signed and unsigned print -10, then what is the difference between them?

  • `%d` treats your integer as signed; use `%u` instead. – Marco Luzzara Feb 12 '19 at 11:11
  • The behaviour of this program is *undefined*. It can print -10, or blow up your computer. – n. m. could be an AI Feb 12 '19 at 11:11
  • You are storing a certain bit pattern in `a` (which is a really big number if interpreted as an unsigned int). But then you tell printf to display the value as if it were signed (that's what `%d` does). Since the bit pattern is -10 when interpreted as a signed number, that's what printf displays. You need a different format string for unsigned - try `printf("a=%u\n", a);` – Jerry Jeremiah Feb 12 '19 at 11:14

3 Answers


`printf` doesn't know the type of the argument you're giving it. With `%d` you're telling it the argument is a signed int, which is wrong here. That's undefined behavior - anything could happen. What will likely happen in practice is that it simply interprets that memory as a signed int anyway, and with `unsigned int a = -10;` you have set the unsigned int to a bit pattern that reads back as -10 when interpreted as a signed int. For further info on what happens with that assignment of a negative number to an unsigned type, check out this answer.
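
As a minimal sketch of that conversion rule (the exact value depends on `UINT_MAX` for your platform), assigning -10 to an unsigned int is well defined and yields `UINT_MAX - 9`, which a matching `%u` prints without any undefined behavior:

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int a = -10;  /* well-defined conversion: wraps to UINT_MAX + 1 - 10 */

    printf("a          = %u\n", a);
    printf("UINT_MAX-9 = %u\n", UINT_MAX - 9u);  /* same value as a */

    return 0;
}

On a platform where unsigned int is 32 bits wide, both lines print 4294967286.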

Blaze

You actually have undefined behavior in that code.

The "%d" format is for plain signed int, and mismatching format specifier and argument leads to UB.

Since `printf` doesn't have any idea of the real types being passed, it has to rely only on the format specifiers. So what probably happens is that the `printf` function simply treats the value as a plain signed int and prints it as such.
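
As a small sketch of that point (the variable names here are just for illustration), the reliable fix is to make each specifier match the actual type of its argument:

#include <stdio.h>

int main(void) {
    unsigned int a = -10;  /* ends up holding a large unsigned value */
    int b = -10;           /* a plain signed int */

    printf("a=%u\n", a);   /* unsigned int pairs with %u */
    printf("b=%d\n", b);   /* plain int is what %d expects */

    return 0;
}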

Some programmer dude

You should use

printf("a=%u\n",a);

to print `a` as an unsigned integer.
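
Putting that into the original program gives something like the sketch below; the value shown in the comment assumes a 32-bit unsigned int:

#include <stdio.h>

int main() {
    unsigned int a = -10;
    printf("a=%u\n", a);  /* prints 4294967286 where unsigned int is 32 bits */

    return 0;
}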