
Total C newbie here. When I run the following code:

#include <stdio.h>

int main(int argc, char *argv[])
{
  unsigned int x = 1;
  printf("%d\n", x);
  x = x - 2;
  printf("%d\n", x);
}

I get the output:

1
-1

i.e. signed output.

The compilation command was cc -Wall -g -fno-stack-protector exCurrent.c -o exCurrent, and I ran this on macOS 10.14.

What's happening?

Jack Kinsella

3 Answers


printf doesn't know anything about the type of the variable you pass to it. It simply sees the bits and renders that bit pattern according to the conversion specifier (%d here).

As you can see, the bit pattern is exactly the same whether -1 is stored in a signed or an unsigned integer:

#include <stdio.h>

int main(int argc, char *argv[])
{
  unsigned int x = -1;
  int y = -1;
  printf("%X\n", x);
  printf("%X\n", y);
}

Output:

FFFFFFFF
FFFFFFFF

You can use the %u conversion specifier to print the variable's value as an unsigned int:

#include <stdio.h>

int main(int argc, char *argv[])
{
  unsigned int x = 1;
  printf("%u\n", x);
  x = x - 2;
  printf("%u\n", x);
}

Output:

1
4294967295
ruohola

C did respect your unsigned declaration.

The problem is that printf receives the argument without any type information and treats it as a signed int because of the %d.

This means you should use %u, the conversion specifier for unsigned integers, instead of %d.

Subtracting 2 from 1 would give -1 as a signed integer. With unsigned arithmetic the result instead wraps around to the highest possible unsigned value (depending on your system, e.g. 2^32 - 1 if it is a 32-bit integer).

So if you use %u instead of %d, you'll get that highest possible unsigned int value rather than -1.
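
A minimal sketch of that wrap-around (the concrete value 4294967295 assumes a 32-bit unsigned int, as on the asker's system):

#include <stdio.h>
#include <limits.h>

int main(void)
{
  unsigned int x = 1;
  x = x - 2;                /* wraps around modulo UINT_MAX + 1 */
  printf("%u\n", x);        /* prints 4294967295 here */
  printf("%u\n", UINT_MAX); /* same value: the largest unsigned int */
}

Output:

4294967295
4294967295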

dan1st

Check the format specifiers in C. It's %u for unsigned integers.
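
As a quick demonstration, here is the question's code printing the same wrapped value with both specifiers (a sketch; the value 4294967295 assumes a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
  unsigned int x = 1;
  x = x - 2;          /* x is now UINT_MAX (4294967295 for 32 bits) */
  printf("%d\n", x);  /* %d reinterprets the bits as signed: prints -1 */
  printf("%u\n", x);  /* %u prints the actual unsigned value */
}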