
I assigned -1 to an unsigned int and, assuming it would produce an error, compiled the code. Much to my astonishment, it compiled without complaint, and I can't understand the reason behind it.

I have tried printing the values to check them manually, but it always shows -1.


```c
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int main(int argc, char *argv[]) {
    unsigned int x = -1;
    printf("%u\n", x);
    int y = -1;
    printf("%d\n", y);
    if (x == y) {
        printf("\nsame");
    } else {
        printf("n s");
    }
    return 0;
}
```

The expected result would have been an error or at least a warning, but it compiled as-is.

tomerpacific
  • Maybe just look at this question: [Assigning negative numbers to an unsigned int](https://stackoverflow.com/questions/5169692/assigning-negative-numbers-to-an-unsigned-int) – Morta Jun 24 '19 at 10:49
  • 3
    Time to learn about [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) (which is the most common way to represent negative integer values on binary computers). – Some programmer dude Jun 24 '19 at 10:50
  • 3
    Possible duplicate of [What happens if I assign a negative value to an unsigned variable?](https://stackoverflow.com/questions/2711522/what-happens-if-i-assign-a-negative-value-to-an-unsigned-variable) – KamilCuk Jun 24 '19 at 10:50
  • If you are using gcc, try compiling with the option `-Wall` (all warnings). – Sir Jo Black Jun 24 '19 at 10:51
  • Unsigned types use modulo arithmetic. So `unsigned int x = -1` initialises `x` with the largest value that an `unsigned int` can represent (a large positive value). You'll see that if you print it out. – Peter Jun 24 '19 at 10:52
  • @Peter I don't think this is true on all CPU architectures. In some cases it is better to treat this assignment as UB. – Sir Jo Black Jun 24 '19 at 11:00
  • @SirJoBlack: No, Peter is correct. – Bathsheba Jun 24 '19 at 11:03
  • 1
    @SirJoBlack - the behaviour is specified in all C standards. Historically, it was specified that way in C, because all real-world implementations do it that way. And that is still the case. It is signed integer overflow or underflow that yields undefined behaviour, and there is variation in how hardware does it. – Peter Jun 24 '19 at 11:03
  • @Peter, thanks for the clarification. Many years ago I used a compiler that gave me problems because it didn't use two's complement. In that compiler, -1 (in a short int) was 0x8001! – Sir Jo Black Jun 24 '19 at 11:15

2 Answers

2

It works because this:

```c
unsigned int x = -1;
```

causes the int-typed expression -1 to be converted to an unsigned int, which is a standard, well-specified conversion. The draft C11 spec says:

6.3.1.3 Signed and unsigned integers

1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.60)

The latter is what is happening here: assuming 32-bit integers, 2^32 is added once, making the result 0xffffffff.

I don't believe that the printf() with %u prints -1, that would be a rather major bug.

unwind
  • Oh yes, I was just playing around with the format specifiers to get a hint of what's going on and I think I may have left it at that before posting the code from my IDE. – Divyansh Bhardwaj Jun 24 '19 at 11:32
0

This is allowed by the standard. When you assign -1 to an unsigned type, the value is converted, conceptually at least, by repeatedly adding 2^n (where n is the number of bits in that type) until it is in the range of the type.

This conversion also happens to the signed operand in the comparison `x == y`: that comparison takes place in unsigned arithmetic, since one of the operands has an unsigned type.

Bathsheba