
I'm reading a book that says:

"In an expression like a * b, if a is -1 and b is 1, then if both a and b are ints, the value is, as expected -1. However, if a is int and b is an unsigned, then the value of this expression depends on how many bits an int has on the particular machine. On our machine, this expression yields 4294967295."

My question is:

Why does the value of this expression depend on how many bits an int has on the particular machine? And how did they get that result of 4294967295 (at the end of the text)?

Sorry for my bad english.

  • The difference between an *unsigned* and *signed* int is that a *signed* integer reserves a bit to indicate the sign. If you have a 16-bit value, you may need 17 bits for the sign. – Thomas Matthews Jul 01 '18 at 20:21

4 Answers


When multiplying a signed and an unsigned value, the signed value is converted to unsigned. The conversion rules dictate that -1 gets converted to the maximum representable value (this answer explains nicely how the conversion is actually done according to the standard), an unsigned value having all bits set (with 2's complement, this is an identity operation on the bit pattern...).

Now the C++ standard (conforming to the C standard in this respect) does not dictate how many bits an int must have; all that is required is that unsigned int must be able to hold the integral values from 0 up to 65535 (hexadecimal FFFF) (*). To meet that requirement, (unsigned) int cannot be smaller than 16 bits, but more bits are allowed as well, e.g. 32 on the machine you are testing with. Now guess which value results from setting all of these 32 bits to 1 (hexadecimal FFFFFFFF)...

(*) Not exactly true; actually the standard additionally dictates that int be no smaller than short and no greater than long, but that is not relevant here...

Aconcagua

It's because of the type conversions that are applied to operands of different types before the arithmetic operation. See, for example, Arithmetic operators at cppreference.com:

Arithmetic operators

... integral conversions are applied to produce the common type, as follows:

..., if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.

So when you multiply an int with an unsigned int, the int value will be converted to an unsigned int right before the multiplication is done. And if your architecture uses 2's complement to represent negative values, a -1 is represented as FFFFFFFFFFFFFFFF when unsigned int is 64 bits, or as FFFFFFFF when unsigned int is 32 bits.

You seem to have 32-bit unsigned ints, because 4294967295 is the same as FFFFFFFF, i.e. 2^32 - 1.

That's why. Hope it helps.

Stephan Lechner
  • *"if your architecture uses 2-complement"* might give wrong impression; signed to unsigned conversion is [well defined](https://stackoverflow.com/a/50632/1312382) by the standard and does not depend on integer representation (whereas the other direction really is implementation defined). So even with 1sc or sign magnitude you'd get UINT_MAX after conversion. Nice thing about 2sc, though, is that this operation is identity... – Aconcagua Jul 02 '18 at 05:34

With an unsigned int of n bits, you can encode numbers from 0 to 2^n - 1 (inclusive).

4294967295 = 2^32 - 1

Olivier Sohn

In the simplest terms possible: the signed variable is converted to an unsigned variable. When an unsigned variable would go below zero, it wraps around to its maximum value and counts down from there instead of becoming negative (which it cannot do). In a 32-bit system the values wrap modulo 4294967296 (2^32). Subtract 1 from 0 and you get 4294967295, the number they show you. If there is a different number of bits on that machine, the maximum will be different, and therefore so will the number.

  • Maximum number is one less! Standard actually dictates adding UINT_MAX + 1 to the negative value until it gets non-negative. – Aconcagua Jul 02 '18 at 05:37