
Considering that I'm compiling a 32-bit application (even though that doesn't change anything), is this code safe?

cout << (numeric_limits<unsigned int>::max() + 1) << endl;

It prints "0", but does it affect a bit of another variable? Let's say I have a byte (char) with the following bits and then I add 1: 1111 1111 (255). Would the result be "1 0000 0000" (256), with the CPU only reading the last 8 bits as my variable, or would it just reset the bit sequence?

Benoît Dubreuil
  • possible duplicate of [Is using unsigned integer overflow good practice?](http://stackoverflow.com/questions/988588/is-using-unsigned-integer-overflow-good-practice) – Joe Jan 03 '15 at 02:50
  • How on earth does this question have anything to do with the close reason? – T.C. Jan 03 '15 at 03:59

2 Answers


The C++ standard draft, §3.9.1/4, requires that

Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.

So the code in your question is guaranteed to output 0, as dictated by the laws of modulo arithmetic.

Note that the rule mentioned above doesn't apply to char, as it's not declared unsigned (you would need to use unsigned char instead).
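To make this concrete, here is a minimal sketch (using only standard headers) showing both the guaranteed wrap of unsigned int and the behaviour you get once you switch to unsigned char:

#include <iostream>
#include <limits>

int main() {
    // Unsigned arithmetic wraps modulo 2^n, so max + 1 is guaranteed to be 0.
    std::cout << (std::numeric_limits<unsigned int>::max() + 1) << '\n'; // prints 0

    // Plain char may be signed, so it isn't covered by the rule above;
    // an unsigned char wraps the same way once the result is stored back.
    unsigned char c = 255;
    c = c + 1; // c + 1 is computed as int (256), then wraps to 0 modulo 2^8 on assignment
    std::cout << static_cast<int>(c) << '\n'; // prints 0
    return 0;
}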

milleniumbug

Unsigned integer types obey the laws of arithmetic modulo 2^N. The result in this case will always be 0 and no other memory will be overwritten.
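As a quick check of the "no other memory will be overwritten" point, here is a small sketch (the struct and member names are just for illustration): incrementing a member past its maximum wraps it in place and leaves its neighbour untouched.

#include <iostream>

struct Pair {
    unsigned char counter;   // will wrap from 255 back to 0
    unsigned char neighbour; // should remain untouched
};

int main() {
    Pair p{255, 42};
    ++p.counter; // arithmetic modulo 2^8: no carry spills into the next byte
    std::cout << int(p.counter) << ' ' << int(p.neighbour) << '\n'; // prints "0 42"
    return 0;
}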

Brian Bi