
From what I can tell, "overflow" works very much like "narrowing" (if not outright the same exact thing).

For example, consider an unsigned char object holding the value 255; its bit pattern would be all 1s: 1111 1111

Now add 1 to the object: char_object++;

Widening (integral promotion) would occur on some temporary object, 1 is added to it, and so the value of the temporary should be 256: 0000 0001 0000 0000

The temporary is then assigned back into the original object, causing narrowing (the upper bits are discarded), which leaves it with the value 0.

If this works as narrowing does, I'm curious why the standard suggests that on some machines, overflow causes exceptions? Some books even suggest undefined behavior as the result. Would this not imply that narrowing would do the same thing on said machines? If they are not the same thing, then how are they different?

(edit:) Perhaps comparing the bit pattern of an unsigned 8-bit object against that of a signed 8-bit object can make this clearer? It seems that in 2's complement, the bit pattern doesn't change, but its interpretation does. Anyway, this still doesn't truly answer the question, "What's the difference between narrowing and overflow?" They still seem to be the same thing:

#include <bitset>
#include <cstdint>
#include <iostream>

void show_bits(int8_t&);

int main()
{
    // Increment an int8_t starting from 1 until it wraps back around to 0.
    for (int8_t number{ 1 }; number; ++number)
    {
        show_bits(number);
    }
    return 0;
}

void show_bits(int8_t& number)
{
    std::cout << static_cast<int16_t>(number) << ' ';  // value read as signed
    std::cout << '(' << static_cast<uint16_t>(static_cast<uint8_t>(number)) << ')' << '\t';  // same bits read as unsigned
    std::bitset<sizeof(int8_t) * 8> bits_of(number);   // the underlying bit pattern
    std::cout << bits_of << '\n';
}
j_burks
    Conversion isn't overflow. `c = 100000;` is fine, with perhaps implementation-defined value. Overflow is something that happens in an arithmetic expression, e.g. `int n = INT_MAX; ++n;` has overflow. Only signed integers overflow. (So if `sizeof(unsigned int) == 1`, then unsigned char is promoted to unsigned int, and `char_object++` does not overflow.) – Kerrek SB Jul 04 '17 at 00:35
    "Widening would occur on some temporary object, having 1 added to it, so the bit pattern (on the temporary) should be 256: 0000 0001 0000 0000 The temporary is then copy-assigned into the original object, causing narrowing (losing the left-most byte), which leaves it with the value of 0." This might be true on a conceptual level. – Captain Giraffe Jul 04 '17 at 00:44
  • @CaptainGiraffe, it is why I had to ask this potentially inane question. I don't know enough about computer architecture, or enough about how C++ code works "under the hood." I think many programmers wouldn't really care about this sort of thing, but I am strange in that regard. – j_burks Jul 04 '17 at 00:51
  • Possible duplicate of [Why is unsigned integer overflow defined behavior but signed integer overflow isn't?](https://stackoverflow.com/questions/18195715/why-is-unsigned-integer-overflow-defined-behavior-but-signed-integer-overflow-is) – geza Jul 04 '17 at 00:55
  • @KerrekSB, I sort of get what you mean. A signed int with the maximum value that type could hold would have the bit pattern 0111 1111 1111 1111 1111 1111 1111 1111. Adding one would become 1000 0000 0000 0000 0000 0000 0000 0000. – j_burks Jul 04 '17 at 00:59

1 Answer


There is no intuition to find here, only specification (see https://stackoverflow.com/a/83763/451600). The spec may have historical reasons for its decisions.

For instance, the fact that ++unsigned_max is well-defined and ++signed_max is not is likely related to the fact that not all signed integers are represented the same way. Two's complement isn't mandated.

The compiler is a black box to a C++ programmer/program, as long as it follows the spec.

Captain Giraffe