From what I can tell, "overflow" works very much like "narrowing" (if not outright the same exact thing).
For example, take an unsigned char object with the value 255; its bit pattern would be all 1s: 1111 1111
So, add 1 to the object: char_object++;
Widening (integral promotion) would occur first, with the 1 added to a wider temporary, so the bit pattern on the temporary should represent 256: 0000 0001 0000 0000
The temporary is then assigned back into the original object, causing narrowing (losing the upper byte), which leaves it with the value 0.
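Here's a minimal sketch of that sequence as I understand it (assuming an 8-bit unsigned char; the names widened and char_object are just mine for illustration):

#include <iostream>

int main()
{
    unsigned char char_object = 255;   // bit pattern: 1111 1111
    int widened = char_object + 1;     // operands are promoted to int, so this holds 256
    char_object++;                     // same promotion, but the result is converted back and wraps to 0
    std::cout << widened << ' ' << static_cast<int>(char_object) << '\n';   // prints: 256 0
    return 0;
}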
If this works the same way narrowing does, I'm curious why the standard suggests that, on some machines, overflow causes exceptions. Some books even suggest undefined behavior as the result. Would that not imply that narrowing does the same thing on those machines? And if they are not the same thing, how exactly do they differ?
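To be concrete, these are the two operations I'm trying to tell apart; the comments reflect how I currently read the standard and the books, so correct me if I've mislabelled either one:

#include <climits>

int main()
{
    // Case 1: arithmetic overflow of a signed type. This is the case the books
    // describe as undefined behavior (or a possible trap on some machines).
    int big = INT_MAX;
    // ++big;                   // signed overflow, left commented out on purpose

    // Case 2: conversion to a narrower unsigned type. The value is reduced
    // modulo 2^N, so on an 8-bit unsigned char c ends up as 0.
    unsigned char c = 256;
    (void)big;
    (void)c;
    return 0;
}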
(Edit:) Perhaps comparing the bit pattern of an unsigned 8-bit object against that of a signed 8-bit object makes this clearer? It seems that in 2's complement the bit pattern doesn't change, only the value it represents does. Anyway, this still doesn't really answer the question, "What's the difference between narrowing and overflow?", because they still seem to be the same thing:
#include <bitset>
#include <cstdint>
#include <iostream>

void show_bits(int8_t&);

int main()
{
    // Start at 1 and keep incrementing; the loop ends when the value wraps
    // back around to 0 after passing through every other 8-bit pattern.
    for (int8_t number{ 1 }; number; ++number)
    {
        show_bits(number);
    }
    return 0;
}

void show_bits(int8_t& number)
{
    // Print the value as signed (cast so it isn't treated as a character).
    std::cout << static_cast<int16_t>(number) << ' ';
    // Print the value converted to unsigned (the same bit pattern in 2's complement).
    std::cout << '(' << static_cast<uint16_t>(static_cast<uint8_t>(number)) << ')' << '\t';
    // Show the raw 8-bit pattern.
    std::bitset<sizeof(int8_t) * 8> bits_of(number);
    std::cout << bits_of << '\n';
}
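If I'm reading the output correctly, the interesting part is right at the sign boundary; assuming 2's complement (which int8_t guarantees), it should look roughly like this:

126 (126)       01111110
127 (127)       01111111
-128 (128)      10000000
-127 (129)      10000001
...
-1 (255)        11111111

The bit patterns keep counting up straight through the 127/-128 boundary; only the signed reading of them jumps.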