I know that when overflow occurs in C/C++, the usual behavior is to wrap around. For example, INT_MAX + 1 overflows.

Is it possible to modify this behavior, so that binary addition takes place as normal addition and there is no wraparound at the end of the addition operation?

Some code so this makes sense. Basically, this is a software full adder: it builds the sum bit by bit across all 32 bits, computing the carries by hand instead of using +.

int adder(int x, int y)
{
    int sum = x;            // covers the case where y is 0 to begin with
    while (y != 0)          // keep going until no carry bits remain
    {
        sum = x ^ y;        // add each pair of bits, ignoring carries
        int carry = x & y;  // positions that generate a carry
        x = sum;
        y = carry << 1;     // feed each carry into the next higher bit
                            // (shifting into the sign bit of a signed int
                            // is itself not well defined)
    }

    return sum;
}

If I try adder(INT_MAX, 1); it still overflows, even though I am not using the + operator.

Thanks!

– newprint

3 Answers

Overflow means that the result of an addition would exceed std::numeric_limits<int>::max() (back in C days, we used INT_MAX). Performing such an addition results in undefined behavior. The machine could crash and still comply with the C++ standard. Although you're more likely to get INT_MIN as a result, there's really no advantage to depending on any result at all.

The solution is to test with subtraction instead of addition, so the check itself cannot overflow, and to handle the boundary as a special case:

if ( number > std::numeric_limits< int >::max() - 1 ) { // ie number + 1 > max
    // fix things so "normal" math happens, in this case saturation.
} else {
    ++ number;
}

Without knowing the desired result, I can't be more specific about it. The performance impact should be minimal, as a rarely-taken branch can usually be retired in parallel with subsequent instructions without delaying them.
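
For instance, a saturating add over the whole int range might look like the following (a minimal sketch; the name saturating_add and the decision to clamp at both limits are mine, not part of the answer):

#include <limits>

// Adds two ints, clamping at INT_MIN/INT_MAX instead of overflowing.
// Each comparison is arranged as a subtraction that cannot itself overflow.
int saturating_add(int a, int b)
{
    if (b > 0 && a > std::numeric_limits<int>::max() - b)
        return std::numeric_limits<int>::max();   // would overflow upward
    if (b < 0 && a < std::numeric_limits<int>::min() - b)
        return std::numeric_limits<int>::min();   // would overflow downward
    return a + b;                                  // safe to add normally
}

A checked-add variant could just as well report the condition instead of clamping; the test is identical.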

Edit: To simply do math without worrying about overflow or handling it yourself, use a bignum library such as GMP. It's quite portable, and usually the best on any given platform. It has C and C++ interfaces. Do not write your own assembly. The result would be unportable, suboptimal, and the interface would be your responsibility!
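
A rough illustration of GMP's C++ interface, assuming GMP and its C++ wrapper are installed and the program is linked with -lgmpxx -lgmp:

#include <gmpxx.h>
#include <iostream>
#include <limits>

int main()
{
    // mpz_class grows as needed, so INT_MAX + 1 is just another number.
    mpz_class a = std::numeric_limits<int>::max();
    mpz_class b = a + 1;        // no wraparound, no undefined behavior
    std::cout << b << '\n';     // prints 2147483648 with 32-bit int
    return 0;
}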

– Potatoswatter

No; you have to add them manually and check for overflow yourself.
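
One way to do that by hand (not necessarily what this answer had in mind) is to perform the addition on unsigned int, where wraparound is well defined, and then compare signs; the helper name add_overflows is mine:

#include <climits>
#include <iostream>

// Signed overflow can only happen when both operands have the same sign,
// and in that case the wrapped result has the opposite sign.
bool add_overflows(int a, int b)
{
    unsigned int ur = static_cast<unsigned int>(a) + static_cast<unsigned int>(b);
    int r = static_cast<int>(ur);   // implementation-defined before C++20
    return ((a >= 0) == (b >= 0)) && ((r >= 0) != (a >= 0));
}

int main()
{
    std::cout << std::boolalpha
              << add_overflows(INT_MAX, 1) << '\n'   // true
              << add_overflows(100, 200)   << '\n';  // false
    return 0;
}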

– Potatoswatter, nerozehl

What do you want the result of INT_MAX + 1 to be? You can only fit INT_MAX into an int, so if you add one to it, the result is not going to be one greater. (Edit: on common platforms such as x86 it is going to wrap to the most negative value, -(INT_MAX+1).) The only way to get bigger numbers is to use a larger variable.
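
If a wider type is available, widening before the addition is all it takes. A small sketch, assuming the usual x86 situation where int is 32 bits and long long is 64:

#include <climits>
#include <iostream>

int main()
{
    int a = INT_MAX;
    int b = 1;

    // Promote one operand first so the addition itself happens in long long.
    long long sum = static_cast<long long>(a) + b;

    std::cout << sum << '\n';   // 2147483648, no wraparound
    return 0;
}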

Assuming int is 4 bytes (as is typical with x86 compilers) and you are executing an add instruction (in 32-bit mode), the destination register simply overflows -- it is out of bits and can't hold a larger value. It is a limitation of the hardware.

To get around this, you can hand-code it, or use an arbitrarily-sized integer library that does the following:

  • First perform a normal add instruction on the lowest-order words. If overflow occurs, the Carry flag is set.
  • For each increasingly-higher-order word, use the adc instruction, which adds the two operands as usual, but also takes the value of the Carry flag into account (as a value of 1).

You can see this for a 64-bit value here.
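
The same scheme can be sketched portably, without assembly, by splitting a 64-bit value into 32-bit words; the struct and function names here are purely illustrative:

#include <cstdint>
#include <iostream>

// A 64-bit value stored as two 32-bit words, summed word by word
// the way add/adc would do it.
struct u64parts {
    std::uint32_t lo;
    std::uint32_t hi;
};

u64parts add_with_carry(u64parts a, u64parts b)
{
    u64parts r;
    r.lo = a.lo + b.lo;                   // like "add": unsigned, may wrap
    std::uint32_t carry = (r.lo < a.lo);  // a wrap means the carry out was 1
    r.hi = a.hi + b.hi + carry;           // like "adc": fold the carry in
    return r;
}

int main()
{
    u64parts a{0xFFFFFFFFu, 0x00000000u}; // low word is all ones
    u64parts b{0x00000001u, 0x00000000u};
    u64parts r = add_with_carry(a, b);
    std::cout << std::hex << r.hi << ' ' << r.lo << '\n';  // prints "1 0"
    return 0;
}

The carry out of the high word is simply dropped here; a real bignum library keeps extending with further words in exactly the same pattern.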

– Jonathon Reinhart

  • With 2's complement arithmetic it won't wrap to zero, it will wrap to `-(INT_MAX+1)`. And of course it can be compiler dependent although I'm not aware of one that won't let the underlying processor architecture make the determination. – Mark Ransom Feb 28 '12 at 03:20
  • Thank you, corrected. I had unsigned integers in my head. Sorry about that. – Jonathon Reinhart Feb 28 '12 at 03:22
  • try "if you add one to it, **the result is not going to be one greater**" – Potatoswatter Feb 28 '12 at 03:22
  • The question is for C++, not x86-64 assembly. Wraparound is not guaranteed and there is no carry flag. – Potatoswatter Feb 28 '12 at 03:33
  • Care to explain how wraparound could not occur? – Jonathon Reinhart Feb 28 '12 at 03:35
  • The machine could do anything; it's undefined behavior. Some machines implement saturation or integral overflow exceptions. You could get a wraparound to `-INT_MAX` on a one's complement machine, or any other bizarre result. – Potatoswatter Feb 28 '12 at 03:44
  • @Potatoswatter, as I said the compiler will typically leave the behavior to be defined by the underlying processor architecture. If you're aware of a counter-example I'd love to hear it. – Mark Ransom Feb 28 '12 at 04:25
  • The argument could be made that this is processor-dependent, but it is more than very likely the OP is on x86, in which case the value would wrap at 0x7FFFFFFF + 1 => -0x80000000. – Jonathon Reinhart Feb 28 '12 at 04:36
  • @MarkRansom: See [this question](http://stackoverflow.com/questions/9476036/c-only-unary-minus-for-0x80000000/9476111#9476111), asked at almost the same time by a different person. In an integral constant expression, overflow results in a compiler error regardless of the architecture. (This is required, not undefined behavior.) Anyway, even granting this answer's unnecessary assumptions, it's still bad advice. – Potatoswatter Feb 28 '12 at 05:45
  • @Jonathon: -1. Undefined behaviour is undefined. *You can't possibly rely on anything, **by definition***. See this question http://stackoverflow.com/questions/7682477/gcc-fail-or-undefined-behavior for an example of "usually X happens, but sometimes everything gets frakked up beyond all repair". – R. Martinho Fernandes Feb 29 '12 at 01:00
  • I'm sorry, but I don't hear "normal addition without wraparound" as "saturation". If the OP wanted saturation, he should have specified that when he posted. Note that the first sentence of my answer is still correct. – Jonathon Reinhart Feb 29 '12 at 05:10