
I've been running a C program with an unsigned int variable set to 4294967295, the maximum value an unsigned int can hold in C (with 32-bit int). I add 2147483648 to it with the statement `v += 2147483648;`, then use gdb to disassemble the compiled executable and check eflags after executing that exact statement. I get a carry flag (CF), which understandably shows up since the addition went out of the unsigned range, but why exactly do I also get an overflow flag (OF)? I understand that the CPU doesn't care whether the values are signed or unsigned and has its own rules for deciding whether to set the overflow flag, the carry flag, or both. I just want to know the mechanism by which it decided to set the overflow flag in addition to the carry flag.

Here is a screenshot of the C program that I compiled and disassembled with gdb ("The C code"). After running the line where v is added to that big value, I end up with both a carry flag and an overflow flag set ("gdb disassembled C code view"). What is the reason for getting the overflow flag in particular? Here is the assembly for further detail ("Assembly disassembled code").
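
For reference, here is a minimal version of the program, reconstructed from the description above, so it may differ slightly from the screenshot:

```c
#include <stdio.h>

int main(void)
{
    unsigned int v = 4294967295u;  /* UINT_MAX for a 32-bit unsigned int: 0xFFFFFFFF */
    v += 2147483648u;              /* 0x80000000; the generated add sets both CF and OF */
    printf("%u\n", v);             /* prints 2147483647 after the unsigned wrap-around */
    return 0;
}
```

Compiled without optimization (e.g. `gcc -O0`), gdb shows an `add` instruction for that line; stepping over it and then running `info registers eflags` shows both CF and OF set.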

  • When considered signed (which is what overflow is interested in), `4294967295` actually means `-1`. Similarly, `2147483648` is `-2147483648`. Adding these two should produce `-2147483649`, but that is out of range for signed (the result will be `+2147483647` instead), hence the overflow. Technically, the overflow flag for addition is set whenever adding two numbers of the same sign produces a result with the opposite sign. – Jester Jan 23 '23 at 20:09
  • As Jester says, the processor sets the overflow flag according to signed integer arithmetic (2's complement) rules, which means interpreting the operands and the result all as signed integers. With that view, there is signed integer overflow in your example. As you have stated, the processor doesn't care whether the program is doing signed or unsigned arithmetic; they are both the same except for the definition of unsigned overflow (CF) vs. signed overflow (OF). – Erik Eidt Jan 23 '23 at 20:14
  • It seems you mean the instructions the compiler generated set OF when you *run* them, not when you *disassemble* them. Disassembly is when GDB or objdump turns machine code into text, and doesn't involve running it. Also, with different tuning or optimization options, a compiler like GCC might have used no instructions (optimizing away unused work), or might have used `lea` to do the same math, but LEA doesn't set FLAGS. So it doesn't matter where you got this code, just what the asm actually does. (Which is to add `-1` to `INT_MIN`, which creates signed overflow.) – Peter Cordes Jan 23 '23 at 20:24
  • Related duplicate re: 2's complement: [Why unsigned int 0xFFFFFFFF is equal to int -1?](https://stackoverflow.com/q/1863153) – Peter Cordes Jan 24 '23 at 00:52
  • The CPU has no idea whether your data is intended to be signed or unsigned. It sets the overflow flag any time an addition or subtraction crosses the `7FFFFFFF-80000000` boundary. The overflow flag being set is only a problem if you care about the signed interpretation. If you're working with unsigned variables, just ignore it. – puppydrum64 Jan 24 '23 at 10:51
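
A minimal standalone sketch illustrating the signed reinterpretation described in the comments above (variable names here are arbitrary, not from the original post):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = UINT32_C(4294967295);  /* bit pattern 0xFFFFFFFF */
    uint32_t b = UINT32_C(2147483648);  /* bit pattern 0x80000000 */
    uint32_t sum = a + b;               /* the same 32-bit add the CPU performs */

    /* Reinterpret the identical bit patterns as two's-complement signed values
       (implementation-defined conversion, but this is what gcc/clang on x86 do). */
    int32_t sa = (int32_t)a;            /* -1 */
    int32_t sb = (int32_t)b;            /* -2147483648, i.e. INT32_MIN */
    int32_t ss = (int32_t)sum;          /*  2147483647, i.e. INT32_MAX */

    printf("unsigned view: %" PRIu32 " + %" PRIu32 " = %" PRIu32 "  (wrapped past UINT32_MAX -> CF)\n",
           a, b, sum);
    printf("signed view:   %" PRId32 " + %" PRId32 " = %" PRId32 "  (negative + negative gave a positive -> OF)\n",
           sa, sb, ss);
    return 0;
}
```

Both views share the same 32-bit result bits; the CPU sets CF based on the unsigned view and OF based on the signed view of that single add.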

0 Answers