
I simply want to know: who is responsible for dealing with arithmetic overflow in a computer?

For example, in the following C++ code:

short x = 32768;
std::cout << x;

Compiling and running this code on my machine gave me a result of -32767.

A "short" variable's size is 2 bytes .. and we know 2 bytes can hold a maximum decimal value of 32767 (if signed) .. so when I assigned 32768 to x .. after exceeding its max value 32767 .. It started counting from -32767 all over again to 32767 and so on .. What exactly happened so the value -32767 was given in this case ? ie. what are the binary calculations done in the background the resulted in this value ?

So, who decides that this happens? I mean, who is responsible for deciding that when an arithmetic overflow happens in my program, the variable's value simply wraps around to its minimum, or an exception is thrown, or the program freezes, etc.?

Is it the language standard, the compiler, my OS, my CPU, or something else? And how does it deal with that overflow situation? (A simple explanation or a link explaining it in detail would be appreciated. :) )

Also, who decides what the size of a short int is on my machine? Is that the language standard, the compiler, the OS, the CPU, etc.?

Thanks in advance! :)

Edit: OK, so I understood from this question: Why is unsigned integer overflow defined behavior but signed integer overflow isn't?

that it's the processor that defines what happens in an overflow situation (for example, on my machine it started from -32767 again), depending on the processor's representation for signed values, i.e. whether it uses sign magnitude, one's complement, or two's complement.

Is that right? And in my case (where the result looked as if it started from the minimum value -32767 again), how do you suppose my CPU is representing signed values, and how did a value like -32767 come up? (Again, the binary calculations that lead to this, please. :) )
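
To make that concrete, here is a small sketch of how I currently picture it at the bit level (this assumes a 16-bit short and two's complement representation, which is only a guess about my machine):

    #include <cstdint>
    #include <iostream>

    int main() {
        // 32767 in a 16-bit short is 0111 1111 1111 1111.
        std::int16_t max16 = 32767;

        // Adding 1 to that bit pattern gives 1000 0000 0000 0000 (0x8000).
        // The addition is done on an unsigned copy, because unsigned
        // wraparound is well defined in C++ while signed overflow is not.
        std::uint16_t wrapped_bits = static_cast<std::uint16_t>(max16) + 1u;

        // Reinterpreting that pattern as a signed value: under two's
        // complement the top bit counts as -2^15, so it reads as -32768.
        // Sign magnitude or one's complement would read it differently.
        std::int16_t wrapped = static_cast<std::int16_t>(wrapped_bits);

        std::cout << wrapped << '\n'; // -32768 on a two's-complement machine
    }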

Stefan Madlo

1 Answer


It doesn't start at its minimum value per se. It just truncates the value: for a 4-bit number, you can count up to 1111 (binary, = 15 decimal). If you increment by one, you get 10000, but there is no room for that, so the leading digit is dropped and 0000 remains. If you calculated 1111 + 10, you'd get 1.

You can add them up as you would on paper:

  1111
  0010
  ---- +
 10001

But instead of adding up the entire number, the processor only adds up to (in this case) 4 bits. After that there is no more room, but if there is still a 1 to carry, it sets the carry (overflow) flag in its status register, so you can check whether the last addition overflowed.
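
As a rough sketch in C++ (not how the hardware literally works, just a model of a 4-bit adder; the names add4 and Add4Result are invented for this example), the truncation and the carry flag could be pictured like this:

    #include <cstdint>
    #include <iostream>

    // Model of a 4-bit add: keep only the low 4 bits of the result and
    // report whether a carry out of the top bit occurred (the "flag").
    struct Add4Result {
        std::uint8_t value; // result truncated to 4 bits
        bool         carry; // true if the sum did not fit in 4 bits
    };

    Add4Result add4(std::uint8_t a, std::uint8_t b) {
        std::uint8_t full = (a & 0x0F) + (b & 0x0F); // real sum, up to 5 bits
        return { static_cast<std::uint8_t>(full & 0x0F), (full & 0x10) != 0 };
    }

    int main() {
        Add4Result r = add4(0b1111, 0b0010);          // the 1111 + 0010 example
        std::cout << static_cast<int>(r.value)        // prints 1 (binary 0001)
                  << ", carry = " << r.carry << '\n'; // carry = 1
    }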

Processors have basic instructions to add numbers, and they have them for both smaller and larger values. A 64-bit processor can add 64-bit numbers (strictly speaking, the instruction usually doesn't produce a separate result but adds the second operand into the first, modifying it, but that's not important for the story).

But apart from 64-bit values, they can often also add 32-, 16- and 8-bit numbers. That's partly because it can be more efficient to add only 8 bits if you don't need more, but also to stay backwards compatible with programs written for an older processor that could add 32-bit but not 64-bit numbers.

Such a program uses an instruction to add 32-bit numbers, and the same instruction must also exist on the 64-bit processor, with the same behaviour on overflow; otherwise the program wouldn't run properly on the newer processor.
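
Here's a small sketch of that idea from the C++ side: the fixed-width unsigned types wrap the same way at every width, mirroring the 8-, 16- and 32-bit add instructions (unsigned types are used because unsigned wraparound is well defined in C++, unlike signed overflow):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t  a = 255;         // largest 8-bit value
        std::uint16_t b = 65535;       // largest 16-bit value
        std::uint32_t c = 4294967295u; // largest 32-bit value

        // Each addition wraps around to 0 within its own width,
        // just like the corresponding add instruction would.
        a = a + 1;
        b = b + 1;
        c = c + 1;

        std::cout << static_cast<int>(a) << ' ' << b << ' ' << c << '\n'; // 0 0 0
    }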

Apart from adding with the processor's native instructions, you can also add in software. You could write an increment function that treats a big chunk of bits as a single value. To increment it, you let the processor increment the first 64 bits and store the result in the first part of your chunk. If the processor's carry flag is set, you take the next 64 bits and increment those too. This way you can work around the processor's limits and handle larger numbers in software.
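
A minimal sketch of that software approach (the BigCounter type and increment function are made up for this example, and it detects wraparound in plain C++ instead of reading the processor's flag, which standard C++ can't see directly):

    #include <array>
    #include <cstdint>
    #include <iostream>

    // A 256-bit counter stored as four 64-bit "limbs", least significant first.
    using BigCounter = std::array<std::uint64_t, 4>;

    // Increment the counter by one, propagating the carry in software.
    void increment(BigCounter& n) {
        for (std::uint64_t& limb : n) {
            ++limb;          // unsigned wraparound is well defined
            if (limb != 0) { // no wrap, so no carry: we're done
                return;
            }
            // this limb wrapped from its maximum to 0, so carry into the next one
        }
    }

    int main() {
        BigCounter n = { 0xFFFFFFFFFFFFFFFFull, 0, 0, 0 };
        increment(n); // the first limb wraps to 0, the carry increments the second
        std::cout << n[0] << ' ' << n[1] << '\n'; // prints "0 1"
    }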

And the same goes for the way an overflow is handled. The processor just sets the flag; your application decides whether to act on it or not. If you want a counter that simply counts up to 65535 and then wraps to 0, you (your program) don't need to do anything with the flag.
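
For example, such a counter is trivial with an unsigned 16-bit type, whose wraparound is well defined in C++. And if the program does want to react to an overflow, it has to check for it explicitly; one way (a GCC/Clang extension, not standard C++) is the __builtin_add_overflow builtin:

    #include <cstdint>
    #include <iostream>

    int main() {
        // A counter that just wraps: 65535 + 1 becomes 0, no flag needed.
        std::uint16_t counter = 65535;
        counter = counter + 1;        // well-defined unsigned wraparound
        std::cout << counter << '\n'; // prints 0

        // Explicitly checking for overflow with a compiler builtin
        // (available in GCC and Clang, not part of standard C++).
        std::uint16_t a = 65535, b = 1, sum;
        if (__builtin_add_overflow(a, b, &sum)) {
            std::cout << "overflowed, wrapped result = " << sum << '\n'; // sum is 0
        }
    }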

GolezTrol
  • Sorry, I didn't know that about unsigned values, and I didn't mean to ask about this particular case as an unsigned value. It was just an example to illustrate my question; I want to know the general case. Thank you for the information about unsigned values, by the way. I've edited the question to make clear that I meant a signed value in the example. – Stefan Madlo Jun 29 '15 at 19:50
  • So a signed overflow results in undefined behaviour, and it's the machine's processor that defines what happens in case of an overflow and handles it all, right? Anyway, I'll read your edit on the answer now. :) Please read my edit on the question too, and if you see anything else you can add to your answer (I haven't read your edit yet), please do. Thank you, Golez. :) – Stefan Madlo Jun 29 '15 at 19:53
  • "The processor just sets the flag. Your application can decide whether to act on it or not." How does the application act? you mean when the overflow flag is set, an exception is thrown and if the app's code catches that exception it can act there with that overflow ? I'm really sorry but I'm confused :( ... – Stefan Madlo Jun 29 '15 at 20:11
  • What made me ask this question is a sentence I read in some book, and I just want an answer to this now, please. The book says something like: "In Pascal, with a 2-byte variable, adding 32767 + 1 always gives the value -32767, and adding 32767 + 2 gives -32766, etc." Is that even right to say? Can we say "in C++, when an overflow occurs, such-and-such always happens"? This is supposed to depend on the CPU, not the language standard, and the result will differ from one machine to another, right? – Stefan Madlo Jun 29 '15 at 20:23