
Can anyone explain why round-off errors occur in general? The explanations I've found online have gone over my head, and I was wondering if anyone had an easy-to-understand explanation. I've run across this problem occasionally when programming, and I've always been stumped as to why it happens in a computer system. One thing I've noticed is that it occurs when certain numbers are evaluated with certain operations (e.g. multiplication, division, etc.).

For example, sometimes a mathematical calculation returns a number such as 14.000000001 or 45.99999999 instead of the expected value (14 or 46).
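Here's a minimal C sketch that reproduces the effect (this assumes the common IEEE 754 double format; the exact digits printed may differ slightly between platforms):

```c
#include <stdio.h>

int main(void) {
    /* 0.1 has no exact binary representation, so each addition
       accumulates a tiny rounding error. */
    double sum = 0.0;
    for (int i = 0; i < 10; i++) {
        sum += 0.1;
    }
    /* Expected 1.0; on IEEE 754 systems this typically prints
       0.99999999999999989 instead. */
    printf("%.17g\n", sum);
    return 0;
}
```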

Could it be that this round-off behaviour is caused by the programming language in use, or by some underlying factor in computer systems?

LeonDevy
  • Are you asking about integers or floating point numbers? – Jonathon Reinhart Apr 02 '20 at 23:18
  • I am talking about when you perform an operation on two integers and get a large number of decimal places, e.g. 14.00000000001 or 45.99999999. I'm new to Stack Overflow. Should I structure this question better? – LeonDevy Apr 02 '20 at 23:20
  • Some examples of these _operations_ would really help out in this question. – Phil Apr 02 '20 at 23:21
  • Does this answer your question? [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Jonathon Reinhart Apr 03 '20 at 00:12

1 Answer

Because, in general, a computer assigns a fixed amount of memory to each number it stores. Since everything in a computer works in binary digits, fractions built from powers of two (such as 0.5 or 0.25) are exactly representable, whereas something like 1/3, whose binary expansion 0.010101... repeats forever, is not.
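You can see the split between exact and inexact fractions directly (a small C check, assuming the usual IEEE 754 binary types that virtually all modern hardware uses):

```c
#include <stdio.h>

int main(void) {
    /* Sums of powers of two are exact in binary floating point... */
    printf("%d\n", 0.5 + 0.25 == 0.75);  /* prints 1: every term is exact */
    /* ...but 0.1, 0.2 and 0.3 are each rounded to the nearest double,
       and the rounding errors do not cancel out. */
    printf("%d\n", 0.1 + 0.2 == 0.3);    /* prints 0 on IEEE 754 systems */
    return 0;
}
```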

So 1/3 on a computer is not really "the thing which multiplies by 3 to give 1" as it is in maths; it is stored as something close to 0.3333333333, cut off after a fixed number of binary digits. Multiplying that stored value by 3 may therefore give something like 0.9999999999 rather than exactly 1.
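Printing more digits than the type actually holds makes the stored approximations visible (again a C sketch assuming IEEE 754 doubles; the last digits may vary by platform):

```c
#include <stdio.h>

int main(void) {
    /* Ask for 20 decimal digits, more than a double's ~16 digits of
       precision, to expose the true stored values. */
    printf("%.20f\n", 1.0 / 3.0);  /* 0.33333333333333331483 */
    printf("%.20f\n", 0.1);        /* 0.10000000000000000555 */
    return 0;
}
```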

Look up how the C programming language stores floats (typically the IEEE 754 format) if you would like to learn more about how this is actually represented. It's somewhat involved, but the principle is that only a fixed amount of space is assigned to the digits, and therefore arbitrary precision just isn't possible.
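As a starting point, you can dump the raw bit pattern of a float (a sketch; it assumes a 32-bit IEEE 754 float, which essentially every current platform uses):

```c
#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void) {
    float f = 0.1f;
    uint32_t bits;
    /* Copy the float's bytes into an integer so we can print them:
       1 sign bit, 8 exponent bits, 23 fraction bits. */
    memcpy(&bits, &f, sizeof bits);
    printf("0.1f is stored as 0x%08" PRIX32 "\n", bits);  /* 0x3DCCCCCD */
    printf("which reads back as %.20f\n", f);             /* 0.10000000149011611938 */
    return 0;
}
```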

mcindoe