Can anyone explain why round-off errors occur in general? Some of the explanations I've found online have gone over my head, and I was wondering if anyone has an easy-to-understand explanation. I run across this problem occasionally when programming, and I've always been stumped as to why it happens in a computer system. One thing I've noticed is that it occurs when certain numbers are evaluated with certain operations (e.g., multiplication, division, etc.).
For example, a calculation will sometimes return a number such as 14.000000001 or 45.99999999 instead of the expected value (14 or 46).
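Here is a small snippet that reproduces what I mean (Python, chosen just as an illustration; the same thing happens in other languages, since most use standard 64-bit binary floating-point numbers):

    total = 0.0
    for _ in range(10):
        total += 0.1      # 0.1 has no exact binary representation

    print(total)          # 0.9999999999999999, not 1.0
    print(total == 1.0)   # False

    # Printing extra digits shows the stored approximation of 0.1 itself:
    print(f"{0.1:.20f}")  # 0.10000000000000000555...

The sum accumulates a tiny error on each addition because the stored value is only an approximation of 0.1.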
Could this round-off behavior be caused by the programming language in use, or by some underlying factor in how computer systems represent numbers?