
I am facing an issue while dividing a double by an int. The code snippet is:

    double db = 10;
    int fac = 100;
    double res = db / fac;

The value of res is 0.10000000000000001 instead of 0.10.

Does anyone know the reason for this? I am using cc to compile the code.

unwind
hardcoder
    Duplicate of MANY questions. See "floating-accuracy" tag. – dan04 Jun 16 '10 at 06:34
    possible duplicate of [Why does 99.99 / 100 = 0.9998999999999999](http://stackoverflow.com/questions/2930314/why-does-99-99-100-0-9998999999999999) – abelenky Jun 16 '10 at 06:52

3 Answers


You need to read the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
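As a quick illustration of the kind of surprise that paper explains, here is a small sketch (assuming IEEE 754 doubles, which is what cc uses on essentially all current platforms):

    #include <stdio.h>

    int main(void)
    {
        /* None of 0.1, 0.2 or 0.3 is exactly representable in binary,
           so the sum of the first two is not the double closest to 0.3. */
        if (0.1 + 0.2 != 0.3)
            printf("0.1 + 0.2 is actually %.17g\n", 0.1 + 0.2);
        return 0;
    }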

alanc
sml

The CPU uses a binary representation of numbers, and your result cannot be represented exactly in binary. 0.1 in binary is 0.00011001100110011... with the 0011 pattern repeating forever. The CPU has to cut it off after a finite number of bits, which introduces a small rounding error.
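A minimal sketch of that effect, assuming IEEE 754 double precision (the variable names simply mirror the question):

    #include <stdio.h>

    int main(void)
    {
        double db = 10;
        int fac = 100;
        double res = db / fac;  /* fac is converted to double before dividing */

        /* The default of 6 significant digits hides the error; 17 digits
           are enough to show the exact double that was stored. */
        printf("default  : %g\n", res);     /* prints 0.1 */
        printf("17 digits: %.17g\n", res);  /* prints 0.10000000000000001 */
        return 0;
    }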

Rotsor

double is a floating-point type, and floating-point types cannot represent all values exactly. Look up floating-point precision on Google.
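If the inexactness matters in your code, the usual workaround is to compare against a small tolerance and to limit the digits you print. A short sketch (the 1e-9 tolerance is only an example, pick one that suits your data):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double res = 10.0 / 100;

        /* Exact equality on floating-point values is fragile;
           compare against a tolerance instead. */
        if (fabs(res - 0.1) < 1e-9)
            printf("res is close enough to 0.1\n");

        /* For display, just limit the precision. */
        printf("%.2f\n", res);  /* prints 0.10 */
        return 0;
    }

With some compilers you may need to link with -lm for fabs.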

Meiscooldude