
I found that the result of a double/double division is not what I expected:

double i = 3.3, j = 1.1;
int k = i/j;
printf("%d\n", k);

The result is 2. Why?
Debugging:

[debugger screenshot]

But:

float i = 3.3, j = 1.1;
int k = (int)(i/j);
printf("%d\n", k);

Debugging:

[debugger screenshot]
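
For reference, here is a self-contained program reproducing both cases (a minimal sketch; the printed results assume IEEE 754 floats and doubles):

#include <stdio.h>

int main(void)
{
    double di = 3.3, dj = 1.1;
    int dk = di / dj;           /* the double quotient is just below 3 */
    printf("%d\n", dk);         /* prints 2 */

    float fi = 3.3f, fj = 1.1f;
    int fk = (int)(fi / fj);    /* the float quotient happens to round to exactly 3.0f */
    printf("%d\n", fk);         /* prints 3 */

    return 0;
}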

Al2O3

4 Answers


3.3 is stored as roughly 3.2999..., so 3.2999... / 1.10 gives 2.99-and-something, which, on conversion to an integer, yields the output 2.
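
A quick way to see this (a minimal sketch, assuming a C99 printf) is to print the quotient with 17 significant digits:

#include <stdio.h>

int main(void)
{
    double q = 3.3 / 1.1;
    printf("%.17g\n", q);   /* prints 2.9999999999999996 with IEEE 754 doubles */
    return 0;
}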

zeeshan mughal

A double cannot represent 3.3 exactly; even if you write 3.3 in your code, the double will store it as 3.2999999999999998. (It cannot store 1.1 exactly either; it will be 1.1000000000000001.)

So i/j actually computes 3.2999999999999998 / 1.1000000000000001, whose result is stored as 2.9999999999999996.

Converting a double to an int truncates the value; it does not round to the nearest integer, so 2.9999999999999996 is converted to 2.

This applies just as well to C, or any language using IEEE 754 floating point, and there are many further resources on the subject.
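
If you want the nearest integer rather than truncation, one option (a sketch using the standard round() from <math.h>) is:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double i = 3.3, j = 1.1;
    int k = (int)round(i / j);  /* rounds to nearest, halfway cases away from zero */
    printf("%d\n", k);          /* prints 3 */
    return 0;
}

(On some platforms you may need to link with -lm.)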

nos
double i = 3.3, j = 1.1;
int k = i/j;

Because of the way decimal numbers are represented in memory, the result of i/j is actually 2.999999... The compiler converts the result of i/j from double to int because k is an int, and that conversion truncates. That's how you end up with a result of 2.
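
Note that the conversion truncates toward zero, which also matters for negative values; a small illustration:

#include <stdio.h>

int main(void)
{
    printf("%d\n", (int)2.9999999999999996);    /* prints 2 */
    printf("%d\n", (int)-2.9999999999999996);   /* prints -2, not -3: truncation is toward zero */
    return 0;
}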

Pandrei
    I think he knows the compiler will convert it to `int`. The question is why the resulting integer is 2 and not 3, because 3.3/1.1 = 3. – Filipe Gonçalves Jan 24 '14 at 10:03

As you know, 3.3 can't be represented exactly in binary, and the same is true for 1.1, so evaluating k = i/j computes

3.2999999999999998 / 1.1000000000000001  < 3

Assigning this to the int variable k truncates it to 2.
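
You can confirm the stored values yourself (a sketch assuming IEEE 754 doubles) by printing them with 17 significant digits:

#include <stdio.h>

int main(void)
{
    double i = 3.3, j = 1.1;
    printf("%.17g\n", i);   /* prints 3.2999999999999998 */
    printf("%.17g\n", j);   /* prints 1.1000000000000001 */
    return 0;
}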

haccks