C is hiding the problem, not removing it
The idea that the C program is more accurate is a misunderstanding of what is happening. Both have imprecise answers, but by default C's printf rounds the printed output (%f shows only six digits after the decimal point), hiding the tiny error. Were you to actually use the value in a calculation (for example num == 1), you'd find that both are inaccurate.
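For example (a minimal Java sketch, not code from the original question), summing 0.1 ten times makes the error that C's printf rounds away visible:

double num = 0.0;
for (int i = 0; i < 10; i++) {
    num += 0.1;                      // each addition carries a tiny rounding error
}
System.out.println(num);             // typically prints 0.9999999999999999
System.out.println(num == 1.0);      // false - the value is not exactly 1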
Usually it doesn't matter for well-written programs
In general, intelligently written programs can cope with this tiny error without difficulty. For example, your program can be rewritten to recompute the double from scratch each time round the loop:
double num = 0.0;
for (int i = 0; i < 10; i++) {
    num = 0.1 * i;            // recomputed from i each iteration, not accumulated
    System.out.println(num);
}
so that the error does not grow. Additionally, you should never use == with doubles, as the tiny inaccuracy can become visible there.
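A common alternative (a sketch; the 1e-9 tolerance is an arbitrary choice that depends on the magnitudes in your program) is to compare within a small tolerance instead:

double a = 0.1 * 3;                          // 0.30000000000000004
double b = 0.3;
System.out.println(a == b);                  // false - exact comparison is unreliable
System.out.println(Math.abs(a - b) < 1e-9);  // true  - equal within a tolerance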
In the rare cases where this tiny error is a problem (currency programs being the most common), BigDecimal can be used.
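A rough sketch of that approach, constructing the BigDecimal from a String so the decimal value is held exactly (new BigDecimal(0.1) would inherit the binary error):

import java.math.BigDecimal;

BigDecimal total = BigDecimal.ZERO;
for (int i = 0; i < 10; i++) {
    total = total.add(new BigDecimal("0.1"));  // "0.1" as a String is exact in decimal
}
System.out.println(total);                                  // 1.0
System.out.println(total.compareTo(BigDecimal.ONE) == 0);   // true - exactly one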
This isn't a problem with floating point numbers but with the conversion from base 10 to base 2.
There are fractions that cannot be expressed exactly in base 10, for example 1/3. Similarly, there are fractions that cannot be expressed exactly in binary, for example 1/10. It is from this perspective that you should look at the problem.
The problem in this case was that when you wrote "0.1", a base-10 number, the computer had to convert it to a binary number: 0.1 = (binary) 0.000110011001100110011001100110011... repeating forever. Because it couldn't represent that exactly in the space it had, it ended up stored as something like (binary) 0.000110011001100¹, a truncated value. A binary-friendly number (such as 1/2) would be completely accurate until the double ran out of precision digits (at which point even binary-friendly numbers couldn't be exactly represented).
¹ the number of places shown is not accurate
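If you want to see this for yourself (an illustrative sketch, not part of the original answer), BigDecimal's double constructor exposes the exact value that actually gets stored for 0.1:

import java.math.BigDecimal;

// Passing the double 0.1 (not the String "0.1") shows the exact binary value, in decimal form
System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625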