If you run the code here:
public class AddDecimalsMain {
    public static strictfp void main(String[] args) {
        System.out.print("Default: ");
        System.out.println(0.1 + 0.2);
        System.out.print("Float: ");
        System.out.println(0.1F + 0.2F);
        System.out.print("Double: ");
        System.out.println(0.1D + 0.2D);
    }
}
you will get
Default: 0.30000000000000004
Float: 0.3
Double: 0.30000000000000004
Please do not give me the solution. I already know the solution is to use float, which is giving me the right addition, or better yet BigDecimal. I just want to understand why Java's float arithmetic is more "accurate" than its default or double arithmetic in this very particular case.
Also, if I change 0.2 to 0.1, or even to 0.3, I seem to get the correct answer in all cases.
So if I add 0.1 + 0.1, I correctly get 0.2 in all cases; this weird behaviour shows up only when I add 0.1 + 0.2.
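In case it helps to see exactly what I mean, here is a small sketch (the class name ExactValuesMain is just a placeholder I made up) that prints the exact binary values the literals and sums actually hold; as far as I know, new BigDecimal(double) takes the stored binary value as-is without any decimal rounding:

import java.math.BigDecimal;

public class ExactValuesMain {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the stored binary value exactly
        System.out.println(new BigDecimal(0.1));
        System.out.println(new BigDecimal(0.2));
        System.out.println(new BigDecimal(0.1 + 0.2));
        // the float values widen to double exactly before being passed in
        System.out.println(new BigDecimal(0.1F));
        System.out.println(new BigDecimal(0.2F));
        System.out.println(new BigDecimal(0.1F + 0.2F));
    }
}

That way I can compare the exact float and double values directly, instead of relying on whatever println decides to show.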
I even tried strictfp on the method, thinking this might make things accurate, but it did not. Again, I'm trying to understand what is going on here, not how to solve it.
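For the strictfp part, here is a sketch of one way to double-check it (strictSum and plainSum are just names I made up): it compares the raw IEEE-754 bit patterns of the same sum computed with and without strictfp, and both come out the same for me, which matches what I said above.

public class StrictfpCheckMain {
    static strictfp double strictSum() {
        return 0.1 + 0.2;
    }

    static double plainSum() {
        return 0.1 + 0.2;
    }

    public static void main(String[] args) {
        // print the raw bit patterns so any difference would be visible
        System.out.println(Long.toHexString(Double.doubleToLongBits(strictSum())));
        System.out.println(Long.toHexString(Double.doubleToLongBits(plainSum())));
    }
}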
I'm as confused as a boyscout.