
I was recently making a standard deviation calculator for schoolwork in Java, but I ran into a few problems with double. Because I wanted the answer to be as precise as possible, I used double instead of int or float, but the result was strange on a few occasions. Here is an example:

public class tryTest
{
    public static void main(String[] args)
    {
        double a = 0.1;
        double b = 0.01;
        double c = a - b;
        System.out.println(c);
    }
}

The result should be 0.09, but it prints

0.09000000000000001

Why, and how should I fix this? Am I the only one with this problem?

Edit: I do realize that Math.floor(c*100)/100 would work, but I'm just confused about why double does this.
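
A minimal sketch of the display-side workarounds, assuming the goal is just a two-decimal result for printing; the Math.floor trick is the one from the edit above, and String.format is an alternative I am assuming here rather than something from the original question:

import java.util.Locale;

public class RoundingSketch
{
    public static void main(String[] args)
    {
        double c = 0.1 - 0.01;   // stored as the nearest binary fraction, not exactly 0.09

        // Workaround from the edit: scale up, floor, scale back down
        System.out.println(Math.floor(c * 100) / 100);          // prints 0.09

        // Assumed alternative: keep the full double, round only when formatting output
        System.out.println(String.format(Locale.ROOT, "%.2f", c)); // prints 0.09
    }
}

Both variants only change how the number is displayed; the underlying double still holds the closest representable binary value.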

Jalex Dai
  • Double is floating point so you're not really using it "instead" of a float. It's a [double precision float](https://en.wikipedia.org/wiki/Double-precision_floating-point_format). – Jacob H May 29 '18 at 14:04
  • 1
    For precise decimal calculation, you can consider using BigDecimal, but before you decide on that, please look at this discussions of the pros and cons. https://stackoverflow.com/questions/3413448/double-vs-bigdecimal – Dragonthoughts May 29 '18 at 14:07
  • Keep in mind that when you are manipulating decimal numbers, you can't really do comparisons like `if(a == b)`; you need to compare the two numbers with `if(Math.abs(a - b) < epsilon)`, where epsilon is the precision you want (see the sketch below). – nubinub May 29 '18 at 14:10
  • @Dragonthoughts Standard deviation, the OP's actual problem, requires a division by a number related to the sample size followed by taking a square root. Neither of those operations can be done exactly in BigDecimal for arbitrary input numbers. – Patricia Shanahan May 30 '18 at 08:24
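
A minimal sketch of the two suggestions from the comments above (BigDecimal built from String literals, and an epsilon comparison via Math.abs); the class name and the tolerance value are only illustrative choices, not part of the original discussion:

import java.math.BigDecimal;

public class DecimalSketch
{
    public static void main(String[] args)
    {
        // BigDecimal, as suggested in the comments: exact decimal arithmetic.
        // Construct from String, not from double, so the values really are 0.1 and 0.01.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.01");
        System.out.println(a.subtract(b));   // prints 0.09

        // Epsilon comparison, as suggested in the comments: never use == on doubles.
        double x = 0.1 - 0.01;
        double epsilon = 1e-9;               // tolerance chosen for this example
        boolean closeEnough = Math.abs(x - 0.09) < epsilon;
        System.out.println(closeEnough);     // prints true
    }
}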
