I was recently writing a standard deviation calculator for schoolwork in Java, and I ran into a problem with double. Because I wanted the answer to be as precise as possible, I used double instead of int or float, but the result was strange on a few occasions. Here is an example:
public class tryTest
{
    public static void main(String[] args)
    {
        double a = 0.1;
        double b = 0.01;
        double c = a - b;
        System.out.println(c);
    }
}
The output should be 0.09, but it prints
0.09000000000000001
Why, and how should I fix this? Am I the only one with this problem?
Edit: I do realize Math.floor(c*100)/100 would work here; I'm just confused about why double behaves this way.
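For reference, this is the kind of rounding workaround I mean, along with a couple of alternatives I've seen suggested (formatting the output, and BigDecimal); the class name PrecisionDemo is just for illustration:

import java.math.BigDecimal;

public class PrecisionDemo
{
    public static void main(String[] args)
    {
        double a = 0.1;
        double b = 0.01;
        double c = a - b; // prints as 0.09000000000000001

        // The workaround I mentioned: scale up, floor, scale back down
        System.out.println(Math.floor(c * 100) / 100); // 0.09

        // Formatting the printed value instead of changing the stored one
        System.out.printf("%.2f%n", c); // 0.09

        // BigDecimal with String constructors avoids binary floating point entirely
        BigDecimal exact = new BigDecimal("0.1").subtract(new BigDecimal("0.01"));
        System.out.println(exact); // 0.09
    }
}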