Possible Duplicate:
Why not use Double or Float to represent currency?
I'm writing a basic command-line program in Java for my high school course. We're only working with variables right now. It calculates the number of bills and coins of each denomination in your change after a purchase. This is my program:
class Assign2c {
    public static void main(String[] args) {
        double cost = 10.990;
        int paid = 20;
        double change = paid - cost;

        int five, toonie, loonies, quarter, dime, nickel, penny;

        five = (int)(change / 5.0);      // $5 bills
        change -= five * 5.0;
        toonie = (int)(change / 2.0);    // $2 coins
        change -= toonie * 2.0;
        loonies = (int)change;           // $1 coins
        change -= loonies;
        quarter = (int)(change / 0.25);
        change -= quarter * 0.25;
        dime = (int)(change / 0.1);
        change -= dime * 0.1;
        nickel = (int)(change / 0.05);
        change -= nickel * 0.05;
        penny = (int)(change * 100);
        change -= penny * 0.01;

        System.out.println("$5 :" + five);
        System.out.println("$2 :" + toonie);
        System.out.println("$1 :" + loonies);
        System.out.println("$0.25:" + quarter);
        System.out.println("$0.10:" + dime);
        System.out.println("$0.05:" + nickel);
        System.out.println("$0.01:" + penny);
    }
}
It should all work, but at the last step, when there is $0.01 left over, the number of pennies should be 1; instead, it's 0. After a few minutes of stepping through the code and printing the change value to the console, I found that at the last step, where change should be 0.01, it is actually 0.009999999999999787. Why is this happening?
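To narrow it down, I also ran a smaller throwaway test (the class name FloatDemo is just something I made up for the experiment, not part of the assignment), and even simple decimal values don't come out exact as doubles:

import java.math.BigDecimal;

class FloatDemo {
    public static void main(String[] args) {
        // neither 0.1 nor 0.2 has an exact binary representation,
        // so even this simple sum is already slightly off
        System.out.println(0.1 + 0.2);           // prints 0.30000000000000004

        // new BigDecimal(double) shows the exact value the double actually stores;
        // for 0.01 it is not exactly 0.01
        System.out.println(new BigDecimal(0.01));
    }
}

So it looks like the same kind of tiny error is building up across my repeated subtractions, but I don't understand why the stored value can't just be 0.01.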