So I have this code snippet:
#include <stdio.h>

int main(int argc, char *argv[])
{
    double cost;
    sscanf(argv[1], "%lf", &cost);
    double money_given;
    sscanf(argv[2], "%lf", &money_given);
    if (money_given < cost)
    {
        printf("Not enough money to cover the costs\n");
        return -1;
    }
    float change = money_given - cost; /* this is the declaration I switch between float and double */
    int quarters = change / 0.25;
    int dimes = (change - 0.25 * quarters) / 0.10;
    int nickels = (change - 0.25 * quarters - 0.1 * dimes) / 0.05;
    int pennies = (change - 0.25 * quarters - 0.1 * dimes - 0.05 * nickels) / 0.01;
    printf("%f - 0.25*%d - 0.1*%d - 0.05*%d\n", change, quarters, dimes, nickels);
    printf("%d = pennies\n", pennies);
    printf("Cost: %f, Money given: %f.\n Change: %f or\n %d quarters, %d dimes, %d nickels,"
           " %d pennies\n", cost, money_given, change, quarters, dimes, nickels, pennies);
    return 0;
}
So, in the line:

int pennies = (change - 0.25*quarters - 0.1*dimes - 0.05*nickels)/0.01;

if I declare change as a double and the change is 0.46, for example, then the output is 1 quarter, 2 dimes, 0 nickels, 0 pennies, which is wrong. It should be 1 quarter, 2 dimes, 0 nickels, 1 penny.
I get the right answer when I declare the variable as a float. Why is that? Is there a difference in the arithmetic when I use a double instead of a float?