I am going through a C++ course and was asked to write a simple cashier program that reports change in dollars and cents separately.
While doing so, I came across a case where printing the expression directly,
cout << change*100 - dollars*100 << endl;
correctly gives 40 cents.
But when I store it first with int cents = change * 100 - dollars * 100;
and then cout << cents << endl;
I get 39.
Is this because the data types I am using have some unintended consequences that I'm not aware of? Here is the whole program:
#include <iostream>
using namespace std;

int main()
{
    double price, paymentAmount, change;
    int dollars, cents;

    price = 23.00;
    paymentAmount = 24.40;
    cout << "total: " << price << endl;
    cout << "paid: " << paymentAmount << endl;

    change = paymentAmount - price;
    dollars = change;                      // implicit conversion from double -> int
    cents = change * 100 - dollars * 100;

    cout << "dollars: " << dollars << endl;
    cout << "cents: " << change*100 - dollars*100 << endl; // outputs 40
    cout << "cents: " << cents << endl;                    // outputs 39
    return 0;
}
Thanks for the help!
Edit:
Turns out this comes from converting the double to int, which truncates the fractional part instead of rounding. Because of floating-point representation, my value was actually something like 39.999999 and was truncated to 39 when converted to int; cout rounds at its default precision when printing, which is why the direct expression showed 40. I think that conversion aspect makes this a non-duplicate question.
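For anyone else who hits this, here is a small standalone sketch (not my original program, the names just mirror it) showing one way to see and avoid the truncation: print the intermediate value at full precision, then round with lround from <cmath> instead of relying on the int conversion.

#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;

int main()
{
    double price = 23.00, paymentAmount = 24.40;
    double change = paymentAmount - price;
    int dollars = change;   // truncates 1.3999... down to 1

    // Show what the int conversion actually sees: a value just below 40.
    cout << setprecision(17) << change * 100 - dollars * 100 << endl; // prints something like 39.999999999999986

    // Round to the nearest integer instead of truncating.
    int cents = static_cast<int>(lround(change * 100 - dollars * 100));
    cout << "cents: " << cents << endl; // 40
    return 0;
}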