There are at least three problems here, two of which should be easy to spot if you enable compiler warnings (the -Wall command-line option), and which lead to undefined behavior.
One is the wrong format specifier in your printf statement: you're printing a floating-point value with %d, the format specifier for signed integers. The correct specifier is %f.
The other is the use of an uninitialized value. The variable Total is left uninitialized if the if statement in your function is never entered, and reading it in that state is undefined behavior.
From my point of view, it's most likely the wrong format specifier that caused the incorrect output, but you should fix the uninitialized variable as well.
The third problem has to do with floating-point precision. Converting a value between float and double is not always a lossless round trip. Your 3.0 double constant is converted to float when passed to calculateCharges(). That value is then converted back up to double in the timeIn <= 3.0 comparison (to match the type of 3.0). That happens to work for a value like 3.0, which is exactly representable in both types, but it isn't safe in the general case. See, for example, this piece of code, which exhibits the problem.
#include <stdio.h>

#define EPI 2.71828182846314159265359

void checkDouble(double x) {
    printf("double %s\n", (x == EPI) ? "okay" : "bad");
}

void checkFloat(float x) {
    printf("float %s\n", (x == EPI) ? "okay" : "bad");
}

int main(void) {
    checkFloat(EPI);
    checkDouble(EPI);
    return 0;
}
You can see from the output that the comparison holds when the value is treated as a double throughout, but not when it is converted to float and loses precision:
float bad
double okay
Of course, the problem goes away if you make sure you always use and compare against constants of the correct type, for example by writing 3.0F when the other operand is a float.