EDIT: Problem solved, see the comment thread for the explanation. Basically, what I was printing was NOT the actual value but a rounded approximation. I wasn't aware of this behavior as I'm new to C, but upon printing the number with more decimal places it turns out it is not 2.000000 but really 1.99999999999990407673. I'll go put on my dunce hat, sit in the corner, and read K&R for the rest of the day now...
I'm at a loss for the behavior I'm seeing in this C program. I'm trying to build up a decimal-to-fraction converter, and I knew there would be some odd things to look out for when dealing with floating-point numbers, but this one doesn't seem to fit the mold of other questions I've browsed on Stack Overflow and through Google.
So, here is what is happening. I pass the number 3.642 to my function dec_to_frac. Here is the relevant portion from main():
void dec_to_frac(double dec);
int main() {
double dec_1 = 3.642;
dec_to_frac(dec_1);
I'm going about converting the decimal to a fraction by first pulling off the whole number part, using the fact that converting a double to an int truncates the fractional part.
void dec_to_frac(double dec) {
/*
 * Uses the fact that assigning a double to an int truncates the decimal portion. So,
* if 3.64 is the dec, whole will hold 3
*/
int whole = dec;
Then, I get the decimal portion of the number by subtracting the whole from the decimal passed in.
/*
* By subtracting the whole number portion from the decimal, we get the leftover
* decimal portion. Ex. if dec is 3.64, whole is 3, so decimal holds 0.64.
*/
double decimal = dec - whole;
All of that works fine. The problem arises as I loop over the decimal portion, peeling off one decimal digit per iteration by multiplying by 10. Here is the entire loop:
int i, whole_dec, temp;
whole_dec = 0;
// loops until decimal drops to PRECISION or below (PRECISION is defined at the top of the program)
for (i = 0; decimal > PRECISION; i++) {
/*
* Moves the decimal place to the right. Ex. 0.64 becomes 6.4
*/
decimal *= 10;
/*
 * temp is an int, so this assignment keeps only the whole number portion of decimal.
* Ex. if decimal is 6.4, temp will be 6.
*/
temp = decimal;
/*
* whole_dec holds the whole number representation of the decimal thus far.
 * Ex. 0.64 is decimal at the top of the loop, becomes 6.4 after decimal *= 10, temp is 6,
 * so 6 is added to whole_dec after whole_dec is multiplied by 10.
*/
whole_dec = whole_dec * 10 + temp;
/*
* temp holds the whole number portion of decimal, so this cuts off the whole number portion
* leaving only the decimal portion. Ex. if decimal is 6.4, temp will be 6, so 6.4 - 6 = 0.4, the
* decimal portion for the next loop.
*/
decimal -= temp;
}
Now, all goes according to plan until I reach the last digit of 3.642. Here are some print statements showing the progress of the for loop:
TOP LOOP
Before [decimal *= 10], decimal = 0.642000
After [decimal *= 10], decimal = 6.420000
After [temp = decimal], temp = 6
Before [decimal -= temp], decimal = 6.420000
temp = 6
After [decimal -= temp], decimal = 0.420000
TOP LOOP
Before [decimal *= 10], decimal = 0.420000
After [decimal *= 10], decimal = 4.200000
After [temp = decimal], temp = 4
Before [decimal -= temp], decimal = 4.200000
temp = 4
After [decimal -= temp], decimal = 0.200000
TOP LOOP
Before [decimal *= 10], decimal = 0.200000
After [decimal *= 10], decimal = 2.000000
After [temp = decimal], temp = 1
Before [decimal -= temp], decimal = 2.000000
temp = 1
After [decimal -= temp], decimal = 1.000000
Notice in the very last print block that decimal prints as 2.000000, yet when the following assignment runs, temp ends up as 1:
temp = decimal;
What is going on here? I tested this in main with constant values and the result was as expected:
int main() {
double dec = 2.000000;
int temp = dec;
printf("%d\n", temp); // temp is 2 here, which I would expect
I can't understand why assigning 2.000000 to an int produces 1 inside my for loop but 2 in main! What am I missing?