
I'm somewhat new to programming, only having taken one course on Python at university and now working my way through Harvard's CS50 OpenCourseware, so please bear with me.

This code compiles fine, no errors, etc. The program is meant to take a user input amount of change and use a simple greedy algorithm to return the fewest number of each U.S. coin it would take to represent that change. Simple enough; however, what's killing me here is that, for some reason, it doesn't always count the pennies.

If I were to enter ".41", I would get "1 Quarters, 1 Dimes, 1 Nickels, and 0 Pennies"; entering ".42" yields "1 Quarters, 1 Dimes, 1 Nickels, and 1 Pennies"; but oddly ".43" yields the correct "1 Quarters, 1 Dimes, 1 Nickels, and 3 Pennies".

It's the inconsistency of this bug that's making it so hard for me to track down. I keep walking through the code in my head with different inputs, trying to pinpoint the problem, but it's been a futile effort.

What am I doing wrong?

#include <stdio.h>
#include <cs50.h>

int main(void)
{
    printf("How much change is owed? ");
    float change = GetFloat();
    int quarters = 0;
    int dimes = 0;
    int nickels = 0;
    int pennies = 0;

    float coinArray[4] = {.25, .10, .05, .01};
    int coinNames[4] = {quarters, dimes, nickels, pennies};

    int counter(float coinArray);
        {
        int x;
        for(x = 0; x < 4; x++)
            {
            while (change >= coinArray[x])
                {
                change = change - coinArray[x];
                coinNames[x]++;
                }
            }

    }
    printf("%d Quarters, %d Dimes, %d Nickels, and %d Pennies\n",
           coinNames[0], coinNames[1], coinNames[2], coinNames[3]);
}
  • you're dealing with floats. `.41` is `0.409999999999345234523423423` or whatever. – Marc B Aug 25 '14 at 18:49
  • possible duplicate of [Why Are Floating Point Numbers Inaccurate?](http://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate) – Patrick Collins Aug 25 '14 at 18:51
  • And the fix is to compute in whole pennies. – Peter - Reinstate Monica Aug 25 '14 at 18:51
  • Along with the given answer about floating point inaccuracies, there is also this line: `int coinNames[4] = {quarters, dimes, nickels, pennies};`, which is defined as an array of 4 integers. What it should be is `int *coinNames[] = {&quarters, &dimes, &nickels, &pennies};`, and this: `coinNames[x]++;` should be `(*coinNames[x])++;` – user3629249 Aug 26 '14 at 03:30

2 Answers


It's the simple fact that your computer cannot represent 0.2 exactly. Mathematically it's just 1/5, but your computer can't represent that as a float, in the same way that you can't represent 1/3 as a decimal number. 0.41 is just 41/100, which is a fraction in lowest terms with two factors of five in the denominator: there is no chance of representing it exactly as a binary float.

The only way to handle currency accurately is to calculate in terms of cents, not dollars. That way you use integers (`int`, or better `long long`) instead of `float`, and never get round-off errors.

cmaster - reinstate monica
  • Note for OP: the word for this sort of a decimal scheme is "fixed point" (as opposed to floating point). Integral values have a small range and perfect precision. Floating point ones have a large range and low precision. Fixed point ones have an even smaller range, but perfect precision for arbitrary fractions. – Wug Aug 25 '14 at 19:07
  • "The only way to handle currency accurately [is to calculate] in terms of cents, not dollars." is too sweeping. Certainly integer math is good for learners. But calculating interest rates, tax tables, and mortgage payments all invoke FP math. Good financial programs can be had with binary FP, but the pitfalls are many. Great financial programs use decimal [FP math](http://en.wikipedia.org/wiki/Decimal64). – chux - Reinstate Monica Aug 25 '14 at 19:14
  • @chux Well, to be precise: You only need to multiply the dollar amount by 25 (or any sane multiple, 100 for instance), then you can use floating point arithmetic without loss of precision. As a matter of fact, I didn't say that you can't use floating point datatypes on purpose: integers are perfectly representable in floats (up to 24 bit for `float`, 53 bit for `double`). As to those financial programs: they have to deal with the fact that laws specify rounding on decimal places. And it's those laws that are the spec for those programs, they need to be followed whether it makes sense or not. – cmaster - reinstate monica Aug 26 '14 at 20:28
  • Note: I would say a typical `float` could represent a 25 bit `signed int`, such as `int25_t`. – chux - Reinstate Monica Aug 26 '14 at 20:55
  • @chux IEEE float is 1 sign bit, 8 exponent bit, and 23 mantissa bits. This allows a precision of 24 bits: the 23 mantissa bits + one hidden bit. The first unrepresentable integer is `2^24 + 1`. – cmaster - reinstate monica Aug 27 '14 at 15:15
  • A 2's complement `int25_t` would have an expected range of `-2^24` to `2^24 - 1` or `2^25` different numbers. `2^24 + 1` is also not in an `int25_t` range. `int25_t` maps within the integer range of [binary32](http://en.wikipedia.org/wiki/Single_precision_floating-point_format) – chux - Reinstate Monica Aug 27 '14 at 15:24
  • @chux Ah, yes, now I see what you mean. Yes, you are absolutely right. I completely forgot the sign bit... – cmaster - reinstate monica Aug 27 '14 at 15:48

If the code stays with binary FP, all computation must account for the fact that the results of operations like `+ - * / >= ==` may be close to, but not exactly, what you would get doing the same math by hand.

Changing `change >= coinArray[x]` to `(change - coinArray[x]) > -0.005` will likely do the trick here.

Still, the recommendation is to use integer math for simple money programming tasks: count cents rather than dollars.

FP math with money, though doable, is more advanced. Further, decimal FP is available on some platforms.

chux - Reinstate Monica