
I wanted to understand the accuracy issues with storing 'currency' as a float. I understand the theory (to a good extent), but I wanted a concrete example that I could demonstrate to my colleagues.

I tried the following examples:

1) C# port of an example from a Medium article

static void Main(string[] args)
{
    double total = 0.2;
    for (int i = 0; i < 100; i++)
    {
        total += 0.2;
    }
    Console.WriteLine("total = " + total); //Output is exactly 20.2 in both debug and run (release config) mode
    Console.ReadLine(); 
}

2) Jon Skeet's example from C# in Depth

using System;

class Test
{
    static float f;

    static void Main(string[] args)
    {
        f = Sum (0.1f, 0.2f);
        float g = Sum (0.1f, 0.2f);
        Console.WriteLine (f==g); //Output is always True in both debug and release builds
    }

    static float Sum (float f1, float f2)
    {
        return f1+f2;
    }
}

The examples were run on .NET Framework 4.7.2 on Windows 11. But as you can see in the comments next to the Console.WriteLine calls, I couldn't reproduce the accuracy issues with the floating-point types. What am I missing here?

Can I get some concrete examples to prove the theory in .NET?

Arctic
  • Re “//Output is exactly 20.2 in both debug and run (release config) mode”: No, the output was “20.2”, not 20.2. The output was a string of characters, not a number. That is important to keep in mind, because the actual number was 20.199999999999999289457264239899814128875732421875 or something similar. To work with floating-point, you should know that a lot of software does not show you the true value, and you need to know the true value to truly understand what is happening. The software converts the actual value to the characters “20.2” and outputs those. – Eric Postpischil Sep 10 '22 at 23:39
  • Re “examples to prove the theory in .NET?”: What “theory” are you trying to prove? Exactly what do you want an example to show? – Eric Postpischil Sep 10 '22 at 23:41
  • @EricPostpischil Thank you. Your first comment gave me a good hint. For the theory part, I was hoping for an example in C#/.NET showing that using 'float'/'double' to store monetary values is not a good idea. But when I tried to demonstrate it to my colleague using the examples in my question, I couldn't prove my point. You already answered this with the difference between "20.2" and 20.2 (the sketch just after these comments shows it in code). – Arctic Sep 11 '22 at 03:43
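
To make Eric's point concrete in C#, here is a minimal sketch (assuming .NET Framework 4.7.2, whose default double formatting rounds to 15 significant digits; newer .NET versions print the full round-trip digits by default):

using System;

class DisplayVersusStoredValue
{
    static void Main()
    {
        double total = 0.2;
        for (int i = 0; i < 100; i++)
        {
            total += 0.2;
        }

        Console.WriteLine(total);                 // default formatting rounds for display: "20.2"
        Console.WriteLine(total.ToString("G17")); // round-trip digits: the stored value is not exactly 20.2

        double sum = 0.1 + 0.2;                   // a case where the hidden error is well known
        Console.WriteLine(sum.ToString("G17"));   // 0.30000000000000004
        Console.WriteLine(sum == 0.3);            // False

        Console.ReadLine();
    }
}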

2 Answers


As for an example, try adding up 1.0/N a total of N times, where N is 3, 7, 11, 13, etc. (some prime other than 2 or 5). Print the sum with enough output to distinguish its value: use hexadecimal output, or typically 17 significant decimal digits.
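
A sketch of that experiment in C# (assuming double and "G17" formatting; BitConverter.DoubleToInt64Bits stands in for hexadecimal floating-point output, which C# format strings do not provide directly):

using System;

class SumOfReciprocals
{
    static void Main()
    {
        foreach (int n in new[] { 3, 7, 11, 13 })
        {
            double term = 1.0 / n;   // rounded to the nearest double
            double sum = 0.0;
            for (int i = 0; i < n; i++)
            {
                sum += term;
            }

            // The exact mathematical result is 1; whether the rounded sum lands back on
            // exactly 1.0 depends on how the individual rounding errors accumulate.
            Console.WriteLine("N={0}  sum={1}  bits={2:X16}  sum==1.0: {3}",
                n, sum.ToString("G17"), BitConverter.DoubleToInt64Bits(sum), sum == 1.0);
        }
    }
}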


The issue you want to demonstrate is not special to money; the same issue applies to all floating-point math. Used incorrectly, FP fails for money just as it fails for other applications. Used correctly, FP works for money just as it works for other applications.

With FP and money, use 1.0 not to represent a major unit of currency, but to represent a decimal fraction of the smallest unit. float lacks precision for this scaling. Use double.

Example: 1.0 may represent 1¢ instead of $1, or maybe 1/100 of a cent, depending on the coding requirements of the application.
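
A hedged sketch of that scaling in C#, with 1.0 standing for one cent (the amounts and tax rate are made up for illustration):

using System;

class ScaledCents
{
    static void Main()
    {
        // $19.99 is held as 1999.0 cents. Whole-cent amounts are integers, and a double
        // represents integers exactly up to 2^53, so sums of whole cents stay exact.
        double priceCents = 1999.0;
        double taxCents = Math.Round(priceCents * 0.0825);  // round the tax to a whole cent
        double totalCents = priceCents + taxCents;

        Console.WriteLine("total = {0} cents (${1})", totalCents, totalCents / 100);
    }
}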

Instead of a comparison like f==g, code typically needs to round to the nearest unit first, then compare.
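
A sketch of rounding before comparing, assuming amounts scaled to cents as above (SameAmount is an illustrative helper, not a library method):

using System;

class RoundThenCompare
{
    // Compare whole cents, not raw doubles.
    static bool SameAmount(double centsA, double centsB)
    {
        return Math.Round(centsA) == Math.Round(centsB);
    }

    static void Main()
    {
        double f = 0.1 * 3 * 100;   // intended to be 30 cents, but carries binary rounding error
        double g = 30.0;

        Console.WriteLine(f == g);            // may well be False
        Console.WriteLine(SameAmount(f, g));  // True once both are rounded to whole cents
    }
}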

If the language supports decimal floating point, this is easier than with binary floating point, but either works. Binary floating point simply takes more care to meet the exacting decimal requirements of money.
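
C# is one of the languages with a decimal floating-point type, System.Decimal, so here is a sketch of the contrast (reusing the loop from the question):

using System;

class DecimalVersusDouble
{
    static void Main()
    {
        // decimal stores base-10 digits, so 0.2m is exact and the sum really is 20.2.
        decimal total = 0.2m;
        for (int i = 0; i < 100; i++)
        {
            total += 0.2m;
        }
        Console.WriteLine(total);            // 20.2
        Console.WriteLine(total == 20.2m);   // True

        // The binary version needs the care described above (scaling, rounding) instead.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);         // False
    }
}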


Alternatively, code could encode money with an integer type like long long, again not in units of $1 but in cents or fractions of a cent. Yet that has its own problems, like overflow and truncated division.
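
And a sketch of the integer alternative in C#, with long holding whole cents (the split-three-ways example is illustrative):

using System;

class IntegerCents
{
    static void Main()
    {
        long totalCents = 10000;           // $100.00 as whole cents
        long perPerson = totalCents / 3;   // 3333 -- integer division truncates
        long leftOver = totalCents % 3;    // 1 cent has to be allocated explicitly

        Console.WriteLine("each: {0} cents, left over: {1} cent(s)", perPerson, leftOver);

        // Overflow is the other hazard; 'checked' turns silent wraparound into an exception.
        checked
        {
            long yearly = totalCents * 12;
            Console.WriteLine("yearly: {0} cents", yearly);
        }
    }
}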

See also money/currency representation.

chux - Reinstate Monica
  • Re “The same issue applies to all floating point math”: This is not a floating-point issue. It is a fixed-width precision issue. Proof: Suppose the significand could represent any real number in [1, 2), but the format were still floating-point, being scaled by some 2^e. Then the sum you propose would get the correct result, 1. Conversely, if the significand cannot represent any real number, then there are some sums the format cannot calculate correctly, regardless of whether there is floating-point scaling or not. Therefore, it is the limited precision that is the issue, not floating-point. – Eric Postpischil Sep 11 '22 at 23:01
  • As a real-life example, Quicken uses fixed-point decimal but still showed rounding errors after a 7:1 Apple split, when it could not reconcile a sale of 100 shares with the number of pre-split shares. (No finite number of digits would have given it 14.285714… shares.) There was no floating-point here. It is the fixed-precision significand that is the problem, not floating-point (the sketch after these comments shows the same effect with C#'s decimal). – Eric Postpischil Sep 11 '22 at 23:01
  • @EricPostpischil Do you see C# using a floating point encoding that is not fixed-precision for normal numbers? – chux - Reinstate Monica Sep 11 '22 at 23:15
  • I am not pointing out a solution; I am pointing out where the problem is so that people will understand it and be able to analyze its behavior. E.g., if somebody is told floating-point is the problem, they might try switching to fixed-point. But that will not fix the problem. They need to understand the precision is an issue and know they need to design whatever solution will work in their situation. – Eric Postpischil Sep 11 '22 at 23:23
  • @EricPostpischil "they might try switching to fixed-point." --> Does C# support standard fixed-point non-integer types? – chux - Reinstate Monica Sep 12 '22 at 02:56

Here is an example:

Mac_3.2.57$cat floatFail.c
#include <stdio.h>

int main(void){
    float a = 0.1;
    float b = 1000000;

    printf("b=%100.100f\n", b);
    printf("a=%100.100f\n", a);
    printf("a+b=%100.100f\n", (b+a));
    printf("(b+a)*100%100.100f\n", (b+a)*100);
    printf("b*100=%100.100f\n", b*100);
    printf("100*a=%100.100f\n", 100*a);
    printf("(b+a)*100 - b*100=%100.100f\n", (b+a)*100 - b*100);

    return(0);
}
Mac_3.2.57$cc floatFail.c
Mac_3.2.57$./a.out 
b=1000000.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
a=0.1000000014901161193847656250000000000000000000000000000000000000000000000000000000000000000000000000
a+b=1000000.1250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
(b+a)*100=100000016.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
b*100=100000000.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
100*a=10.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
(b+a)*100 - b*100=16.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Mac_3.2.57$
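
For reference, a rough C# port of the same experiment (a sketch: float is System.Single here, and the comments about the printed values assume a current 64-bit .NET JIT, which evaluates float arithmetic in single precision; older x86 JITs that kept intermediates in 80-bit x87 registers could print different results, which is the effect Jon Skeet's example probes):

using System;

class FloatFail
{
    static void Main()
    {
        float a = 0.1f;       // stored as roughly 0.100000001
        float b = 1000000f;

        Console.WriteLine("a                 = {0}", a.ToString("G9"));
        Console.WriteLine("b + a             = {0}", (b + a).ToString("G9"));
        Console.WriteLine("(b + a)*100       = {0}", ((b + a) * 100).ToString("G9"));
        Console.WriteLine("b*100             = {0}", (b * 100).ToString("G9"));

        // Mathematically this is 10; with float precision it typically prints 16.
        Console.WriteLine("(b+a)*100 - b*100 = {0}", ((b + a) * 100 - b * 100).ToString("G9"));
    }
}
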
Andrew
  • Thanks, but I was hoping for an answer in C#/.NET – Arctic Sep 11 '22 at 03:45
  • I gave up on MS products years ago--the principles still apply. See https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html; just above the Summary section there is a diagram that helps explain the main issues and their workaround. – Andrew Sep 11 '22 at 09:17