I wanted to understand the accuracy issues that arise when storing currency values as a float. I understand the theory (to a good extent), but I wanted a concrete example that I can demonstrate to my colleagues.
I tried the following two examples:
1) A C# port of an example from a Medium article
static void Main(string[] args)
{
    double total = 0.2;
    for (int i = 0; i < 100; i++)
    {
        total += 0.2;
    }
    Console.WriteLine("total = " + total); // Output is exactly 20.2 in both debug and run (Release configuration)
    Console.ReadLine();
}
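As a follow-up to example 1, I also wondered whether the default double.ToString() might simply be rounding the displayed value. Would forcing a round-trip representation, as in the sketch below, be a valid way to check that? (The "G17" format specifier is my assumption based on the numeric formatting documentation; I have not verified what it actually prints here.)

using System;

class Example1Check
{
    static void Main()
    {
        double total = 0.2;
        for (int i = 0; i < 100; i++)
        {
            total += 0.2;
        }
        Console.WriteLine(total.ToString("G17")); // show up to 17 significant digits of the stored value
        Console.WriteLine(total == 20.2);         // compare directly against the double literal 20.2
    }
}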
2) Jon Skeet's example from C# in Depth
using System;

class Test
{
    static float f;

    static void Main(string[] args)
    {
        f = Sum(0.1f, 0.2f);
        float g = Sum(0.1f, 0.2f);
        Console.WriteLine(f == g); // Output is always True in both run and debug (Release mode)
    }

    static float Sum(float f1, float f2)
    {
        return f1 + f2;
    }
}
The examples were run on .NET Framework 4.7.2 on Windows 11. But as you can see from the comments next to the Console.WriteLine calls, I could not reproduce the accuracy issues with either float or double. What am I missing here?
Can I get some concrete examples that demonstrate the problem in .NET?
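For reference, the kind of side-by-side demonstration I have in mind is something like the sketch below, which accumulates a one-cent value in both double and decimal (the 10,000 iteration count and the "G17" round-trip format are my own choices, and I have not confirmed what this prints on .NET Framework 4.7.2):

using System;

class CurrencyAccumulation
{
    static void Main()
    {
        double doubleTotal = 0.0;    // binary floating point
        decimal decimalTotal = 0.0m; // base-10 type usually recommended for money

        // Add one cent 10,000 times; the mathematically exact total is 100.00.
        for (int i = 0; i < 10000; i++)
        {
            doubleTotal += 0.01;
            decimalTotal += 0.01m;
        }

        Console.WriteLine(doubleTotal.ToString("G17")); // round-trip view of the double total
        Console.WriteLine(decimalTotal);                // the decimal total
        Console.WriteLine(doubleTotal == 100.0);        // is the double total exactly 100?
        Console.WriteLine(decimalTotal == 100m);        // is the decimal total exactly 100?
    }
}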