If the numbers are being input by the user in base-10, and the result of the calculation is being displayed to the user in base-10, then you should store the values in Decimal variables rather than in Single or Double variables. Decimal stores numbers in a base-10 (decimal) floating point format, whereas Single and Double store numbers in a base-2 (binary) floating point format. So, if both the input and the output are base-10, a base-10 floating point variable will give you results that match your expectations far more often.
' Outputs "0.0799999999999996" because the literals are Double by default
Console.WriteLine(3.28 - 3.2)
' Outputs "0.08" because the D suffix forces the compiler to interpret the literals as Decimal
Console.WriteLine(3.28D - 3.2D) ' Outputs
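To keep everything in base-10 from end to end, you can parse the user's text directly into Decimal so the value never passes through a binary format. Here's a minimal sketch (the variable names are just for illustration):
' Hypothetical example: user input parsed straight into Decimal,
' so the arithmetic stays in base-10 from input to output.
Dim price As Decimal = Decimal.Parse(Console.ReadLine())
Dim discount As Decimal = Decimal.Parse(Console.ReadLine())
Console.WriteLine(price - discount)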
Floating point types are always imprecise to some degree, since it's impossible to represent a value exactly in a finite digital format when its expansion repeats forever: at some point you have to stop the repetition and round it off. The problem is that which numbers repeat infinitely differs depending on which base you are using, so by switching between bases you can cause a value to be rounded in ways you weren't necessarily expecting. So, for instance, even with Decimal you can still get imprecise results, but at least they are rounded in the way that you expect them to be rounded when doing base-10 math:
' Outputs "0.6666666666666666666666666667"
Console.WriteLine(2D / 3D)
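Conversely, a value like 0.1, which terminates in base-10, repeats forever in base-2, so the binary types accumulate drift that Decimal avoids. A quick sketch (the exact digits printed for the Double depend on the runtime's formatting):
' Summing 0.1 ten times: 0.1 has no exact binary representation,
' so the Double total drifts, while the Decimal total stays exact.
Dim dblSum As Double = 0
Dim decSum As Decimal = 0
For i As Integer = 1 To 10
    dblSum += 0.1
    decSum += 0.1D
Next
Console.WriteLine(dblSum) ' Outputs "0.9999999999999999" on modern .NET
Console.WriteLine(decSum) ' Outputs "1.0"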