A float holds some "hidden" precision that is not shown by the default formatting. Try printing invoice.total.ToString("R"), and you will probably see that it is not exactly 36000.

Alternatively, this can be a result of your runtime choosing a "broader" storage location, like a 64-bit or 80-bit CPU register or similar, for the intermediate result invoice.total * 0.08f.
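For instance, here is a minimal sketch of how to see those hidden digits (a plain local float stands in for invoice.total; the exact digit strings vary a bit between runtimes):

    float r = 0.08f;
    // "R" asks for a round-trippable string, but for a float that can
    // still come out as just "0.08"; widening to double first exposes
    // the value that is actually stored:
    Console.WriteLine(r.ToString("R"));
    Console.WriteLine(((double)r).ToString("R")); // 0.0799999982118606...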
EDIT: You can throw away the effects arising from the runtime choosing a too-wide storage location by changing (int)(invoice.total * 0.08f) into (int)(float)(invoice.total * 0.08f). The extra cast, from float to float (sic!), looks like a no-op, but it does force the runtime to round and throw away that unwanted extra precision. This is poorly documented. [Will provide reference.] A related thread you might want to read: Are floating-point numbers consistent in C#? Can they be?
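A minimal sketch of the trick (compile for x86 for the best chance of seeing a difference; on many runtimes both lines print 2880, because the JIT rounds the intermediate result anyway):

    float total = 36000f;                           // stands in for invoice.total
    Console.WriteLine((int)(total * 0.08f));        // 2879 if the product is kept
                                                    // in a wide CPU register
    Console.WriteLine((int)(float)(total * 0.08f)); // the extra cast forces a round
                                                    // to 32-bit float: 2880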
Your example is actually archetypal, so I have decided to go into a bit more detail. This stuff is well described in the section Differences Among IEEE 754 Implementations, which is written as an addendum (by an anonymous author) to David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. So suppose we have this code:
    static int SO_24548957_I()
    {
        float t = 36000f;        // exactly representable
        float r = 0.08f;         // this is not representable; rounded
        float temporary = t * r;
        int v = (int)temporary;
        return v;                // always(?) 2880
    }
Everything seems fine, but we decide to refactor the temporary variable away, so we write:
    static int SO_24548957_II()
    {
        float t = 36000f;     // exactly representable
        float r = 0.08f;      // this is not representable; rounded
        int v = (int)(t * r);
        return v;             // could be 2880 or 2879 depending on strange things
    }
and Bang! the behavior of our program changes. You can see the change on most systems (at least on mine!) if you compile for platform x86 (or Any CPU with Prefer 32-bit selected). Optimizations or not (Release or Debug mode) could be relevant in theory, and the hardware architecture is certainly important too.
It is a complete surprise to many that both 2880 and 2879 can be correct answers on IEEE-754-compliant systems, but read the link I gave.
To elaborate on what is meant by "not representable", let us see what the C# compiler must do when it encounters the literal 0.08f. Because of the way float (32-bit binary floating point) works, it has to choose between

    10737418 / 2**27 == 0.079 999 998 2...

and

    10737419 / 2**27 == 0.080 000 005 6...

where ** means exponentiation (i.e. "to the power of"). Since the first one is nearer to the desired mathematical value, that is the one it must choose, so the actual value is a bit smaller than the desired one. Now, when we do the multiplication and want to store the result in a Single again, the multiplication algorithm must also round, to yield the representable value closest to the exact "mathematical" product of the (actual) factors 36000 and 0.0799999982.... In this case we are lucky: the nearest Single is exactly 2880, so the multiplication in our case involves a round-up to this value.
Therefore the first code example above gives 2880.
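If you want to check these numbers yourself, you can pick the stored significand and exponent of 0.08f apart bit by bit (a sketch; BitConverter.SingleToInt32Bits needs .NET Core 2.0 or later, while BitConverter.ToInt32(BitConverter.GetBytes(r), 0) works everywhere):

    float r = 0.08f;
    int bits = BitConverter.SingleToInt32Bits(r);
    int significand = (bits & 0x7FFFFF) | 0x800000;  // add back the implicit leading 1
    int exponent = ((bits >> 23) & 0xFF) - 127 - 23; // 123 - 127 - 23 == -27
    Console.WriteLine("{0} / 2**{1}", significand, -exponent); // 10737418 / 2**27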
However, in the second code example above, the multiplication might be done (at the choice of the runtime; we cannot really help that) in CPU hardware that handles many bits (typically 64 or 80). In that case the product of two 32-bit floats, like ours, can be computed without any rounding of the end result, because 64 or 80 bits are more than enough to hold the full product. This product is clearly smaller than 2880, since 0.0799999982... is less than 0.08, and the (int) cast truncates toward zero.

Therefore the second code example above could return 2879.
For comparison, this code:
    static int SO_24548957_III()
    {
        float t = 36000f;  // exactly representable
        float r = 0.08f;   // this is not representable; rounded
        double temporary = t * (double)r;
        int v = (int)temporary;
        return v;          // always(?) 2879
    }
always gives 2879, because we explicitly tell the compiler to convert the Single to a Double, which just means appending a bunch of binary zeroes. The double product 2879.9999... then stays below 2880, so we get the 2879 case with certainty.
Lessons learned: (1) With binary floating point, factoring out a sub-expression to a temporary variable might change the result. (2) With binary floating point, C# compiler settings like x86 vs. x64 might change the result.
Of course, as everybody says everywhere, do not use float or double for monetary applications; use decimal there.
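For the invoice in the question, that would look something like this (a sketch; the names are made up):

    decimal total = 36000m;
    decimal tax = total * 0.08m;  // decimal arithmetic is base 10, so this
                                  // is exactly 2880.00m - nothing to round
    int v = (int)tax;             // 2880 on every platform and build setting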