Floats are an approximation: a float has only 4 bytes (with a 24-bit significand), while a long has 8 bytes, so you should expect to lose information when you convert a value to a type with less precision. On top of that, float is a base-2 floating-point format, so it can only represent every consecutive integer up to 2^24; larger integers get rounded to the nearest representable float. (Note that long is also stored in binary; it is decimal, not long, that uses base-10 digits internally.)
You will get better results with double or decimal. As a general rule I use decimal for discrete values that must keep their exact value to a specific number of decimal places 100% of the time, for instance monetary values on invoices and transactions. Many other measurements are acceptable to store and process as double.
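As a quick illustration of why decimal suits monetary values (a sketch I've added, not part of the original post): repeatedly adding 0.1 drifts in double because 0.1 has no exact base-2 representation, while decimal stays exact because it stores base-10 digits.

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        double dSum = 0d;
        decimal mSum = 0m;
        for (int i = 0; i < 10; i++)
        {
            dSum += 0.1;   // each 0.1 is already a rounded binary approximation
            mSum += 0.1m;  // 0.1m is stored exactly in base-10
        }
        Console.WriteLine(dSum == 1.0);   // False: dSum is 0.9999999999999999
        Console.WriteLine(mSum == 1.0m);  // True
    }
}
```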
The key take-away is that double is better for an unspecified number of decimal places, whereas decimal is suited for implementations that have a fixed number of decimal places. Both of these concepts can lead to rounding errors at different points in your logic: decimal forces you to deal with rounding deliberately up front, while double allows you to defer management of rounding until you need to display the value.
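To make the "round up front vs. round at display time" distinction concrete, here is a small sketch (the rounding mode and format string are my choices, not from the original post):

```csharp
using System;
using System.Globalization;

class RoundingDemo
{
    static void Main()
    {
        // decimal: round deliberately, at the point in your logic you choose
        decimal price = 10m / 3m;   // 3.3333333333333333333333333333
        decimal invoiced = Math.Round(price, 2, MidpointRounding.AwayFromZero);
        Console.WriteLine(invoiced);            // 3.33

        // double: carry full precision and round only when displaying
        double ratio = 10d / 3d;    // 3.3333333333333335
        string shown = ratio.ToString("F2", CultureInfo.InvariantCulture);
        Console.WriteLine(shown);               // 3.33
    }
}
```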
long x = 231021578;
float y = x;
double z = x;
decimal m = x;
Console.WriteLine("long: {0}", x);
Console.WriteLine("float: {0}", y);
Console.WriteLine("double: {0}", z);
Console.WriteLine("decimal: {0}", m);
Results:
long: 231021578
float: 2.310216E+08
double: 231021578
decimal: 231021578
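The float result differs because 231021578 is larger than 2^24 (16777216), the last point at which float can still represent every consecutive integer; beyond it, values snap to the nearest representable float. A round-trip sketch (my addition, assuming the same value as above):

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        long x = 231021578;
        Console.WriteLine((long)(float)x);    // nearest representable float, not x
        Console.WriteLine((long)(double)x);   // 231021578: exact, since x < 2^53
        Console.WriteLine((long)(float)16777216); // 16777216: still exact at 2^24
        Console.WriteLine((long)(float)16777217); // 16777216: first integer float cannot hold
    }
}
```

double, with its 53-bit significand, represents every integer up to 2^53 exactly, which is why it round-trips this value without loss.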
It's out of scope for this post, but there was a healthy discussion related to this 10 years ago: Why is the data stored in a Float datatype considered to be an approximate value?