During which of the following conversions might there be a loss of precision?
a. Converting an int to a float
int has 32 bits of precision, used entirely for the non-fractional portion of a number. float also has 32 bits, but some of those bits are used for the sign and the exponent, leaving only 24 significant bits for the mantissa. Thus, there are integers that can be represented exactly as int but not as float.
For example:
int i = int.MaxValue; // = 2147483647
float f = (float)i; // = 2147484000 --> lost last three digits!
b. Converting a float to a long
I hope it's easy for you to see how conversions from floating point to integer types can lose precision: any fractional part of the value is simply discarded.
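For example (a quick sketch; the explicit cast truncates toward zero):
float f = 1234.5678f;
long n = (long)f; // = 1234 --> the fractional part is discarded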
c. Converting a long to an int
That your book did not mention this as a problem indicates that it's distinguishing loss of precision from loss of accuracy (just FYI).
In C#, long is 64 bits while int is only 32 bits. Any number larger than int.MaxValue (i.e. 2147483647) cannot be represented by int, but many such numbers can be represented by long. Converting any such number from long to int will result in a loss of accuracy, i.e. the resulting number is not even close to the correct number.
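For example (a sketch; explicit numeric casts are unchecked by default, so only the low 32 bits survive):
long big = 5000000000L; // larger than int.MaxValue
int truncated = (int)big; // = 705032704 --> nowhere near the original value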
Arguably it has also lost precision, but I guess your book is focusing on situations where only precision is lost?
d. Converting an int to a long
I assume you understand why this doesn't lose precision (or accuracy).
e. Converting a float to a decimal
f. Converting a decimal to a float
Why are both float to decimal and decimal to float a possible loss of precision conversion..?
decimal to float is a little easier to explain: the reason we even use decimal in the first place is that the normal float numeric type simply can't represent certain values that decimal can, values which matter for common kinds of calculations (e.g. monetary computations); and even for values that decimal can't represent exactly, it can at least represent them more closely than float can.
For example:
float f = 1f / 3;
decimal d = 1M / 3;
float f2 = (float)d;
In the above, the variable f will have the value 0.333333343, while the variable d will have the value 0.3333333333333333333333333333. Neither is exact (since 1/3 is a repeating decimal, it can't be represented exactly in any finite number of digits), but the decimal version is a lot closer to the correct answer. Similarly, converting from decimal back to float results in a loss of precision: the variable f2 winds up with the same value that f did, 0.333333343.
But what about conversion from float to decimal?
Well, while decimal has more significant digits of precision than float does (about 28 vs 7), the float type supports numbers with magnitude as small as 1.401298E-45 (i.e. float.Epsilon). But decimal can't represent numbers with such small magnitude (it can only represent magnitudes down to about 1E-28). Converting float numbers with magnitudes smaller than what decimal can represent will result in a value of 0, losing those significant decimal digits.
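For example (a sketch using float.Epsilon, mentioned above):
float tiny = float.Epsilon; // 1.401298E-45, the smallest positive float
decimal dTiny = (decimal)tiny; // = 0 --> smaller than anything decimal can represent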
Put another way: while float only has 7 digits of precision and decimal has 28, float can move those digits much farther to the right of the decimal point than decimal can, allowing for smaller-magnitude numbers in float than decimal can represent.
Thus, it is possible to lose precision going either way, and that's why C# offers no implicit conversion between float and decimal in either direction: since information can be lost, the conversion has to be requested explicitly with a cast.
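To illustrate which of these conversions the compiler allows implicitly (the commented-out lines would be compile errors):
int i = 123;
long l = i; // implicit: every int value fits in a long
// int j = l; // won't compile: long -> int requires an explicit cast
decimal m = 0.1m;
// float f = m; // won't compile: decimal -> float requires an explicit cast
float f = (float)m; // explicit cast required
decimal m2 = (decimal)f; // float -> decimal must also be explicit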
See also:
Explicit Numeric Conversions Table (C# Reference)
Single Structure
Decimal Structure
Difference between Decimal, Float and Double in .NET?