I am of the opinion that the problem is ill-posed. Specifically, this part:
first find how many [decimal] place values there are
The number of decimal digits is a matter of representation, not an intrinsic property of the number. When you write `double a = 0.3;`, what gets stored in variable `a` is the double-precision value closest to the exact decimal 0.3. That value is close to 0.3 but not identical to it, because IEEE-754 is binary-based and 0.3 is a non-terminating binary fraction. Once assigned, variable `a` has no memory of where it came from, or of whether the source code said `double a = 0.3;` or `double a = 0.29999999999999999;`.
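To see the stored value directly, one option (a sketch) is to print it with the `"G17"` format specifier, which emits enough significant digits to uniquely identify the underlying double:

```csharp
using System;

class Program
{
    static void Main()
    {
        double a = 0.3;
        // "G17" guarantees enough digits to round-trip the stored double,
        // exposing that the value held in 'a' is not exactly 0.3.
        Console.WriteLine(a.ToString("G17")); // 0.29999999999999999
    }
}
```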
Consider the following snippet:
double a = 0.3;
double b = 0.2999999999999999;
double c = 0.29999999999999999;
Console.WriteLine("a = {0}, b = {1}, c = {2}, a==b = {3}, b==c = {4}, a==c = {5}", a, b, c, a==b, b==c, a==c);
The output is:
a = 0.3, b = 0.3, c = 0.3, a==b = False, b==c = False, a==c = True
What this shows is that variables `a` and `c` compare equal, i.e. they hold the exact same value, yet one was defined with 1 decimal digit and the other with 17 decimal digits. The point is that it makes no sense to speak of the number of decimal places associated with a floating-point value because, as this example shows, the same value can have decimal representations with different numbers of decimal places.
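One way to confirm that `a` and `c` hold bit-for-bit identical values (a sketch) is to compare their raw IEEE-754 bit patterns via `BitConverter.DoubleToInt64Bits`:

```csharp
using System;

class Program
{
    static void Main()
    {
        double a = 0.3;
        double c = 0.29999999999999999;
        // DoubleToInt64Bits reinterprets each double's 64-bit pattern;
        // equal patterns mean the two literals produced the same value.
        long bitsA = BitConverter.DoubleToInt64Bits(a);
        long bitsC = BitConverter.DoubleToInt64Bits(c);
        Console.WriteLine("{0:X16}", bitsA);
        Console.WriteLine("{0:X16}", bitsC);
        Console.WriteLine(bitsA == bitsC); // True
    }
}
```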
As a side comment, the above also shows that `b` and `c` are different values, even though they differ only in the 17th decimal position. This is consistent with the `double` type having between 15 and 17 significant decimal digits, which is why the 16th and 17th digits cannot be ignored in general.
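A small sketch of that 15-vs-17 digit behavior: at 15 significant digits the two values print identically, while 17 digits (round-trip precision) is enough to tell them apart:

```csharp
using System;

class Program
{
    static void Main()
    {
        double b = 0.2999999999999999;  // 16 decimal digits in the literal
        double c = 0.29999999999999999; // 17 decimal digits in the literal
        // At 15 significant digits both round to the same text...
        Console.WriteLine(b.ToString("G15")); // 0.3
        Console.WriteLine(c.ToString("G15")); // 0.3
        // ...but 17 significant digits distinguishes the two doubles.
        Console.WriteLine(b.ToString("G17"));
        Console.WriteLine(c.ToString("G17"));
    }
}
```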