I'm working on a method that translates a string into an appropriate `Number` type, depending upon the format of the number. If the number appears to be a floating point value, then I need to return the smallest type I can use without sacrificing precision (`Float`, `Double` or `BigDecimal`).
Based on How many significant digits have floats and doubles in java? (and other resources), I've learned that `Float` values have 23 bits for the mantissa. Based on this, I used the following method to return the bit length for a given value:
```java
private static int getBitLengthOfSignificand(String integerPart,
                                             String fractionalPart) {
    return new BigInteger(integerPart + fractionalPart).bitLength();
}
```
If the result of this test is below 24, I return a `Float`. If it is below 53, I return a `Double`; otherwise I return a `BigDecimal`.
However, I'm confused by the result when I consider `Float.MAX_VALUE`, which is `3.4028235E38`. The bit length of the significand is 26 according to my method (where `integerPart = "3"` and `fractionalPart = "4028235"`). This triggers my method to return a `Double`, when clearly a `Float` would suffice.
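To make that concrete, here is a standalone snippet reproducing the number I'm seeing (the class name is only for illustration):

```java
import java.math.BigInteger;

public class BitLengthCheck {
    public static void main(String[] args) {
        System.out.println(Float.MAX_VALUE);                              // 3.4028235E38
        // Concatenating the digits exactly as getBitLengthOfSignificand does:
        System.out.println(new BigInteger("3" + "4028235").bitLength());  // prints 26
    }
}
```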
Can someone highlight the flaw in my thinking or implementation? Another idea I had was to convert the string to a `BigDecimal` and scale down using `floatValue()` and `doubleValue()`, testing for overflow (which is represented by infinite values). But that loses precision, so it isn't appropriate for me.