While a `float` only has ~7 significant decimal digits, that's not the problem you are running up against here; 0.00000217 has only three significant digits, after all.
You are using the `%f` format specifier, which is inherited from C and defined thus (7.21.6 Formatted input/output functions):
> A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.
Using `double` won't change this; instead, you need to change your format specifier. You can use `%e` or `%g` if you don't mind scientific notation, but another alternative is to use a precision specifier: `%.10f`, for example, will print ten digits after the decimal point.