**It is a bug**
The C implementation you are using in Visual Studio is defective. Citations below are from C 2018. The relevant text is effectively the same in the 2011 and 1999 standards (in 1999, the inequalities are described with text instead of with mathematical notations using “>” and “≥”).
First, in this case, the `#` means a decimal-point character will be produced and trailing zeros will not be removed. It has no effect on the number of digits that should be produced before trailing zeros would be removed. 7.21.6.1 6 says “… For `a`, `A`, `e`, `E`, `f`, `F`, `g`, and `G` conversions, the result of converting a floating-point number always contains a decimal-point character, even if no digits follow it… For `g` and `G` conversions, trailing zeros are not removed from the result…” This nullifies the part of the `g` specification that says “… unless the `#` flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining.”
Second, for the value zero, the rules for the `g` format say that the `f` format is used. This is because the rules for `g` (or `G`) depend on the exponent that the `e` format would use and the precision requested:

- For `e`, 7.21.6.1 8 says “… If the value is zero, the exponent is zero.”
- For `g`, it says “… Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero…” So P is 6 or 8 in the `%#g` or `%#.8g` given in the question.
- The text continues “… if P > X ≥ −4, the conversion is with style `f` (or `F`) and precision P − (X + 1).”
So the conversion for `%#g` or `%#.8g` is done with style `f` using precision 6 − (0 + 1) = 5 or 8 − (0 + 1) = 7, respectively.
Third, for `f`, 7.21.6.1 8 says “A `double` argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification…” Thus, 5 or 7 digits should be printed after the decimal point, respectively.
So, for `%#g`, “0.00000” conforms to the C standard and “0.000000” does not. And for `%#.8g`, eight digits total (seven after the decimal point) conforms, and nine digits (eight after) does not.
Since you tagged this with visual-c++, I will note that the C++ standard adopts the C specification for `printf`. C++ 2017 draft N4659 20.2 says “The C++ standard library also makes available the facilities of the C standard library, suitably adjusted to ensure static type safety.”
**Compensating for the bug**
The bug is probably in the C/C++ library, not the compiler, so adjusting the source code by using `#if` to test the value of Microsoft’s macro `_MSC_VER`, for example, is likely not a good solution. (In particular, after compilation, a program might be run with a later library than the one present when it was compiled.)
One might test the library during program start-up. After defining `int PrecisionAdjustment;` with external scope, this code could be used to initialize it:
```c
{
    /* The following tests for a Microsoft bug in which a "%#g" conversion
       produces one more digit than the C standard specifies. According to
       the standard, formatting zero with "%#.1g" should produce "0.", but
       Microsoft software has been observed to produce "0.0". If the bug
       appears to be present, PrecisionAdjustment is set to -1. Otherwise,
       it is 0. This can then be used to select which format string to
       use or to adjust a dynamic precision given with "*" such as:

           printf("%#.*g", 6+PrecisionAdjustment, value);
    */
    char buffer[4];
    snprintf(buffer, sizeof buffer, "%#.1g", 0.);
    PrecisionAdjustment = buffer[2] == '0' ? -1 : 0;
}
```
This assumes the same bug we see with precision 6 and 8 exists at precision 1. If not, appropriate adjustments could be made easily.