
I noticed that `printf("%#g\n", 0.0)` gives different output with any gcc/clang version than with Visual Studio 2019 on Windows 7 (latest as of today).

gcc/clang give `0.00000` (6 digits in total, 5 after the decimal point) while VS gives `0.000000` (7 in total, 6 after). Similarly, `"%#.8g"` gives 8 digits in total with gcc/clang vs 9 digits in total with VS.
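
A minimal program reproducing the difference (the outputs in the comments are the observations described above):

```c
#include <stdio.h>

int main(void)
{
    printf("%#g\n", 0.0);    /* gcc/clang: 0.00000    VS 2019: 0.000000   */
    printf("%#.8g\n", 0.0);  /* gcc/clang: 0.0000000  VS 2019: 0.00000000 */
    return 0;
}
```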

Questions:

  • What does the standard say about this? Is one of the compilers/standard libraries buggy?
  • I only see this behaviour from VS locally (Windows 7), but not on Azure Pipelines (recent Windows Server). Which specific compiler versions / standard library versions / OSs are affected?
  • Is there a way to get a consistent output across compilers?
Szabolcs
  • Does this answer your question? [huge printf float/double difference in integer digits on windows/linux](https://stackoverflow.com/questions/30935634/huge-printf-float-double-difference-in-integer-digits-on-windows-linux) – Mark Benningfield Jan 26 '21 at 11:53
  • @MarkBenningfield: No, that is not a duplicate; that one addresses the values printed, not the number of digits printed. – Eric Postpischil Jan 26 '21 at 12:24
  • @EricPostpischil: Agreed, but it does address that the implementations are different, and one of the answers provides several workarounds for consistency on both platforms. – Mark Benningfield Jan 26 '21 at 12:26
  • Perhaps an alternative as simply as `if(x) printf("%#g\n", x); else printf("%#.6g\n", x);` or the like. – chux - Reinstate Monica Jan 26 '21 at 14:43

2 Answers


It is a bug

The C implementation you are using in Visual Studio is defective. Citations below are from C 2018. The relevant text is effectively the same in the 2011 and 1999 standards (in 1999, the inequalities are described with text instead of with mathematical notations using “>” and “≥”).

First, in this case, the # means a decimal-point character will be produced and trailing zeros will not be removed. It has no effect on the number of digits that should be produced before trailing zeros would be removed. 7.21.6.1 6 says “… For a, A, e, E, f, F, g, and G conversions, the result of converting a floating-point number always contains a decimal-point character, even if no digits follow it… For g and G conversions, trailing zeros are not removed from the result…” This nullifies the part of the g specification that says “… unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining.”
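
To see the effect of the # flag in isolation on a conforming implementation (example value mine):

```c
#include <stdio.h>

int main(void)
{
    /* Without #, %g removes trailing zeros; with #, they are kept. */
    printf("%g\n", 0.5);   /* prints 0.5      */
    printf("%#g\n", 0.5);  /* prints 0.500000 */
    return 0;
}
```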

Second, for the value zero, the rules for the g format say that the f format is used. This is because the rules for g (or G) depend on the exponent that the e format would use and the precision requested:

  • For e, 7.21.6.1 8 says “… If the value is zero, the exponent is zero.”
  • For g, it says “… Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero…” So P is 6 or 8 for the `%#g` or `%#.8g` given in the question.
  • The text continues “… if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).”

So the conversion for `%#g` or `%#.8g` is done with style f using precision 6−(0+1) = 5 or 8−(0+1) = 7, respectively.

Third, for f, 7.21.6.1 8 says “A double argument representing a floating-point number is converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification…” Thus, 5 or 7 digits should be printed after the decimal point, respectively.

So, for `%#g`, “0.00000” conforms to the C standard and “0.000000” does not. And for `%#.8g`, eight digits total (seven after the decimal point) conforms, and nine digits (eight after) does not.
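
Putting these rules together, on a conforming implementation the following pairs should print identical text (a sketch; the equivalences follow from the arithmetic above):

```c
#include <stdio.h>

int main(void)
{
    /* %#g of zero: P = 6, X = 0, so style f with precision 6-(0+1) = 5. */
    printf("%#g\n", 0.0);    /* 0.00000 */
    printf("%.5f\n", 0.0);   /* 0.00000 */

    /* %#.8g of zero: P = 8, X = 0, so style f with precision 8-(0+1) = 7. */
    printf("%#.8g\n", 0.0);  /* 0.0000000 */
    printf("%.7f\n", 0.0);   /* 0.0000000 */
    return 0;
}
```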

Since you tagged this with visual-c++, I will note that the C++ standard adopts the C specification for `printf`. C++ 2017 draft N4659 20.2 says “The C++ standard library also makes available the facilities of the C standard library, suitably adjusted to ensure static type safety.”

Compensating for the bug

The bug is probably in the C/C++ library, not the compiler, so adjusting the source code by using `#if` to test the value of Microsoft’s macro `_MSC_VER`, for example, is likely not a good solution. (In particular, after compilation, the program might be run with a later version of the library.)

One might test the library during program start-up. After defining `int PrecisionAdjustment;` with external scope, this code could be used to initialize it:

{
    /*  The following tests for a Microsoft bug in which a "%#g" conversion
        produces one more digit than the C standard specifies.  According to
        the standard, formatting zero with "%#.1g" should produce "0.", but
        Microsoft software has been observed to produce "0.0".  If the bug
        appears to be present, PrecisionAdjustment is set to -1.  Otherwise,
        it is 0.  This can then be used to select which format string to
        use or to adjust a dynamic precision given with "*" such as:

            printf("%#.*g", 6+PrecisionAdjustment, value);
    */
    char buffer[4];
    snprintf(buffer, sizeof buffer, "%#.1g", 0.);
    PrecisionAdjustment = buffer[2] == '0' ? -1 : 0;
}

This assumes the same bug we see with precision 6 and 8 exists at precision 1. If not, appropriate adjustments could be made easily.
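
For instance, a small wrapper could centralize the adjustment (a sketch; the name `print_g6` is mine):

```c
#include <stdio.h>

extern int PrecisionAdjustment;  /* initialized at start-up as shown above */

/* Print a double as a conforming "%#g" would, compensating for the
   extra digit if the library is defective. */
static void print_g6(double value)
{
    printf("%#.*g\n", 6 + PrecisionAdjustment, value);
}
```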

Eric Postpischil
  • Thanks for the answer! I am wondering if you can reproduce the issue (if you have access to MSVC). I only have a Windows 7 machine locally and I'm confused by *why* the problem doesn't appear on Azure CI. – Szabolcs Jan 26 '21 at 17:39

You asked if there was a way to get a consistent output across compilers. I doubt that there is a direct way, and I know how frustrating this can be. If the extra 0 digit is causing you problems and must be fixed, you are going to have to adopt some kind of workaround. In this case I can imagine three rather different approaches.

  1. If Visual Studio is only behaving wrongly when it tries to print a default number of digits, try making the number of digits explicit. That is, instead of "%#g", you might try using "%#.6g". If this gives you the same result on both platforms, you're done.

  2. If Visual Studio is always printing one more digit than it should, you could try using a different format under Visual Studio than under other platforms, using conditional compilation (`#ifdef`) to select which format to use. If Visual Studio is only having problems when the value being printed is 0.0, you might have to have a run-time test for that as well.

  3. Instead of `printf`, you could use `snprintf` to a temporary buffer, then use string functions (perhaps `strstr`) to see if the buffer wrongly ends in `".000000"`, and if so, truncate it by one character. (A sketch of this follows the list.)
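
Here is one possible sketch of method #3 (the helper name `print_g` is mine; it assumes the only symptom is the single extra zero on a zero value, as described above):

```c
#include <stdio.h>
#include <string.h>

static void print_g(double value)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%#g", value);

    /* A conforming library prints 0.0 as "0.00000"; the buggy one prints
       "0.000000" (one zero too many), so drop the final character. */
    size_t len = strlen(buf);
    if (len >= 7 && strcmp(buf + len - 7, ".000000") == 0)
        buf[len - 1] = '\0';

    puts(buf);
}
```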

If method #1 works, that's great. Otherwise, I would strongly encourage you to use method #3, even though it is (I admit) a big, ugly nuisance. But please stay away from approach #2 if you possibly can. There are two huge problems with it: (1) Conditional compilation ends up being an even bigger and uglier nuisance; many style guides recommend (quite rightly) avoiding it. (2) The even bigger problem with approach #2 is that if Microsoft ever fixes their version of `printf` to behave properly, your code may break at that point, perhaps without your noticing it! Approach #2 is the opposite of "future-proof".

Or, in other words, "Don't enshrine the bugs you have to work around".

Steve Summit