
Consider the following snippet (1), which is testable here:

#include <fmt/core.h>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// Let's see how many digits we can print
void test(auto value, char const* fmt_str, auto std_manip, int precision)
{
    std::ostringstream oss;
    oss << std_manip << std::setprecision(precision) << value;
    auto const std_out { oss.str() };
    auto const fmt_out { fmt::format(fmt_str, value, precision) };    
    std::cout << std_out.size() << '\n' << std_out << '\n'
              << fmt_out.size() << '\n' << fmt_out << '\n';
}

int main()
{
    auto const precision{ 1074 };
    auto const denorm_min{ -0x0.0000000000001p-1022 };

    // This is fine
    test(denorm_min, "{:.{}g}", std::defaultfloat, precision);
    
    // Here {fmt} stops at 770 chars
    test(denorm_min, "{:.{}f}", std::fixed, precision);  
}

According to the {fmt} library's documentation:

The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating-point value formatted with 'f' and 'F', or before and after the decimal point for a floating-point value formatted with 'g' or 'G'.

Is there a limit to this value?

In the corner case I've posted, std::setprecision outputs all of the requested digits, while {fmt} stops at 770 characters (a reasonably large value for most practical purposes, to be fair). Is there a parameter we can set to modify this limit?

EDIT

I reported the issue to the library maintainers and it has now been fixed.


(1) If you are wondering where those particular values come from, I was playing with this Q&A:
What is the maximum length in chars needed to represent any double value?

Bob__

2 Answers


Precision can be any value less than the maximum int value (INT_MAX).

What you observed was a now fixed bug in handling very large precision in fixed floating-point format: https://github.com/fmtlib/fmt/issues/2616 (thanks for reporting it).

767 is not a precision limit but the maximum number of significant digits an IEEE 754 double can have (the rest will be zeros): https://www.exploringbinary.com/maximum-number-of-decimal-digits-in-binary-floating-point-numbers/.

vitaut
  • That was fast, thank you very much. I eventually figured out the meaning of that number, but again, thanks for the reference. If I may, am I correct in assuming that there isn't a format specifier that would automatically output *all* the significant digits (including the leading zeroes) of the decimal representation, but *without* the trailing zeroes (which are output when using a big enough precision value)? – Bob__ Nov 27 '21 at 17:40
  • Not for fixed precision, but the default format will give you enough significant digits for a round trip (the rest is mostly meaningless). – vitaut Nov 27 '21 at 18:14

You're not far off: there is a hardcoded limit of 767 in the format-inl.h file (see here):

// Limit precision to the maximum possible number of significant digits in
// an IEEE754 double because we don't need to generate zeros.
const int max_double_digits = 767;
if (precision > max_double_digits) precision = max_double_digits;
Dominic Price
  • I see, thanks. The extra 3 chars are the leading `"-0."` in my example. Not sure where they get that "magic" number of maximum significant digits from, though. – Bob__ Nov 26 '21 at 10:51
  • If you believe that the value should be higher (or modifiable), then you can file a bug on the issue tracker on their GitHub or submit a pull request; the maintainers should be able to tell you where the number comes from – Dominic Price Nov 26 '21 at 10:54