Consider the following snippet (1) (which is testable here):
#include <fmt/core.h>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// Let's see how many digits we can print
void test(auto value, char const* fmt_str, auto std_manip, int precision)
{
    std::ostringstream oss;
    oss << std_manip << std::setprecision(precision) << value;
    auto const std_out{ oss.str() };
    auto const fmt_out{ fmt::format(fmt_str, value, precision) };
    std::cout << std_out.size() << '\n' << std_out << '\n'
              << fmt_out.size() << '\n' << fmt_out << '\n';
}

int main()
{
    auto const precision{ 1074 };
    auto const denorm_min{ -0x0.0000000000001p-1022 };

    // This is fine
    test(denorm_min, "{:.{}g}", std::defaultfloat, precision);

    // Here {fmt} stops at 770 chars
    test(denorm_min, "{:.{}f}", std::fixed, precision);
}
According to the {fmt} library's documentation:

The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating-point value formatted with 'f' and 'F', or before and after the decimal point for a floating-point value formatted with 'g' or 'G'.
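As a quick illustration of that distinction (a minimal example of my own, not part of the snippet above):

#include <fmt/core.h>
#include <iostream>

int main()
{
    // With 'f', the precision counts digits after the decimal point.
    std::cout << fmt::format("{:.3f}", 0.0001234) << '\n';  // "0.000"
    // With 'g', the precision counts significant digits.
    std::cout << fmt::format("{:.3g}", 0.0001234) << '\n';  // "0.000123"
}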
Is there a limit to this value?
In the corner case I've posted, std::setprecision seems to be able to output all of the requested digits, while {fmt} seems to stop at 770 characters (a "reasonably" big enough value in most cases, to be fair). Is there a parameter we can set to modify this limit?
(1) If you are wondering where those particular values come from, I was playing with this Q&A:
What is the maximum length in chars needed to represent any double value?
CodePudding user response:
You're not far off: there is a hardcoded limit of 767 digits in the format-inl.h file (see here); the 770 characters you observed are those 767 fractional digits plus the leading "-0.":
// Limit precision to the maximum possible number of significant digits in
// an IEEE754 double because we don't need to generate zeros.
const int max_double_digits = 767;
if (precision > max_double_digits) precision = max_double_digits;
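Since the cap is hardcoded, there does not appear to be a parameter to raise it at run time. If you really need all 1074 fractional digits, one option is to fall back to iostream formatting, which, as your own test shows, honors the full precision. A minimal sketch (format_full_fixed is a hypothetical helper name, and the 767-digit threshold is taken from the source above):

#include <fmt/core.h>
#include <iomanip>
#include <sstream>
#include <string>

// Hypothetical helper: use {fmt} for typical precisions and fall back to
// iostreams when the requested precision exceeds {fmt}'s 767-digit cap.
std::string format_full_fixed(double value, int precision)
{
    if (precision <= 767)
        return fmt::format("{:.{}f}", value, precision);
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(precision) << value;
    return oss.str();
}

int main()
{
    auto const denorm_min{ -0x0.0000000000001p-1022 };
    // Prints all 1074 requested fractional digits.
    fmt::print("{}\n", format_full_fixed(denorm_min, 1074));
}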