I have a console app with a function that divides successive integers of a Fibonacci series, demonstrating how the ratio of consecutive terms in any Fibonacci series approaches Φ. I have similar code written in Go and in C++11. In Go (or on a scientific calculator), the function returns float64 values, and the results show a precision of up to 16 digits in an Ubuntu terminal session, for example:

1.6180339937902115

In C++11 I can never see more than 5 digits of precision in the results using cout. The results are declared as long double in a function like this:
typedef unsigned long long int ULInt;
typedef std::vector<ULInt> ULIntV;

std::vector<long double> CalcSequenceRatio( const ULIntV& fib )
{
    std::vector<long double> result;
    for ( std::size_t i = 0; i != fib.size(); i++ )
    {
        if ( i == fib.size() - 1 )
        {
            // push_back, not result[i]: indexing result[i] here writes past
            // the vector's current size, which is undefined behavior
            result.push_back( 0 );
            break;
        }
        long double n = fib[i + 1];
        long double n2 = fib[i];
        long double q = n / n2;
        result.push_back( q );
    }
    return result;
}
Although the vector fib passed into CalcSequenceRatio( const ULIntV& fib ) contains over 100 entries, after 16 entries all values in the result set are displayed as

1.61803

The rest of the value is being rounded, although in Go (or on a calculator) I can see that the actual values extend to at least 16 digits of precision.
How can I make CalcSequenceRatio() return more precise values? Is there a problem because going from long long int to long double is a narrowing conversion? Do I need to pass the fib series as vector<long double>? What's wrong?
Edit:

This question has been marked as a duplicate, but that is not really correct, because the question does not deal directly with cout: there were other factors that might have made a difference, although the analysis proves that cout is the problem. I posted the correct answer:

The problem is with cout, and here is the solution... as explained in the other question...