
This is not a question about template hacks or dealing with compiler quirks. I understand why the Boost libraries are the way they are. This is about the actual algorithm used for the sinc_pi function in the Boost math library.


The function sinc(x) equals sin(x)/x, with sinc(0) = 1 by continuity.

In the documentation for the Boost math library's sinc_pi(), it says "Taylor series are used at the origin to ensure accuracy". This seems nonsensical, since dividing two floating-point numbers causes no more loss of precision than multiplying them would. Unless there's a bug in a particular implementation of sin, the naive approach of

double sinc(double x) { if (x == 0) return 1; else return sin(x) / x; }

seems like it would be fine.

I've tested this, and the maximum relative difference between the naive version and the one in the Boost math toolkit is only about half the epsilon for the type used, for both float and double, which puts it at the same scale as a single rounding error. Furthermore, this maximum difference does not occur near 0, but near the end of the interval where the Boost version uses a partial Taylor series (i.e. abs(x) < epsilon**(1/4)). This makes it look like it is actually the Taylor series approximation which is (very slightly) wrong, either through loss of accuracy near the end of the interval or through the accumulated rounding of its multiple operations.
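For reference, here is a sketch of the kind of near-origin branch at issue, using the epsilon**(1/4) threshold mentioned above. The function name sinc_taylor_sketch and the choice of exactly two Taylor terms are my own illustration, not Boost's actual implementation; the point is that below the threshold the first neglected term, x**4/120, is already smaller than epsilon.

#include <cmath>
#include <limits>

// Illustrative near-origin Taylor branch (not Boost's actual code).
// Below |x| < epsilon^(1/4), truncating sinc(x) = 1 - x^2/6 + x^4/120 - ...
// after the x^2 term leaves an error of at most epsilon/120, i.e. well
// under one ulp, so the polynomial matches sin(x)/x to within rounding.
template <class T>
T sinc_taylor_sketch(T x)
{
    const T eps = std::numeric_limits<T>::epsilon();
    const T taylor_bound = std::sqrt(std::sqrt(eps)); // epsilon^(1/4)

    if (std::abs(x) >= taylor_bound)
        return std::sin(x) / x;
    return 1 - x * x / 6; // two terms suffice below the bound
}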

Here are the results of the program I wrote to test this, which iterates through every float between 0 and 1 and calculates the relative difference between the Boost result and the naive one:

Test for type float:

Max deviation from Boost result is 5.96081e-08 relative difference
equals 0.500029 * epsilon
at x = 0.0185723
which is epsilon ** 0.25003

And here is the code for the program. It can be used to perform the same test for any floating-point type, and takes about a minute to run.

#include <cmath>
#include <iostream>
#include <limits>

#include "boost/math/special_functions/sinc.hpp"

template <class T>
T sinc_naive(T x)
{
    using namespace std;
    if (x == 0) return 1;
    return sin(x) / x;
}



template <class T>
void run_sinc_test()
{
    using namespace std;

    T eps = std::numeric_limits<T>::epsilon();

    T max_rel_err = 0;
    T x_at_max_rel_err = 0;

    for (T x = 0; x < 1; x = nextafter(x, T(1))) // step through every representable value of T
    {
        T boost_result = boost::math::sinc_pi(x);
        T naive_result = sinc_naive(x);

        if (boost_result != naive_result)
        {
            T rel_err = abs(boost_result - naive_result) / boost_result;

            if (rel_err > max_rel_err)
            {
                max_rel_err = rel_err;
                x_at_max_rel_err = x;
            }
        }
    }

    cout << "Max deviation from Boost result is " << max_rel_err << " relative difference" << endl;
    cout << "equals " << max_rel_err / eps << " * epsilon" << endl;
    cout << "at x = " << x_at_max_rel_err << endl;
    cout << "which is epsilon ** " << log(x_at_max_rel_err) / log(eps) << endl;
    cout << endl;
}



int main()
{
    using namespace std;

    cout << "Test for type float:" << endl << endl;
    run_sinc_test<float>();
    cout << endl;
    cin.ignore();
}
  • What about the speed of the two functions? Floating point multiplication is significantly faster than floating point division, at least on modern x86 processors. – Jack Meagher Nov 10 '17 at 05:05
  • How many different platforms are supported by boost? Do they all support `sin` in hardware? Or if x is a complex number, quaternions, or octonions (as mentioned in the documentation)? – 1201ProgramAlarm Nov 10 '17 at 05:20
  • Thank you. I always wondered what I was missing and why the approximation would be necessary. My thinking: the difference between numerator and denominator shrinks as x^2, and thus they become numerically equal far from the singularity (at ~sqrt(eps)). Since 1/(1+eps) = 1-eps + O(eps^2), the ratio becomes the same as quickly as the values themselves, and so at best very little precision can be recovered over the limited range where the significant digits begin to drop out of the lowest bits. PS: it occurred to me that I should think about denormals. First impression is that they have no impact. – tobi_s Jun 10 '19 at 06:54
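The crossover described in the last comment is easy to observe directly: below roughly sqrt(epsilon), sin(x) rounds to the exact same float as x, so the naive quotient is exactly 1. A minimal probe (illustrative only; the output depends on the platform's sin):

#include <cmath>
#include <iostream>
#include <limits>

// Probe a few points around sqrt(epsilon) to see where sin(x) becomes
// bit-identical to x for float, as the comment above predicts.
int main()
{
    const float s = std::sqrt(std::numeric_limits<float>::epsilon());
    for (float x : { 10 * s, 2 * s, s, s / 2, s / 10 })
        std::cout << "x = " << x << "  sin(x) == x: "
                  << (std::sin(x) == x ? "yes" : "no") << '\n';
}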

1 Answer


After some sleuthing, I dug up a discussion from the original authors.

[sin(x)] is well behaved at x=0, and so is sinc(x). […] my solution will have better performance or small argument, i.e. |x| < pow(x, 1/6), since most processor need much more time to evaluate sin(x) than 1 - (1/6) * x * x.

From https://lists.boost.org/Archives/boost/2001/05/12421.php.

The earliest reference I found to using the Taylor expansion to ensure accuracy is from much later, and was committed by a different person. So it seems this is about performance, not accuracy. If you want to be certain, you might want to get in touch with the people involved.
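The performance claim is easy to sanity-check with a rough micro-benchmark comparing sin(x)/x against the two-term polynomial from the quote. This is only a sketch (no warm-up, no statistics; numbers will vary by compiler and CPU):

#include <chrono>
#include <cmath>
#include <iostream>

// Rough timing of sin(x)/x versus 1 - x*x/6 over small arguments.
// Illustrative only; the volatile sink keeps the loops from being optimized away.
int main()
{
    const int n = 10000000;
    volatile double sink = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
    {
        double x = 1e-4 + i * 1e-12;
        sink = std::sin(x) / x;
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
    {
        double x = 1e-4 + i * 1e-12;
        sink = 1 - x * x / 6;
    }
    auto t2 = std::chrono::steady_clock::now();

    (void)sink;
    std::cout << "sin(x)/x : " << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    std::cout << "1 - x*x/6: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}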


Regarding sinc_pi specifically, I found the following exchange. Note that they use sinc_a to refer to the family of functions of the form sin(x*a)/(x*a).

What is the advatage of sinc_a(x) ? To address rounding problems for very large x ? Then it would be more important to improve sin(x) for very large arguments.

The main interest of this particular member of the family is that it requires fewer computations, and that, in itself it is a special function as it is far more common than its brethren.

From https://lists.boost.org/Archives/boost/2001/05/12485.php.

– qsantos