
I am trying to find the minimax polynomial approximation for sine and cosine using the Remez exchange algorithm in MATLAB. I need precision out to 23 bits because I am implementing the sine and cosine functions for IEEE-754 single-precision floating point.

This link (refer to pages 8 through 15) gives instructions for finding the polynomial using Mathematica and Maple; however, I am not sure how to adapt these methods to MATLAB.

According to Table 3, I need to use a 5th or 6th order polynomial to get ~23 bits (after the decimal point) of accuracy.

I am planning to first perform a range reduction of all incoming theta to between -pi/4 and +pi/4, then compute the sine or cosine as necessary (the end goal is to implement exp(i*x) = cos(x) + i*sin(x)).
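For reference, here is a rough sketch (untested, just to show the reduction scheme I have in mind) of the range reduction step. The calls to cos and sin on the reduced argument stand in for the eventual polynomial approximations, and for very large |x| a more careful reduction (e.g. Cody-Waite) would be needed:

```matlab
% Reduce x to r in [-pi/4, +pi/4] and an octant index k, so that
% x = k*(pi/2) + r; then cos(x) and sin(x) are +/-cos(r) or +/-sin(r).
k = round(x * (2/pi));          % nearest multiple of pi/2
r = x - k * (pi/2);             % reduced argument, |r| <= pi/4
switch mod(k, 4)
    case 0, c =  cos(r); s =  sin(r);
    case 1, c = -sin(r); s =  cos(r);
    case 2, c = -cos(r); s = -sin(r);
    case 3, c =  sin(r); s = -cos(r);
end
% exp(i*x) is then c + 1i*s; cos/sin here would be replaced by the
% polynomial approximations valid on [-pi/4, pi/4]
```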

I might be able to follow the instructions in this paper myself, but I don't know how to use the remez function for my purposes here. Also, I don't follow why the author used equation (6) (on page 9), nor do I understand how the equation for k (on page 11) was determined (where does 2796201 come from?), or why the form of the polynomial we want to end up with changes to sin(x) = x + k*x^3 + x^5*P(x^2).

Would it be better to use the firpm function instead (since remez is deprecated)?

Thank you, all help and guidance are greatly appreciated, as well as edits to ensure the best answer possible to my question.

Kyle Jones
Veridian
  • Note that the linked paper already gives a minimax approximation for sin on [0,pi/4] in the section "Optimizing for Floating Point", so you just need to find an approximation for cos on [0,pi/4] and you'll be all set. – njuffa Feb 16 '12 at 01:43
  • Well how do you propose I do that? – Veridian Feb 16 '12 at 01:56
  • You got at least one good pointer to a single-precision minimax approximation for cosine on [0,pi/4] in the answers to your previous question: http://stackoverflow.com/questions/9265865/cosine-in-floating-point – njuffa Feb 16 '12 at 20:55

1 Answer


I'd not bother trying to develop approximations of your own. It's simpler to pick up a copy of "Computer Approximations" by Hart et al.; a good university library should have it. 23 bits is about 7 decimal digits, so just pick an approximation that gives you the accuracy you need. You can choose either a simple polynomial approximation or a rational polynomial, which is usually a bit better as long as you can tolerate the divide.
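To make the polynomial-versus-rational tradeoff concrete, here is a generic sketch (the coefficients below are made-up placeholders, not taken from Hart): both forms evaluate cheaply with Horner's rule via polyval, but the rational form costs one extra divide.

```matlab
% Illustrative only: placeholder coefficients, highest power first.
p   = [-0.0001, 0.0083, -0.1667, 1, 0];  % plain polynomial in x
num = [0.087, -4.64, 52.8];              % rational form: numerator, in x^2
den = [1, 67.25];                        % rational form: denominator, in x^2

x = 0.5;
y_poly = polyval(p, x);                        % multiplies and adds only
x2 = x*x;
y_rat  = x*polyval(num, x2)/polyval(den, x2);  % same idea, plus one divide
```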

Range reduction does make sense; in fact, I chose the same range (+/-pi/4) in my own tools because this choice of range is particularly easy to work with.

Edit: (An example of the use of the approximations one can find in Hart.)

Here I'll find an approximation for sin(x), where x lies in the interval [0,pi/4]. My goal will be to choose an approximation with absolute accuracy of at least 1.e-7 over that interval. Of course, since sin(x) is an odd function, extending to negative x is trivial.

I note that the approximations in Hart tend to be of the form sin(alpha*pi*x), where x lies in the interval [0,1]. Choosing alpha = 1/2 would give an approximation valid over [0,pi/2]; for an approximation over the interval [0,pi/4] we look for alpha = 1/4.

Next, I look for an approximation that is indicated to have absolute accuracy of at least 7 digits or so, and I'll prefer to use rational polynomial approximations, since they tend to be a bit more efficient. Scanning down the tables on page 118 (my copy of Hart is from 1978) I find an approximation with alpha = 1/4 that fits the bill: index 3060.

This approximation will be of the form

sin(alpha*pi*x) = x*P(x^2)/Q(x^2)

So now I tab over to the page that gives the coefficients for SIN 3060. In my copy, it falls on pages 199-200. There are 5 coefficients, P00, P01, P02, Q00, Q01. (Watch out for the somewhat non-standard scientific notation used here.) Thus P (the numerator polynomial) has 3 terms in it, while Q, the denominator has 2 terms. Writing it out, I get this:

sin(alpha*pi*x) = (52.81860134812*x - 4.644800481954*x^3 + 0.0867545069521*x^5)/ ...
    (67.250731777791 + x^2)

Let's try it now in MATLAB.

x = linspace(0,pi/4,10001);
xt = x*4/pi; % transform to [0,1]
% Hart SIN 3060: sin(pi*x/4) = x*P(x^2)/Q(x^2), with P expanded out
sine = @(x) (52.81860134812*x -4.644800481954*x.^3 + ...
     0.0867545069521*x.^5)./(67.250731777791 + x.^2);

max(abs(sin(x) -sine(xt)))
ans =
   1.6424e-09

plot(x,sin(x)- sine(xt),'-')

[Plot: error of the sin approximation over [0,pi/4]]

Note the 1e-9 attached to the y-axis.

It looks like this is the most reasonable choice of approximation for sin(x) over that specific interval, although this gives about 29 bits of accuracy instead of the 23 bits you asked for. If you are willing to choose a different range reduction interval, there are a few choices that might save a term, possibly at a cost of a few bits that you don't need.

log2(max(abs(sin(x) -sine(xt))))
ans =
      -29.182
    For a numbers guy, Hart is the sweetest book you will ever pick up. It has many tips for range reduction and other tricks too. I still smile every time I pick up my copy, though it is a bit out of date in the sense that nobody seems to care about these things anymore. After all, every language usually has a complete set of basic functions, trig, exponential, etc. –  Feb 15 '12 at 23:35
  • I hope the edit will suffice. As I said, Hart is a neat book, if you are a gearhead like me. –  Feb 16 '12 at 00:54
  • It is simply a different approximation. Which one is better depends on the cost of a divide for you. Of course, Hart also offers simple polynomial approximations, so nothing stops you from use of one of them. The real point is to not recreate that which is easily looked up, unless it is something you want to do for your own education. –  Feb 16 '12 at 15:01
  • I am getting horrible error. I tried your method using a polynomial approximation (index 3040). Here is the code and result: x = linspace(0,pi/4,10001); xt = x*4/pi; % transform to [0,1] sine = @(x)(0.785398160854 - 0.080745432524 .* xt.^3 + 0.002490001007 .* xt.^5 - 0.000035950439 .* xt.^7); z = sin(x); max(abs(sine(xt)-sin(x))) ans = 0.785398160854 – Veridian Feb 16 '12 at 22:05
  • Oh, do you see the difference in what I just did? What a difference two characters can make. x = linspace(0,pi/4,10001); xt = x*4/pi; sine = @(x)(0.785398160854*x - 0.080745432524 .* xt.^3 + 0.002490001007 .* xt.^5 - 0.000035950439 .* xt.^7); z = sin(x); max(abs(sine(xt)-sin(x))) ans = 2.2885e-09 >> –  Feb 16 '12 at 23:12
  • Ahh, thank you very much. So, each input value has to be multiplied by 4/pi first? And also has to be in the range of -pi/4 to +pi/4? – Veridian Feb 17 '12 at 00:13
  • how does this approximation compare to using CORDIC? – Veridian Mar 03 '12 at 21:14