
I have been trying to create a custom calculator for trigonometric functions. Aside from Chebyshev polynomials and/or the CORDIC algorithm, I have used Taylor series, which have been accurate to a few decimal places.

This is what I have created to calculate simple trigonometric functions without any modules:

from __future__ import division

def sqrt(n):
  ans = n ** 0.5
  return ans

def factorial(n):
  k = 1
  for i in range(1, n+1):
    k = i * k

  return k 

def sin(d):
  pi = 3.14159265359
  x = d * pi / 180 # 180 degrees = pi radians, so convert degrees to radians
  ans = x - ( x ** 3 / factorial(3) ) + ( x ** 5 / factorial(5) ) - ( x ** 7 / factorial(7) ) + ( x ** 9 / factorial(9) )
  return ans 

def cos(d):
  pi = 3.14159265359
  x = d * pi / 180 # Converting degrees to radians
  ans = 1 - ( x ** 2 / factorial(2) ) + ( x ** 4 / factorial(4) ) - ( x ** 6 / factorial(6) ) + ( x ** 8 / factorial(8) )
  return ans 

def tan(d): 
  ans = sin(d) / cos(d) # sqrt(1 - sin(d)**2) would lose the sign of cos for obtuse angles
  return ans 

Unfortunately, I could not find any sources that would help me implement inverse trigonometric function formulas in Python. I have also tried raising sin(x) to the power of -1 (sin(x) ** -1), which didn't work as expected.

What would be the best way to do this in Python (by best, I mean the simplest, with accuracy similar to the Taylor series)? Is this possible with power series, or do I need to use the CORDIC algorithm?

ShellRox
  • What are your criteria for "best"? Do you need speed? Accuracy? Simplicity? Correct rounding? Something that works for a particular fixed precision (e.g., for Python `float`s), or something that works for arbitrary precision? As it stands, this question is too broad to give a good answer. E.g., for arctan, you might consider power series, argument reduction, computation via complex logarithm, Newton's method to compute the inverse to the existing tan, Chebyshev approximation, some combination of these things, .... BTW, what's the rationale for avoiding the `math` module here? – Mark Dickinson May 29 '17 at 19:22
  • @MarkDickinson Apologies for not specifying my intentions. "Best" in this case would be the simplest one with proper accuracy, for example an approximation similar to what the Taylor series gave for sine and cosine. I assume the simplest method would be power series again? If it is possible to approximate the inverse trigonometric functions that way. – ShellRox May 29 '17 at 19:27
  • You might find https://stackoverflow.com/questions/23047978/how-is-arctan-implemented/23049175#23049175 helpful. – Mark Dickinson May 29 '17 at 19:27
  • The -1 exponent on sin, etc. for performing arcsine et alia is purely symbolic. *Actually* putting something to the power of -1 takes its reciprocal. – Ignacio Vazquez-Abrams May 29 '17 at 19:30
  • @MarkDickinson Approximating the result of arctan(x) would also help me find arcsin and arccos, correct? Also, I am simply trying to learn exactly how trigonometric functions are calculated, with the help of Python. – ShellRox May 29 '17 at 19:31
  • https://stackoverflow.com/questions/7378187/approximating-inverse-trigonometric-functions – Dadep May 29 '17 at 19:31
  • @ShellRox: Yes: it's not hard to express `arcsin` and `arccos` in terms of `arctan`, so `arctan` is a good first target. A combination of argument reduction (`arctan x = 2 arctan(x / (1 + sqrt(1+x^2)))`) and the usual [Taylor series](https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions) about 0 is probably your best bet for something simple, but there are lots of subtleties to take care of. – Mark Dickinson May 29 '17 at 19:45
  • This is not a good implementation. The repeated calls to factorial are not a good idea. – duffymo May 29 '17 at 21:33
  • @duffymo Hello, could it cause any problems? Am I using too many loops? – ShellRox May 30 '17 at 04:00
  • @ShellRox: It's simpler (and faster) to build each term in the Taylor series from the previous term. E.g., for sin, you'd get the 3rd term from the 2nd via: `x**5/5! = x**3/3! * x**2 / (4*5)`. That way you don't even need the factorial function. – Mark Dickinson May 30 '17 at 07:34
  • Taylor series are only good for small values of x. Most implementations use some form of argument reduction to get the input into a fairly tight range before actually applying some form of series. It's worth having a look at some of the open-source maths libraries; [fdlibm](http://www.netlib.org/fdlibm/) is quite nicely documented and readable. – Salix alba May 30 '17 at 08:25
  • @MarkDickinson Thanks, I'm definitely going to use it later. – ShellRox May 30 '17 at 14:15
  • @Salixalba Thanks a lot, I have never worked in C, but the source code is detailed enough to be understandable. – ShellRox May 30 '17 at 14:17
  • @Spektre Thanks a lot; so, as I understand it, I need to use a monotonic function on x and then use a sequence for the binary search algorithm, correct? – ShellRox May 30 '17 at 14:21
  • @ShellRox comment moved into an answer, with an added `y=acos(x)` C++ implementation example using only binary search on floats and `x=cos(y)` – Spektre May 31 '17 at 07:22
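
As a side note, the incremental-term idea from one of the comments above (building each Taylor term from the previous one, so no factorial function is needed) can be sketched like this; `sin_taylor` is an illustrative name, not part of the question's code:

```python
def sin_taylor(x, terms=10):
    """Taylor series for sin(x), x in radians, building each term from
    the previous one so no factorial function is needed."""
    term = x            # first term of the series: x
    total = x
    for n in range(1, terms):
        # next term = previous term * (-x^2) / ((2n) * (2n + 1))
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total
```

The same recurrence trick applies to the cosine series, with the factor `(2n - 1) * (2n)` instead.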

2 Answers


The question is broad in scope, but here are some simple ideas (and code!) that might serve as a starting point for computing arctan. First, the good old Taylor series. For simplicity, we use a fixed number of terms; in practice, you might want to decide the number of terms to use dynamically based on the size of x, or introduce some kind of convergence criterion. With a fixed number of terms, we can evaluate efficiently using something akin to Horner's scheme.

def arctan_taylor(x, terms=9):
    """
    Compute arctan for small x via Taylor polynomials.

    Uses a fixed number of terms. The default of 9 should give good results for
    abs(x) < 0.1. Results will become poorer as abs(x) increases, becoming
    unusable as abs(x) approaches 1.0 (the radius of convergence of the
    series).
    """
    # Uses Horner's method for evaluation.
    t = 0.0
    for n in range(2*terms-1, 0, -2):
        t = 1.0/n - x*x*t
    return x * t

The above code gives good results for small x (say smaller than 0.1 in absolute value), but the accuracy drops off as x becomes larger, and for abs(x) > 1.0, the series never converges, no matter how many terms (or how much extra precision) we throw at it. So we need a better way to compute for larger x. One solution is to use argument reduction, via the identity arctan(x) = 2 * arctan(x / (1 + sqrt(1 + x^2))). This gives the following code, which builds on arctan_taylor to give reasonable results for a wide range of x (but beware possible overflow and underflow when computing x*x).

import math

def arctan_taylor_with_reduction(x, terms=9, threshold=0.1):
    """
    Compute arctan via argument reduction and Taylor series.

    Applies reduction steps until x is below `threshold`,
    then uses Taylor series.
    """
    reductions = 0
    while abs(x) > threshold:
        x = x / (1 + math.sqrt(1 + x*x))
        reductions += 1

    return arctan_taylor(x, terms=terms) * 2**reductions

Alternatively, given an existing implementation for tan, you could simply find a solution y to the equation tan(y) = x using traditional root-finding methods. Since arctan is already naturally bounded to lie in the interval (-pi/2, pi/2), bisection search works well:

def arctan_from_tan(x, tolerance=1e-15):
    """
    Compute arctan as the inverse of tan, via bisection search. This assumes
    that you already have a high quality tan function.
    """
    low, high = -0.5 * math.pi, 0.5 * math.pi
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if math.tan(mid) < x:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)

Finally, just for fun, here's a CORDIC-like implementation, which is really more appropriate for a low-level implementation than for Python. The idea here is that you precompute, once and for all, a table of arctan values for 1, 1/2, 1/4, etc., and then use those to compute general arctan values, essentially by computing successive approximations to the true angle. The remarkable part is that, after the precomputation step, the arctan computation involves only additions, subtractions, and multiplications by powers of 2. (Of course, those multiplications aren't any more efficient than any other multiplication at the level of Python, but closer to the hardware, this could potentially make a big difference.)

cordic_table_size = 60
cordic_table = [(2**-i, math.atan(2**-i))
                 for i in range(cordic_table_size)]

def arctan_cordic(y, x=1.0):
    """
    Compute arctan(y/x), assuming x positive, via CORDIC-like method.
    """
    r = 0.0
    for t, a in cordic_table:
        if y < 0:
            r, x, y = r - a, x - t*y, y + t*x
        else:
            r, x, y = r + a, x + t*y, y - t*x
    return r

Each of the above methods has its strengths and weaknesses, and all of the above code can be improved in a myriad of ways. I encourage you to experiment and explore.

To wrap it all up, here are the results of calling the above functions on a small number of not-very-carefully-chosen test values, comparing with the output of the standard library math.atan function:

test_values = [2.314, 0.0123, -0.56, 168.9]
for value in test_values:
    print("{:20.15g} {:20.15g} {:20.15g} {:20.15g}".format(
        math.atan(value),
        arctan_taylor_with_reduction(value),
        arctan_from_tan(value),
        arctan_cordic(value),
    ))

Output on my machine:

    1.16288340166519     1.16288340166519     1.16288340166519     1.16288340166519
     0.0122993797673      0.0122993797673   0.0122993797673002   0.0122993797672999
  -0.510488321916776   -0.510488321916776   -0.510488321916776   -0.510488321916776
    1.56487573286064     1.56487573286064     1.56487573286064     1.56487573286064
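
As noted in the comments, arcsin and arccos are easy to express in terms of arctan, via the identities arcsin(x) = arctan(x / sqrt(1 - x^2)) and arccos(x) = pi/2 - arcsin(x). A minimal sketch (the function names are illustrative, and any of the arctan implementations above can be passed in; the endpoints x = ±1 are special-cased because the denominator vanishes there):

```python
import math

def arcsin_from_arctan(x, arctan=math.atan):
    """Compute arcsin(x) for -1 <= x <= 1 using a given arctan function."""
    if x == 1.0:
        return 0.5 * math.pi
    if x == -1.0:
        return -0.5 * math.pi
    return arctan(x / math.sqrt(1.0 - x * x))

def arccos_from_arctan(x, arctan=math.atan):
    """Compute arccos(x) for -1 <= x <= 1 using a given arctan function."""
    return 0.5 * math.pi - arcsin_from_arctan(x, arctan)
```

For example, `arcsin_from_arctan(0.5, arctan_taylor_with_reduction)` would use the argument-reduction Taylor implementation from above.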
Mark Dickinson
  • Thanks a lot for the answer; it really helped me find out that there are so many ways to calculate arctan in Python! I couldn't find this anywhere on the web, though. I think CORDIC seems to be the most accurate way, but my intention was to properly understand the calculation of trigonometric and inverse trigonometric functions, so accuracy and speed don't matter to me in this case. Now I think I understand most of the general concept. – ShellRox May 31 '17 at 13:25

The simplest way to compute any inverse function is to use binary search.

  1. definitions

    Assume a function

    x = g(y)
    

    And we want to code its inverse:

    y = f(x) = f(g(y))
    x = <x0,x1>
    y = <y0,y1>
    
  2. bin search on floats

    You can do this with integer math by accessing the mantissa bits directly, but if you do not know the exponent of the result prior to the computation, then you need to use floats for the binary search too.

    So the idea behind the binary search is to change the mantissa of y from y1 to y0 bit by bit, from MSB to LSB. After each bit change, call the direct function g(y); if the result crosses x, revert the last bit change.

    When using floats, you can use a variable that holds the approximate value of the targeted mantissa bit instead of accessing bits of an integer. That eliminates the unknown-exponent problem. So at the beginning set y = y0 and the actual bit to the MSB value, so b = (y1-y0)/2. After each iteration halve it, and do as many iterations as you have mantissa bits, n... This way you obtain the result in n iterations, within (y1-y0)/2^n accuracy.

    If your inverse function is not monotonic, break it into monotonic intervals and handle each as a separate binary search.

    Whether the function is increasing or decreasing just determines the direction of the crossing condition (the use of < or >).

C++ acos example

acos

So y = acos(x) is defined on x = <-1,+1>, y = <0,M_PI>, and is decreasing, so:

double f64_acos(double x)
    {
    const int n=52;         // mantissa bits
    double y,y0,b;
    int i;
    // handle domain error
    if (x<-1.0) return 0;
    if (x>+1.0) return 0;
    // x = <-1,+1> , y = <0,M_PI> , decreasing
    for (y= 0.0,b=0.5*M_PI,i=0;i<n;i++,b*=0.5)  // y is min, b is half of max and halving each iteration
        {
        y0=y;                   // remember original y
        y+=b;                   // try set "bit"
        if (cos(y)<x) y=y0;     // if result cross x return to original y decreasing is <  and increasing is >
        }
    return y;
    }

I tested it like this:

double x0,x1,y;
for (x0=0.0;x0<M_PI;x0+=M_PI*0.01)  // cycle all angle range <0,M_PI>
    {
    y=cos(x0);              // direct function (from math.h)
    x1=f64_acos(y);         // my inverse function
    if (fabs(x1-x0)>1e-9)   // check result and output to log if error
     Form1->mm_log->Lines->Add(AnsiString().sprintf("acos(%8.3lf) = %8.3lf != %8.3lf",y,x0,x1));
    }

No differences were found... so the implementation works correctly. Of course, binary search on a 52-bit mantissa is usually slower than polynomial approximation; on the other hand, the implementation is so simple...
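
For anyone who prefers Python, here is a direct translation of the C++ bisection above, as a sketch; it keeps the same 52-iteration loop and the same decreasing-function comparison:

```python
import math

def f64_acos(x):
    """acos(x) via bisection on y in <0, pi>, mirroring the C++ version."""
    # handle domain error as the C++ version does
    if x < -1.0 or x > 1.0:
        return 0.0
    y = 0.0                    # y starts at the interval minimum
    b = 0.5 * math.pi          # half of the interval width, halved each step
    for _ in range(52):        # 52 mantissa bits
        y0 = y                 # remember original y
        y += b                 # try setting the "bit"
        if math.cos(y) < x:    # cos is decreasing, so crossing uses <
            y = y0             # result crossed x: revert
        b *= 0.5
    return y
```

As in the C++ test loop, comparing `f64_acos(math.cos(x0))` against `x0` over `<0, pi>` stays well within 1e-9.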

[Notes]

As you are dealing with goniometric functions, you need to handle singularities to avoid NaN, division by zero, etc.

Spektre
  • Thanks a lot for the answer; the concept is perfectly detailed, and binary search really seems to be the simplest way to calculate inverse trigonometric functions. The only small problem is my lack of understanding of the C language. An example in Python, if possible, would be much appreciated. – ShellRox May 31 '17 at 13:22
  • @ShellRox I do not code in Python at all, so I can't help there. The `if` condition should be straightforward to understand, and `for (bla0; bla1; bla2) { bla3 }` will do `bla0` before the start of the loop, repeat the loop while `bla1` is true, in each iteration do `bla3` and after it do `bla2` and test `bla1` again... The constant is `M_PI=3.1415...`, and `cos()` is the cosine function in radians. – Spektre May 31 '17 at 14:09