
Can anybody explain or show how the function `sin` (or `sinf`, `sinl`) is implemented in C? Intuition suggests that it should be somewhere in `math.h`, but I did not see anything there.

Oleg-K
  • What do you mean by “realized”? And the contents of math.h differ on different compilation platforms, so there is no generic answer. It typically includes OS- or hardware-dependent headers, and good luck following the chain of #includes down to what you are looking for. – Pascal Cuoq May 11 '13 at 20:08
  • In `math.h` you can see only the declarations (i.e. signatures) of the functions, not their definitions (i.e. function bodies). Check out the `glibc` sources, where you can find the implementation. And yes, `sin` uses a fast algorithm (not the naive one; maths skills needed). – maverik May 11 '13 at 20:08
  • http://en.wikipedia.org/wiki/Taylor_series – Dmitry Zagorulkin May 11 '13 at 20:09
  • Note that the compiler may use the CPU's instruction (when available) instead of a software implementation to compute sin. – nhahtdh May 11 '13 at 20:09
  • There is no implementation -- it's original. – Hot Licks May 11 '13 at 20:10
  • No, Taylor series aren't usually used. The problem is that the error increases with the argument. One can take a higher-degree Taylor approximation and approximate the high-order term with a lower-order Chebyshev polynomial approximation. Padé approximants are also possible, as is table lookup and interpolation in some regimes. – Eric Jablow May 11 '13 at 20:12
  • I really don't understand why this question was closed. I'm also interested in how trigonometric functions are computed, and I don't believe that this operation is still done so naively (with Taylor series or lookup tables). – Eddy_Em May 11 '13 at 20:15
  • @Eddy_Em There is another question asking the exact same thing. Duplicate questions only serve to clutter and divide questions and their answers, so instead it is closed and viewers are directed to the original instance of the question. – Kenogu Labz May 11 '13 at 20:16
  • I recall reading the old Fortran sources for the trig functions. Generally the algorithm was sort of "ad-hoc", first dividing the range into subranges, then dealing with each subrange with a different algorithm - Taylor series here, odd combinations of multiplies and divides there, bits of a look-up table somewhere else. I can only imagine that they've gotten more complex. – Hot Licks May 11 '13 at 20:17
  • @ZagorulkinDmitry http://lolengine.net/blog/2011/12/21/better-function-approximations – Pascal Cuoq May 11 '13 at 20:18
  • @KenoguLabz, I had skimmed that topic before adding my comment, and I didn't find a really good answer there! – Eddy_Em May 11 '13 at 20:47
  • Unfortunately, most of the answers at the duplicate are pretty poor. This is an important subject that is not widely understood, so it deserves a fresh treatment. None of the current six top-voted answers in the duplicate are correct about the implementations in OS X or iOS, for example, and I would not expect them to be correct about other high-quality implementations either. – Eric Postpischil May 11 '13 at 23:06
  • @Oleg-K: Since this question was unfortunately closed and the duplicate does not contain good answers, here is a very brief explanation for `sin(x)`: We use some prepared constants to calculate a very precise remainder for `x` divided by (in effect) 2π. Then we decide which of several prepared intervals from -π to +π the remainder is in. Then we evaluate a specially prepared polynomial, with some extra precision. The polynomial is some form of minimax polynomial designed to approximate sine with minimum error. [Here](http://edp.org/work/index.html) is some open source code for `sinf`. – Eric Postpischil May 11 '13 at 23:12
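
To make the structure Eric Postpischil describes concrete, here is a deliberately simplified C sketch (not the linked implementation, and not production-quality): the argument reduction is the naive one rather than the extended-precision reduction a real libm uses, the polynomial coefficients are plain Taylor terms rather than fitted minimax coefficients, and the names `toy_sin`, `poly_sin`, and `poly_cos` are made up for illustration.

```c
#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;

/* Short polynomials valid only for |r| <= pi/4.  A real libm would use
 * minimax coefficients (e.g. fitted with the Remez algorithm); these are
 * just truncated Taylor series, good to a few 1e-6 on this interval. */
static double poly_sin(double r)
{
    double r2 = r * r;
    return r * (1.0 + r2 * (-1.0 / 6 + r2 * (1.0 / 120 + r2 * (-1.0 / 5040))));
}

static double poly_cos(double r)
{
    double r2 = r * r;
    return 1.0 + r2 * (-0.5 + r2 * (1.0 / 24 + r2 * (-1.0 / 720)));
}

static double toy_sin(double x)
{
    /* Naive reduction: n = nearest multiple of pi/2, r = the remainder.
     * Real implementations use prepared high-precision constants so the
     * remainder stays accurate even for very large x. */
    double n = nearbyint(x * (2.0 / PI));
    double r = x - n * (PI / 2.0);

    switch ((long)n & 3) {   /* the quadrant selects sin/cos and the sign */
    case 0:  return  poly_sin(r);
    case 1:  return  poly_cos(r);
    case 2:  return -poly_sin(r);
    default: return -poly_cos(r);
    }
}

int main(void)
{
    printf("toy_sin(1.0) = %.9f   sin(1.0) = %.9f\n", toy_sin(1.0), sin(1.0));
    return 0;
}
```

Compile with `-lm`. Everything hard in a real `sin` lives in the two places this sketch glosses over: the argument reduction and the choice of polynomial coefficients.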

1 Answer


There are a couple of ways I can think of right off the bat:

  1. Lookup tables
  2. Approximation via Taylor series (which can be easily made accurate to a number of significant digits).
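
As a rough illustration of option 2 (a sketch only, not what a real library ships), a straight Taylor expansion around 0 could look like the code below; the name `taylor_sin` and the term count are arbitrary, and, as the comments on the question point out, the error grows with |x|, which is why production code reduces the argument first.

```c
#include <stdio.h>

/* Plain Taylor series for sin around 0:
 *     sin(x) = x - x^3/3! + x^5/5! - ...
 * Each term is derived from the previous one, so no factorials are
 * computed explicitly.  Only reasonable for small |x|. */
static double taylor_sin(double x, int terms)
{
    double term = x;   /* current term: x^(2k+1) / (2k+1)! */
    double sum  = x;
    for (int k = 1; k < terms; k++) {
        term *= -x * x / ((2.0 * k) * (2.0 * k + 1.0));
        sum  += term;
    }
    return sum;
}

int main(void)
{
    printf("%.12f\n", taylor_sin(0.5, 8));   /* about 0.479425538604 */
    return 0;
}
```
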
Kenogu Labz
  • For full function approximation, see Fourier Analysis. – Kenogu Labz May 11 '13 at 20:29
  • And thanks for the link. It does make me curious, though, whether you could simply use two series: one locally around 0 and one locally around PI/2? You could then simply split up your domain into regions that should be calculated by function A and regions that should be calculated by function B. I'm probably forgetting some fundamental aspect of Taylor Series, though. – Kenogu Labz May 11 '13 at 20:34
  • You can split the domain into as many intervals as you like; that is indeed one of the techniques to trade space (those polynomial coefficients need to be stored somewhere) and time (selecting the right interval and loading up the coefficients, with cache misses) for accuracy. But on any single interval, coefficients computed with the Remez algorithm will still outperform a Taylor expansion, so that's what is used. – Pascal Cuoq May 11 '13 at 20:42
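
For completeness, here is a similarly hedged sketch of the lookup-table-with-interpolation idea from option 1 and the comments above. The table size, the use of the library `sin` to fill the table, and the missing range reduction are all simplifications; a real implementation would more likely store per-interval polynomial coefficients (e.g. Remez-fitted) than raw samples.

```c
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 256
static double table[TABLE_SIZE + 1];
static const double PI = 3.14159265358979323846;

/* Precompute sin over one full period at evenly spaced points. */
static void init_table(void)
{
    for (int i = 0; i <= TABLE_SIZE; i++)
        table[i] = sin(2.0 * PI * i / TABLE_SIZE);
}

/* Linear interpolation between neighbouring table entries.
 * Expects 0 <= x < 2*pi; a real version would reduce x first. */
static double lookup_sin(double x)
{
    double pos = x / (2.0 * PI) * TABLE_SIZE;
    int    i   = (int)pos;
    double f   = pos - i;
    return table[i] + f * (table[i + 1] - table[i]);
}

int main(void)
{
    init_table();
    printf("lookup_sin(1.0) = %.6f   sin(1.0) = %.6f\n", lookup_sin(1.0), sin(1.0));
    return 0;
}
```

With 256 entries and linear interpolation this is only good to a few decimal places; growing the table or the interpolation order is exactly the space/time/accuracy trade-off described in the comments above.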