37

Are the definitions of functions like sqrt(), sin(), cos(), tan(), log(), exp() (from math.h/cmath) available anywhere?

I just want to know how they work.

Pythagoras of Samos
    fdlibm provides implementations of all that stuff, and is open-source, stand-alone, fairly readable. Not the simplest possible implementations, since they're designed to provide decent performance. – Steve Jessop Dec 27 '10 at 19:12
  • possible duplicate of [How does C compute sin() and other math functions?](http://stackoverflow.com/questions/2284860/how-does-c-compute-sin-and-other-math-functions) – Jason Orendorff Dec 28 '10 at 01:49

7 Answers

67

This is an interesting question, but reading sources of efficient libraries won't get you very far unless you happen to know the method used.

Here are some pointers to help you understand the classical methods. The descriptions below are rough rather than precise, and they cover only the classical approaches; particular implementations can use other methods.

  • Lookup tables are frequently used
  • Trigonometric functions are often implemented via the CORDIC algorithm (either on the CPU or in a library). Note that sine and cosine are usually computed together; I have always wondered why the standard C library doesn't provide a sincos function.
  • Square roots use Newton's method with some clever implementation tricks: you may find somewhere on the web an extract from the Quake source code with a mind-boggling 1 / sqrt(x) implementation. A minimal sketch of the Newton iteration follows after this list.
  • Exponentials and logarithms use exp(2^n x) = exp(x)^(2^n) and log2(2^n x) = n + log2(x) to reduce the argument to a value close to zero (close to one for log), then use rational function approximation (usually Padé approximants). Note that this exact same trick gets you matrix exponentials and logarithms. According to @Stephen Canon, modern implementations favor polynomial approximation over rational function approximation, because division is much slower than multiplication on current hardware.
  • The other functions can be deduced from these ones. Implementations may provide specialized routines.
  • pow(x, y) = exp(y * log(x)), so pow should not be used when y is an integer: repeated multiplication is cheaper and avoids the exp/log round trip.
  • hypot(x, y) = abs(x) * sqrt(1 + (y/x)^2) when |x| >= |y| (and hypot(y, x) otherwise), to avoid overflow in the intermediate squares; the sketch after this list shows this trick as well. atan2 is computed from atan applied to the ratio of its arguments, plus a little quadrant logic. These functions are the building blocks for complex arithmetic.
  • For other transcendental functions (gamma, erf, Bessel, ...), please consult the excellent book Numerical Recipes, 3rd edition for some ideas. The good old Abramowitz & Stegun is also useful; there is a newer edition at http://dlmf.nist.gov/.
  • Techniques like Chebyshev approximation, continued fraction expansion (actually related to Padé approximants) or power series economization are used in more complex functions (if you happen to read source code for erf, bessel or gamma for instance). I doubt they have a real use in bare-metal simple math functions, but who knows. Consult Numerical Recipes for an overview.
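
To make the sqrt and hypot bullets above concrete, here is a minimal sketch in C of the two classical tricks: Newton's iteration with an exponent-based first guess, and factoring the larger argument out of hypot. This is illustration only, not how a production libm is written (no correct rounding, no errno or exception handling), and the helper names newton_sqrt / hypot_sketch are made up for this example.

#include <math.h>   /* frexp, ldexp, fabs */

/* Newton's method for sqrt(a): iterate x <- (x + a/x) / 2.
 * The reduction a = m * 2^e gives the initial guess 2^((e+1)/2),
 * within a factor of ~2 of the result, so a handful of quadratically
 * converging iterations reaches double precision. */
static double newton_sqrt(double a)
{
    if (a < 0.0)
        return 0.0 / 0.0;            /* NaN: sqrt of a negative is a domain error */
    if (a == 0.0)
        return a;                    /* preserves the sign of zero */

    int e;
    (void)frexp(a, &e);              /* a = m * 2^e with 0.5 <= m < 1 */
    double x = ldexp(1.0, (e + 1) / 2);

    for (int i = 0; i < 10; ++i) {
        double next = 0.5 * (x + a / x);
        if (next == x)
            break;                   /* fixed point reached: converged */
        x = next;
    }
    return x;
}

/* hypot(x, y) = |x| * sqrt(1 + (y/x)^2) with |x| >= |y|: the ratio is <= 1,
 * so squaring it cannot overflow even when x*x or y*y would. */
static double hypot_sketch(double x, double y)
{
    double ax = fabs(x), ay = fabs(y);
    if (ax < ay) {                   /* make ax the larger magnitude */
        double t = ax; ax = ay; ay = t;
    }
    if (ax == 0.0)
        return 0.0;                  /* both arguments are zero */
    double r = ay / ax;
    return ax * newton_sqrt(1.0 + r * r);
}

For comparison, fdlibm's sqrt does the whole computation in integer arithmetic on the bit pattern, producing one result bit per step, precisely so the rounding can be made correct; the sketch above trades that away for readability.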
Alexandre C.
    +1 for actually explaining the math. I felt a lot better when I realized trig functions were just truncated Taylor series expansions. Otherwise the approximations look like serious magic! – Ben Jackson Dec 27 '10 at 20:27
    @Ben: good libraries typically do not use truncated taylor series -- other polynomial approximations (Minimax, Chebyshev, Padé) have much more desirable error characteristics and allow one to get the same accuracy with fewer arithmetic operations. – Stephen Canon Dec 27 '10 at 23:45
    @Alexandre: rational approximation has mostly gone out of fashion for `exp` and `log` because multiplication and addition are so much faster than division on modern hardware. It is still used for stiffer functions, however. – Stephen Canon Dec 27 '10 at 23:46
  • Abramowitz & Stegun is the colloquial name of the NIST publication "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables" from 1964. Please link to the 2010 edition "NIST Digital Library of Mathematical Functions" freely available online at http://dlmf.nist.gov/ – Janus Dec 28 '10 at 03:48
    @Janus "The" Abramowitz is such a classic ... I almost shed a few tears on finding it cited on SO ... – Dr. belisarius Dec 28 '10 at 04:39
  • @belisarius If you haven't already, go and check out the new version I link to. Very nice to always have the authoritative reference available, although I must admit I haven't checked in detail how much is changed. – Janus Dec 28 '10 at 05:19
  • @Janus Of course I did! My comment was just a nostalgic one :) – Dr. belisarius Dec 28 '10 at 05:36
  • @Janus: The NIST dlmf references Abramowitz & Stegun for the numeric tables, or numerical coefficients that you sometimes crucially need (or eg. are too lazy to devise a good scheme to compute them inductively) – Alexandre C. Dec 28 '10 at 17:02
21

Every implementation may be different, but you can check out one implementation in the source code of glibc (the GNU C library).

edit: Google Code Search has been taken offline, so the old link I had goes nowhere.

The sources for glibc's math library are located here:

http://sourceware.org/git/?p=glibc.git;a=tree;f=math;h=3d5233a292f12cd9e9b9c67c3a114c64564d72ab;hb=HEAD

wkl
7

Have a look at how glibc implements various math functions; it is full of magic, approximation and assembly.

ismail
5

Definitely take a look at the fdlibm sources. They're nice because the fdlibm library is self-contained, each function is well-documented with detailed explanations of the mathematics involved, and the code is immensely clear to read.

Daniel Trebbien
4

Having looked a lot at math code, I would advise against looking at glibc: the code is often quite difficult to follow and depends a lot on glibc magic. The math library in FreeBSD is much easier to read, even if it is sometimes a bit slower (but not by much).

For complex functions, the main difficulty is the border cases: correct NaN/Inf/0 handling is already difficult for real functions, and it is a nightmare for complex functions. The C99 standard defines many corner cases; some functions easily have 10-20 of them. You can look at Annex G of the up-to-date C99 standard document to get an idea. There is also a difficulty with long double, because its format is not standardized; in my experience, you should expect quite a few bugs with long double. Hopefully, the upcoming revised version of IEEE 754 with extended precision will improve the situation.
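
As a taste of what those corner cases look like for the real functions (Annex F of C99; Annex G adds the complex ones on top of that), here is a small probe. How many of these a given libm actually gets right varies, which is exactly the difficulty described above.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* All of these results are pinned down by C99 Annex F for conforming implementations. */
    printf("pow(NAN, 0.0)        = %g\n", pow(NAN, 0.0));        /* 1, even with a NaN base */
    printf("hypot(INFINITY, NAN) = %g\n", hypot(INFINITY, NAN)); /* +inf, even with a NaN argument */
    printf("atan2(0.0, 0.0)      = %g\n", atan2(0.0, 0.0));      /* +0, not a domain error */
    printf("sqrt(-0.0)           = %g\n", sqrt(-0.0));           /* -0 */
    printf("log(0.0)             = %g\n", log(0.0));             /* -inf, raises divide-by-zero */
    return 0;
}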

David Cournapeau
  • Good point about the corner cases. They can easily become bottlenecks in some cases (the implementation of `ldexp` on MSVC renders the function pretty much useless for instance) – Alexandre C. Jan 05 '12 at 19:53
0

Most modern hardware includes floating-point units that implement these functions very efficiently.
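
For instance, sqrt is a single instruction on x86 (sqrtsd), and the SSE2 intrinsics let you call it directly. A tiny illustration, assuming an x86 target with SSE2 (the function name hw_sqrt is made up here):

#include <emmintrin.h>   /* SSE2 intrinsics */

/* Computes sqrt(x) with the hardware square-root instruction.
 * Unlike the libm sqrt(), this does no errno handling at all. */
static double hw_sqrt(double x)
{
    __m128d v = _mm_set_sd(x);            /* load x into the low lane */
    return _mm_cvtsd_f64(_mm_sqrt_pd(v)); /* hardware sqrt, extract the result */
}

Compilers typically emit the same instruction for a plain sqrt() call once error handling can be skipped (for example with -fno-math-errno), so there is rarely a reason to write this by hand.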

David Heffernan
-2

Usage: root(number,root,depth)

Example: root(16,2) == sqrt(16) == 4
Example: root(16,2,2) == sqrt(sqrt(16)) == 2
Example: root(64,3) == 4

Implementation in C#:

double root(double number, double root, double depth = 1)
{
    return Math.Pow(number, Math.Pow(root, -depth));
}

Usage: Sqrt(Number,depth)

Example: Sqrt(16) == 4
Example: Sqrt(8,2) == sqrt(sqrt(8))

double Sqrt(double number, double depth = 1)
{
    return root(number, 2, depth);
}


Imk0tter
  • Welcome to SO, please refer to this for better presenting your question/answer: https://meta.stackoverflow.com/a/251362/3519504 – Sandeep Kumar May 09 '20 at 22:41