
If you have an arbitrary-precision decimal library, but it doesn't come with any basic (e.g. the exponential function, the sine function) or special (e.g. the error function, the logarithmic integral) mathematical functions, what is the best way to implement them?

Obviously, as the value of most of these functions is irrational, it only makes sense to give the answer to a specified precision. So supposing I want to calculate erf(x) (for instance) to 50 decimal places, what is the best way to do it?

The best answer I have is to map the argument to some suitable range, and then use the Taylor series of the function to get an answer that converges (hopefully) reasonably quickly. We can use something like Taylor's theorem to bound the error term, but usually this involves comparing factorials to powers of 10 (see the worked example under "Taylor's theorem in one real variable" on the Wikipedia page for Taylor's theorem), which, whilst doable, seems long-winded.
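To make that concrete, here is a minimal sketch in Python's `decimal` module of exp(x) computed that way: reduce the argument by repeated halving, sum the series, and square the result back up. The function name, the 10 guard digits, and the term cutoff are my own assumptions for illustration, not a derived error bound (note that `prec` counts significant digits rather than decimal places):

```python
from decimal import Decimal, getcontext

def exp_taylor(x: Decimal, places: int) -> Decimal:
    getcontext().prec = places + 10        # guard digits (assumed margin)
    # Argument reduction: exp(x) = exp(x/2)**2, so halve until |x| < 1
    # and the series converges quickly.
    halvings = 0
    while abs(x) >= 1:
        x /= 2
        halvings += 1
    # Sum the Taylor series term by term until terms are negligible.
    term, total, n = Decimal(1), Decimal(1), 1
    while abs(term) > Decimal(10) ** -(places + 5):
        term *= x / n
        total += term
        n += 1
    # Undo the reduction; each squaring roughly doubles the relative error,
    # which is what the guard digits have to absorb.
    for _ in range(halvings):
        total *= total
    getcontext().prec = places
    return +total                          # unary + rounds to the new precision
```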

Also, whilst implementing these functions seems feasible, how would one handle the precision when dealing with the composition of such functions? For instance, if we wanted to calculate 1000*exp(sqrt(2)) to n decimal places, it's not immediately obvious to what level of precision we should calculate the intermediate results to get an accurate final answer.
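A common (if informal) tactic is to carry a handful of guard digits through the whole computation and round only once at the end; correctly-rounded libraries refine this into Ziv's strategy, retrying with more digits when the result lands too close to a rounding boundary. A minimal sketch with Python's `decimal` module, where the guard of 10 digits is an assumed margin rather than a derived bound:

```python
from decimal import Decimal, getcontext

def scaled_exp_sqrt2(places: int) -> Decimal:
    # Work with extra guard digits so that rounding errors in the
    # intermediate sqrt/exp/multiply do not contaminate the final digits.
    guard = 10                          # assumed margin, not a derived bound
    getcontext().prec = places + guard
    inner = Decimal(2).sqrt()           # sqrt(2) at the working precision
    result = Decimal(1000) * inner.exp()
    getcontext().prec = places          # one final rounding to the target
    return +result
```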

Does anyone know where I might begin with this problem?

MadMonty
  • I think it does not make sense at all to implement this heavy stuff yourself. If possible, you should stick to well-tested software like [mpfr](http://www.mpfr.org/mpfr-current/mpfr.html#Special-Functions), which seems to support a lot of these functions. Even if the programming language you're using is not C/C++, I would prefer to wrap this library instead of writing my own (most of the time). There are also some wrappers already available for some languages. – sascha Jun 03 '16 at 14:40
  • @sascha I completely agree. If I wanted to write something this afternoon with the desired capability, I'd use something like mpfr. But I'm interested in how one might implement something like that in the first place. Additionally, mpfr can't keep track of the precision the whole way through the computation (i.e. when intermediate values are composed, added or multiplied), so that's still an aspect you'd have to worry about yourself. – MadMonty Jun 03 '16 at 14:45
  • Implementing to 50 decimal places and to 1000 places would be two very different things. In the first case, you'd probably use extended floating-point arithmetic, and in the second arbitrary-precision arithmetic, including fast multiplies à la Karatsuba. The book by Gonnet & Baeza-Yates gives interesting insight into efficient solutions. http://users.dcc.uchile.cl/~rbaeza/handbook/hbook.html (the link gives only very short abstracts, which don't do it justice). –  Jun 03 '16 at 16:40
  • This is a big area; it would be hard to give a good answer in SO format. I'd recommend looking at the "Modern Computer Arithmetic" book by Zimmermann and Brent for some ideas, especially Chapter 4. Taylor series by themselves aren't going to be good enough: you'll want to combine them with argument-reduction techniques and a bit of error analysis to figure out what levels of intermediate precision you need. – Mark Dickinson Jun 03 '16 at 19:29
  • You might also look at the Python source for Python's `decimal` module to see how arbitrary-precision correctly-rounded `log`, `exp`, `sqrt`, etc. are implemented. It's not terribly sophisticated, but it shows how to combine argument reduction with Taylor series and integer arithmetic to give results in reasonable time across the entire domain of each function. Source can be found [here](https://github.com/python/cpython/blob/966b24071af1b320a1c7646d33474eeae057c20f/Lib/_pydecimal.py). Disclaimer: I wrote those functions. – Mark Dickinson Jun 03 '16 at 19:32
  • ... And if you speak Python, the [`mpmath` sources](http://mpmath.org) would also be worth a look. – Mark Dickinson Jun 03 '16 at 19:33

1 Answer


This topic is very broad, and each type of function is very different. A Taylor series is a last resort, and usually not usable if you also want speed. There are usually alternatives (Chebyshev approximations, ...), but each type of function requires a different approach for optimal performance/precision. For example:

  • Basic trigonometric functions are sometimes solved with the CORDIC algorithm
  • pow, exp, log can be reduced to log2, exp2
  • log2, exp2 can be built from basic arithmetic (+,-,*,/) and bit (<<,>>,&,|,^) operations, using sqrt for the fractional bits
  • sqrt can be computed by binary search without multiplication (see the sketch after this list)
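As an illustration of the last bullet, here is a minimal Python sketch of the classic bit-by-bit integer square root, which needs only shifts, additions, subtractions and comparisons (no general multiplication); the function name is mine:

```python
def isqrt_shift_add(n: int) -> int:
    # Bit-by-bit integer square root: only shifts, adds and compares.
    if n < 0:
        raise ValueError("square root of negative number")
    bit = 1 << (n.bit_length() & ~1)   # a power of 4 near the top of n
    while bit > n:
        bit >>= 2
    rem, res = n, 0
    while bit:
        if rem >= res + bit:
            rem -= res + bit
            res = (res >> 1) + bit
        else:
            res >>= 1
        bit >>= 2
    return res                         # floor(sqrt(n))
```

Scaling the input by 10**(2*places) before the call and dividing the result by 10**places turns this into a fixed-point decimal square root.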

To get a better understanding of how much individual functions differ in arbitrary precision, see my earlier questions on implementing specific functions.

So you would be much better off asking about a specific function's implementation instead of an arbitrary approach for any function.

The bit width of the result is usually truncated to some reasonable size based on the properties of the function. For example, the bit width of a product is the sum of the bit widths of the operands, while +,- can be at most 1 bit larger than the bigger operand, etc.
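These growth rules are easy to check with Python's unbounded integers (the operand sizes below are arbitrary):

```python
a = (1 << 1000) - 1            # a 1000-bit operand
b = (1 << 600) - 1             # a 600-bit operand
print((a * b).bit_length())    # 1600 -> product width = sum of operand widths
print((a + b).bit_length())    # 1001 -> at most 1 bit more than the wider operand
```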

Do not forget, while choosing algorithms, that there is a big difference between an algorithm's theoretical complexity and the actual complexity of an implementation, especially with arbitrary-precision numbers, where even a simple operation like addition is no longer O(1) (adding two n-digit numbers costs O(n)).

Spektre