
I would like to explore the numeric accuracy of a few different implementations of my algorithm (which works using standard double-precision arithmetic). Unfortunately, in many cases I don't know the correct result in closed form, so I need a way to calculate a benchmark result using some very high-precision computations.

This is a fun project, so my constraints are: no budget for tools and, preferably, a Linux platform. I know that Mathematica offers automatic error tracking and arbitrary-precision arithmetic, but I don't have a license. Execution speed is not an issue, because these high-precision calculations are only going to be used for computing the benchmarks.

What is the best way to code these high-precision computations? I am looking for at least quad precision, but preferably even higher. My only idea so far was to use quad-precision floating-point types in C++.

– Grzenio
    I am not posting this as an answer as it is link-only, but http://www.mpfr.org offers all the arbitrary-precision floating-point primitives you should need. – Pascal Cuoq Sep 25 '14 at 09:41
  • You can find lots of solutions under the [tag:arbitrary-precision] tag, e.g. http://stackoverflow.com/questions/6414714/arbitrary-precision-arithmetic-with-gmp and http://stackoverflow.com/questions/2568446/the-best-cross-platform-portable-arbitrary-precision-math-library?rq=1 – phuclv Sep 25 '14 at 12:31

1 Answer


I find Julia great for this task. Among the advantages:

  • It is very easy to write, has a convenient REPL, and can also be used through the IPython notebook interface.
  • It comes with a native arbitrary-precision BigFloat type, which wraps the MPFR library, and a BigInt type, which wraps the GMP library (see the sketch below).
  • It is very easy to "peek under the hood" to see what other methods do, and to look at the LLVM and native assembly code.
– Simon Byrne
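
Not part of the original answer, but here is a minimal sketch of what the BigFloat approach might look like in a recent version of Julia. The 256-bit working precision and the toy series are assumptions chosen for illustration; substitute your own algorithm.

    # Sketch: run the same computation in Float64 and in 256-bit BigFloat,
    # treating the BigFloat result as the high-precision benchmark.
    # (256 bits, roughly 77 decimal digits, is an arbitrary illustrative choice.)
    setprecision(BigFloat, 256) do
        x64  = sum(1 / n^2 for n in 1:10^6)            # plain double precision
        xbig = sum(1 / BigFloat(n)^2 for n in 1:10^6)  # high-precision reference
        println("Float64:   ", x64)
        println("BigFloat:  ", xbig)
        println("abs error: ", abs(BigFloat(x64) - xbig))
    end

The pattern generalizes: evaluate the same expression once with ordinary Float64 inputs and once with BigFloat operands, and take the difference as the accuracy estimate for the double-precision implementation.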